Agent Architectures for Compliance
NASA Astrophysics Data System (ADS)
Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua
A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.
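The contrast between rule-based and principle-based compliance can be sketched in code. The classes and norms below are invented for illustration and are not the paper's formalism; the point is only that the principle-based agent needs a consequence model where the rule-based agent needs only a violation list.

```python
# Illustrative sketch (hypothetical names, not the paper's formalism):
# two cognitive architectures for norm compliance.

class RuleBasedAgent:
    """Complies by avoiding explicitly forbidden actions."""
    def __init__(self, forbidden):
        self.forbidden = set(forbidden)

    def permitted(self, action):
        return action not in self.forbidden

class PrincipleBasedAgent:
    """Complies by reflecting on the consequences of an action."""
    def __init__(self, consequence_model, principle):
        self.consequence_model = consequence_model  # action -> predicted outcome
        self.principle = principle                  # outcome -> acceptable?

    def permitted(self, action):
        return self.principle(self.consequence_model(action))

# A rule-based agent needs only the violation list; a principle-based agent
# needs a model of outcomes -- hence different information needs for auditors.
rule_agent = RuleBasedAgent(forbidden={"under_declare"})
principle_agent = PrincipleBasedAgent(
    consequence_model=lambda a: {"under_declare": "tax_loss"}.get(a, "neutral"),
    principle=lambda outcome: outcome != "tax_loss",
)
```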
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point of care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method, which allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge that specific agents require to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently, it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
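As a rough illustration of the blackboard-and-plugin channel structure (not the CSP formalism itself), the sketch below uses Python queues as stand-ins for CSP channels; all names are invented.

```python
# One queue pair stands in for a CSP channel between the blackboard
# and a single plugin. Names are illustrative only.
import queue

class Channel:
    """A channel between the blackboard and one plugin."""
    def __init__(self):
        self.to_plugin = queue.Queue()
        self.to_blackboard = queue.Queue()

class Blackboard:
    def __init__(self, plugin_names):
        self.channels = {name: Channel() for name in plugin_names}

    def publish(self, obj):
        # The blackboard communicates with every plugin over its own channel.
        for ch in self.channels.values():
            ch.to_plugin.put(obj)

    def collect(self):
        results = []
        for ch in self.channels.values():
            while not ch.to_blackboard.empty():
                results.append(ch.to_blackboard.get())
        return results

def run_plugin(name, ch):
    # A plugin reads from its channel, transforms the object, and replies.
    obj = ch.to_plugin.get()
    ch.to_blackboard.put((name, obj.upper()))

bb = Blackboard(["planner", "logger"])
bb.publish("task")
for name, ch in bb.channels.items():
    run_plugin(name, ch)
```

In the CSP model these interactions would be expressed as synchronizing processes rather than buffered queues; the queue version only conveys the topology.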
Internet-enabled collaborative agent-based supply chains
NASA Astrophysics Data System (ADS)
Shen, Weiming; Kremer, Rob; Norrie, Douglas H.
2000-12-01
This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the architecture is well suited to managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture with the basic communication and cooperation services, domain-independent components, prototypes, and mechanisms is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.
Study on the E-commerce platform based on the agent
NASA Astrophysics Data System (ADS)
Fu, Ruixue; Qin, Lishuan; Gao, Yinmin
2011-10-01
To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform system based on agents and an ontology is introduced; it includes three major types of agent, an ontology, and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and the agility of the electronic commerce platform. To illustrate the architecture, a simulation has been carried out; the results imply that the architecture provides a very efficient way to design and implement a flexible, distributed, open, and intelligent electronic commerce platform system that solves the problem of dynamic integration in e-commerce. The objective of this paper is to describe the architecture of the electronic commerce platform system and the way agents and ontologies support it.
A multi-agent architecture for geosimulation of moving agents
NASA Astrophysics Data System (ADS)
Vahidnia, Mohammad H.; Alesheikh, Ali A.; Alavipanah, Seyed Kazem
2015-10-01
In this paper, a novel architecture is proposed in which an axiomatic derivation system in the form of first-order logic facilitates declarative explanation and spatial reasoning. Simulation of environmental perception and interaction between autonomous agents is designed with a geographic belief-desire-intention and a request-inform-query model. The architecture has a complementary quantitative component that supports collaborative planning based on the concepts of equilibrium and game theory. This new architecture presents a departure from current best practice in geographic agent-based modelling. Implementation tasks are discussed in some detail, as well as scenarios for fleet management and disaster management.
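A belief-desire-intention (BDI) deliberation step for a moving agent can be sketched compactly. The grid movement and commitment rule below are invented for the example and are much simpler than a geographic BDI model.

```python
# A toy BDI cycle for a moving agent: perceive -> deliberate -> act.
# The movement rule (one grid step toward the first desire) is illustrative.

class GeoBDIAgent:
    def __init__(self, position, goal):
        self.beliefs = {"position": position}   # what the agent senses
        self.desires = [goal]                   # where it wants to be
        self.intention = None                   # the move it commits to

    def perceive(self, position):
        self.beliefs["position"] = position

    def deliberate(self):
        # Commit to the single move that reduces distance to the first desire.
        x, y = self.beliefs["position"]
        gx, gy = self.desires[0]
        step = (1 if gx > x else -1 if gx < x else 0,
                1 if gy > y else -1 if gy < y else 0)
        self.intention = step
        return step

    def act(self):
        dx, dy = self.deliberate()
        x, y = self.beliefs["position"]
        self.perceive((x + dx, y + dy))
        return self.beliefs["position"]

agent = GeoBDIAgent(position=(0, 0), goal=(2, 1))
```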
2007-12-01
…and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality. The work prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate…
A Distributed Intelligent E-Learning System
ERIC Educational Resources Information Center
Kristensen, Terje
2016-01-01
An E-learning system based on a multi-agent system (MAS) architecture combined with the Dynamic Content Manager (DCM) model of E-learning is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…
SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases
NASA Astrophysics Data System (ADS)
Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.
One of the main attacks to ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damages both in the commercial aspect and in the user’s confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of the physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
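The pattern-plus-anomaly strategy can be illustrated with a toy detector. This is not the SiC implementation: the signatures, the anomaly measure, and the threshold below are all invented for the example.

```python
# A toy hybrid SQL-injection check: signature matching plus a crude
# anomaly score. All patterns and thresholds are illustrative only.
import re

SIGNATURES = [r"(?i)\bunion\b.*\bselect\b", r"(?i)\bor\b\s+1\s*=\s*1", r"--"]

def signature_match(query):
    """Pattern-based detection: known attack fragments."""
    return any(re.search(p, query) for p in SIGNATURES)

def anomaly_score(query, baseline_len=40):
    """Anomaly-based detection: deviation from a learned 'normal' profile
    (here crudely reduced to query length and quote density)."""
    return abs(len(query) - baseline_len) / baseline_len + query.count("'") / 4

def is_suspicious(query, threshold=1.0):
    return signature_match(query) or anomaly_score(query) > threshold
```

A case-based reasoning agent, as in the chapter, would instead retrieve and adapt stored cases of past queries; the two detection modes above only show why combining them covers both known and novel attacks.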
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, Steven Y.; Spires, Shannon V.
There are currently two proposed standards for agent communication languages, namely, KQML (Finin, Lobrou, and Mayfield 1994) and the FIPA ACL. Neither standard has yet achieved primacy, and neither has been evaluated extensively in an open environment such as the Internet. It seems prudent therefore to design a general-purpose agent communications facility for new agent architectures that is flexible yet provides an architecture that accepts many different specializations. In this paper we exhibit the salient features of an agent communications architecture based on distributed metaobjects. This architecture captures design commitments at a metaobject level, leaving the base-level design and implementation up to the agent developer. The scope of the metamodel is broad enough to accommodate many different communication protocols, interaction protocols, and knowledge sharing regimes through extensions to the metaobject framework. We conclude that with a powerful distributed object substrate that supports metaobject communications, a general framework can be developed that will effectively enable different approaches to agent communications in the same agent system. We have implemented a KQML-based communications protocol and have several special-purpose interaction protocols under development.
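The shape of a KQML-style exchange can be sketched as follows. The field names follow common KQML parameters (performative, sender, receiver, content, language, ontology), but the dispatcher and content encoding are invented for the example.

```python
# A minimal KQML-style message and dispatcher. 'tell' stores a fact,
# 'ask-one' retrieves it. The content encoding (key=value) is invented.
from dataclasses import dataclass

@dataclass
class KQMLMessage:
    performative: str   # speech act, e.g. "ask-one", "tell", "reply"
    sender: str
    receiver: str
    content: str
    language: str = "KIF"
    ontology: str = "default"

def handle(msg, knowledge):
    if msg.performative == "tell":
        key, value = msg.content.split("=")
        knowledge[key] = value
        return None
    if msg.performative == "ask-one":
        return KQMLMessage("reply", msg.receiver, msg.sender,
                           knowledge.get(msg.content, "unknown"))
    raise ValueError(f"unsupported performative: {msg.performative}")

kb = {}
handle(KQMLMessage("tell", "a1", "a2", "temp=21"), kb)
reply = handle(KQMLMessage("ask-one", "a1", "a2", "temp"), kb)
```

In the metaobject architecture described above, this message structure and dispatch policy would live at the metaobject level, so a different protocol (e.g., FIPA ACL performatives) could be swapped in without touching base-level agent code.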
Multi-Agent Architecture with Support to Quality of Service and Quality of Control
NASA Astrophysics Data System (ADS)
Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique
Multi-Agent Systems (MAS) are one of the most suitable frameworks for the implementation of intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity implied by cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly used to evaluate the efficiency of the communications and of the control loop. Agents can use these quality measures to take a wide range of decisions, such as selecting a suitable placement on a control node or changing the workload to save energy. This article describes the architecture of a multi-agent system that provides support for QoS and QoC parameters to optimize the system. The architecture uses a Publish-Subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the Publish-Subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.
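A publish-subscribe loop with a deadline-style quality check can be sketched in a few lines. This is not the DDS API; the broker, the callback signature, and the deadline rule are invented to show how a quality measurement could reach the subscribing agent.

```python
# A minimal publish-subscribe broker with a deadline-style QoS check.
# A sample older than its deadline when delivered counts as a miss.

class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, value, timestamp, now, deadline):
        on_time = (now - timestamp) <= deadline
        for cb in self.subscribers.get(topic, []):
            cb(value, on_time)   # the quality verdict travels with the data
        return on_time

missed = []

def control_callback(value, on_time):
    # An agent could use misses to relocate itself or shed load.
    if not on_time:
        missed.append(value)

broker = Broker()
broker.subscribe("sensor/temp", control_callback)
ok1 = broker.publish("sensor/temp", 21.5, timestamp=0.00, now=0.05, deadline=0.1)
ok2 = broker.publish("sensor/temp", 22.0, timestamp=0.00, now=0.30, deadline=0.1)
```

In DDS terms, the deadline check corresponds roughly to a deadline QoS policy on the topic; here it is folded into the broker for brevity.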
Multi-Agent Diagnosis and Control of an Air Revitalization System for Life Support in Space
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Kowing, Jeffrey; Nieten, Joseph; Graham, Jeffrey S.; Schreckenghost, Debra; Bonasso, Pete; Fleming, Land D.; MacMahon, Matt; Thronesbery, Carroll
2000-01-01
An architecture of interoperating agents has been developed to provide control and fault management for advanced life support systems in space. In this adjustable autonomy architecture, software agents coordinate with human agents and provide support in novel fault management situations. This architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and diagnosis. MIR identifies novel recovery configurations and the set of commands required for the recovery. The 3T procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. User interface extensions have been developed to support human monitoring of both 3T and MIR data and activities. This architecture has been demonstrated performing control and fault management for an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed.
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar) is selected for the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain level model specifications in order to generate software artifacts. The software artifacts generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
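The component-to-artifacts generation step can be given a minimal flavour in code. The component fields and output templates below are invented; the actual Cougaar MDA works from UML structured components and a metamodel, not from a dictionary.

```python
# A toy generator: one structured component description is turned into
# multiple artifacts (code, documentation, test stub), echoing the
# Cougaar/Java + documentation + test cases split described above.

def generate_artifacts(component):
    name = component["name"]
    handles = ", ".join(component["handles"])
    java = (f"public class {name}Plugin extends ComponentPlugin {{\n"
            f"    // handles: {handles}\n"
            f"}}\n")
    doc = f"# {name}\nHandles: {handles}\n"
    test = f"// TestCase: verify {name}Plugin subscribes to {component['handles'][0]}\n"
    return {"code": java, "doc": doc, "test": test}

artifacts = generate_artifacts({"name": "Inventory", "handles": ["Order", "Shipment"]})
```

The value of the metamodel in the real approach is that this mapping is specified once, at the model level, rather than hand-coded per component as here.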
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1995-01-01
This paper presents an architecture for satellites regarded as intercommunicating agents. The architecture is based upon a postmodern paradigm of artificial intelligence in which represented knowledge is regarded as text, inference procedures are regarded as social discourse and decision making conventions and the semantics of representations are grounded in the situated behaviour and activity of agents. A particular protocol is described for agent participation in distributed search and retrieval operations conducted as joint activities.
2012-09-30
…fitness, or objective function. The structure of the SoS Agent is depicted in Figure 10, which shows the SoS Agent architecture: an initial SoS architecture (which the agent receives, provides, and updates capabilities for) feeding a fuzzy inference engine (FAM) whose inputs are affordability, flexibility, performance, and robustness.
Knowledge Management in Role Based Agents
NASA Astrophysics Data System (ADS)
Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz
In the multi-agent system literature, the role concept is increasingly researched as an abstraction to scope the beliefs, norms, and goals of agents and to shape the relationships of the agents in the organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture has also been implemented and incorporated into the SEAGENT multi-agent system development framework.
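A role-granulated knowledge base can be sketched as beliefs scoped per role, with the agent's active roles determining which granules are visible. The conflict rule below (agent-level beliefs win over role beliefs) is an assumption for illustration, not SEAGENT's actual resolution strategy.

```python
# A toy role-granulated knowledge base. Each role owns a belief granule;
# queries consult the agent's own ("self") granule first, then active roles.

class RoleKnowledgeBase:
    def __init__(self):
        self.granules = {"self": {}}   # role name -> beliefs

    def adopt_role(self, role):
        self.granules.setdefault(role, {})

    def assert_belief(self, role, key, value):
        self.granules[role][key] = value

    def query(self, key, active_roles):
        # Assumed conflict rule: agent-level beliefs override role beliefs.
        if key in self.granules["self"]:
            return self.granules["self"][key]
        for role in active_roles:
            if key in self.granules.get(role, {}):
                return self.granules[role][key]
        return None

kb = RoleKnowledgeBase()
kb.adopt_role("auctioneer")
kb.assert_belief("auctioneer", "min_bid", 10)
kb.assert_belief("self", "budget", 100)
```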
Formalism Challenges of the Cougaar Model Driven Architecture
NASA Technical Reports Server (NTRS)
Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.
2004-01-01
The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.
A practical approach for active camera coordination based on a fusion-driven multi-agent system
NASA Astrophysics Data System (ADS)
Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.
2014-04-01
In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.
Brahms Mobile Agents: Architecture and Field Tests
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2002-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.
A Coupled Simulation Architecture for Agent-Based/Geohydrological Modelling
NASA Astrophysics Data System (ADS)
Jaxa-Rozen, M.
2016-12-01
The quantitative modelling of social-ecological systems can provide useful insights into the interplay between social and environmental processes, and their impact on emergent system dynamics. However, such models should acknowledge the complexity and uncertainty of both of the underlying subsystems. For instance, the agent-based models which are increasingly popular for groundwater management studies can be made more useful by directly accounting for the hydrological processes which drive environmental outcomes. Conversely, conventional environmental models can benefit from an agent-based depiction of the feedbacks and heuristics which influence the decisions of groundwater users. From this perspective, this work describes a Python-based software architecture which couples the popular NetLogo agent-based platform with the MODFLOW/SEAWAT geohydrological modelling environment. This approach enables users to implement agent-based models in NetLogo's user-friendly platform, while benefiting from the full capabilities of MODFLOW/SEAWAT packages or reusing existing geohydrological models. The software architecture is based on the pyNetLogo connector, which provides an interface between the NetLogo agent-based modelling software and the Python programming language. This functionality is then extended and combined with Python's object-oriented features, to design a simulation architecture which couples NetLogo with MODFLOW/SEAWAT through the FloPy library (Bakker et al., 2016). The Python programming language also provides access to a range of external packages which can be used for testing and analysing the coupled models, which is illustrated for an application of Aquifer Thermal Energy Storage (ATES).
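The coupling pattern described above can be reduced to a runnable skeleton. Stub models stand in for the NetLogo side (which pyNetLogo would drive) and the MODFLOW/SEAWAT side (which FloPy would drive), since both require external installations; only the alternating step-and-exchange structure is the point, and all numbers are invented.

```python
# Stub models illustrating the social/environmental coupling loop.
# Real code would call pyNetLogo (command/report) and FloPy here.

class StubAgentModel:
    """Stands in for the NetLogo side: agents decide how much to pump."""
    def __init__(self, demand):
        self.demand = demand

    def step(self, head):
        # Heuristic feedback: agents pump less when the water table is low.
        return self.demand if head > 5.0 else self.demand * 0.5

class StubGroundwaterModel:
    """Stands in for the MODFLOW side: head declines with pumping."""
    def __init__(self, head):
        self.head = head

    def step(self, pumping):
        self.head -= 0.1 * pumping
        return self.head

def run_coupled(agents, aquifer, years):
    heads = []
    for _ in range(years):
        pumping = agents.step(aquifer.head)   # social subsystem acts
        heads.append(aquifer.step(pumping))   # environmental subsystem responds
    return heads

heads = run_coupled(StubAgentModel(demand=10.0), StubGroundwaterModel(head=6.0), 3)
```

The design choice this mirrors is that neither subsystem embeds the other: each is stepped through a narrow interface, so an existing geohydrological model can be swapped in behind the same loop.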
The Study on Collaborative Manufacturing Platform Based on Agent
NASA Astrophysics Data System (ADS)
Zhang, Xiao-yan; Qu, Zheng-geng
To address the knowledge-intensive trend in collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
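The energy-based localization step can be illustrated with a small steepest-descent sketch: received acoustic energy decays with squared distance, and the source estimate descends the squared-error surface, stopping when the gradient flattens. The sensor layout, source energy, decay model, and step sizes are invented for the example, not taken from the paper.

```python
# Illustrative energy-based localization by steepest descent.
# Model assumption: received energy = s0 / (distance^2 + 1).

def received_energy(source, sensor, s0=100.0):
    dx, dy = source[0] - sensor[0], source[1] - sensor[1]
    return s0 / (dx * dx + dy * dy + 1.0)  # +1 avoids the singularity

def localize(sensors, readings, start=(0.0, 0.0), lr=0.001, tol=1e-6, max_iter=5000):
    x, y = start
    for _ in range(max_iter):
        gx = gy = 0.0
        for (sx, sy), z in zip(sensors, readings):
            e = received_energy((x, y), (sx, sy))
            r = e - z                                  # residual
            d2 = (x - sx) ** 2 + (y - sy) ** 2 + 1.0
            # chain rule: d(r^2)/dx = 2r * de/dx, with de/dx = -2e(x-sx)/d2
            gx += 2 * r * (-2 * e * (x - sx) / d2)
            gy += 2 * r * (-2 * e * (y - sy) / d2)
        x, y = x - lr * gx, y - lr * gy
        if gx * gx + gy * gy < tol:   # terminate when the gradient flattens
            break
    return x, y

sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
truth = (1.0, 2.0)
readings = [received_energy(truth, s) for s in sensors]
estimate = localize(sensors, readings, start=(2.0, 2.0))
```

The paper's adaptive termination condition responds to the measurement environment; the fixed gradient-norm threshold above is a simplified stand-in.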
A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care
NASA Astrophysics Data System (ADS)
Tapia, Dante I.; Rodríguez, Sara; Corchado, Juan M.
This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing the assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, mobility, etc., all of which are concepts defined by AmI. ALZ-MAS makes use of a service-oriented multi-agent architecture, called the flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that an SOA approach is adequate to build distributed and highly dynamic AmI-based multi-agent systems.
ERIC Educational Resources Information Center
Ahmed, Iftikhar; Sadeq, Muhammad Jafar
2006-01-01
Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…
Formal Assurance for Cognitive Architecture Based Autonomous Agent
NASA Technical Reports Server (NTRS)
Bhattacharyya, Siddhartha; Eskridge, Thomas; Neogi, Natasha; Carvalho, Marco
2017-01-01
Autonomous systems are designed and deployed in different modeling paradigms. These environments focus on specific concepts in designing the system. We focus our effort on the use of cognitive architectures to design autonomous agents that collaborate with humans to accomplish tasks in a mission. Our research focuses on introducing formal assurance methods to verify the behavior of agents designed in Soar by translating the agent to the formal verification environment Uppaal.
A New Approach To Secure Federated Information Bases Using Agent Technology.
ERIC Educational Resources Information Center
Weippl, Edgar; Klug, Ludwig; Essmayr, Wolfgang
2003-01-01
Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…
2013-03-29
…Assessor that is in the SoS agent. Figure 31 shows the fuzzy assessor for the SoS agent for assessment of SoS architecture: a fuzzy rules subsystem takes affordability, flexibility, performance, and robustness as inputs and produces architecture quality as output.
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc.) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., multi-objective optimization, genetic algorithms, etc.) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature, as well as on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. Kinesin nanomotor is considered as a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make the decision internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of internal decision-making process of kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was acceptable by the architectural DFA model of the nanomotor and also in good agreement with its natural behaviour. The internal agent-based architectural model of kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor interactions with its cell. Thus, our developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling of internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation towards the concept of bio-nanoswarms and next phases of the bio-nanorobotic systems development.
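The DFA-to-language idea can be made concrete with a toy automaton: internal nanomotor states as DFA states, chemical events as input symbols. The state names and transitions below are invented for illustration and are not taken from the paper's model.

```python
# A toy DFA in the spirit of the approach above. An accepted word is a
# sequence of events that returns the motor to its two-heads-bound state.

KINESIN_DFA = {
    "start": "two_heads_bound",
    "accept": {"two_heads_bound"},       # a completed mechanical cycle
    "delta": {
        ("two_heads_bound", "atp"): "one_head_bound",   # ATP binding frees a head
        ("one_head_bound", "step"): "two_heads_bound",  # forward step re-binds
        ("one_head_bound", "release"): "detached",      # premature detachment
    },
}

def accepts(dfa, word):
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"].get((state, symbol))
        if state is None:           # undefined transition: reject
            return False
    return state in dfa["accept"]

# The regular language accepted here is (atp step)*: complete walking cycles.
```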
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe an approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem-solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and the interaction between reactive and deliberative decision making. We conclude with related work and current status.
A real-time architecture for time-aware agents.
Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V
2004-06-01
This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time-aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n²) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
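The abstract does not give the ART layer's priority function, so as a hedged illustration the sketch below schedules non-preemptive tasks with an earliest-deadline-first rule, one common online dynamic-priority policy.

```python
import heapq

# Illustrative online dynamic-priority scheduler (non-preemptive EDF).
# This is a stand-in, not the paper's actual algorithm.
def schedule(tasks):
    """tasks: list of (release, deadline, duration) tuples.
    Returns (release, start) pairs in execution order."""
    pending = sorted(tasks, key=lambda t: t[0])  # by release time
    ready, starts, clock, i = [], [], 0, 0
    while i < len(pending) or ready:
        # Admit every task released by the current clock.
        while i < len(pending) and pending[i][0] <= clock:
            release, deadline, duration = pending[i]
            heapq.heappush(ready, (deadline, release, duration))
            i += 1
        if not ready:
            clock = pending[i][0]  # idle until the next release
            continue
        # Run the ready task with the earliest deadline.
        deadline, release, duration = heapq.heappop(ready)
        starts.append((release, clock))
        clock += duration
    return starts
```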
Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment
NASA Astrophysics Data System (ADS)
Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro
The need to implement distributed systems is growing significantly, as they have proven effective in keeping organizations flexible in a highly demanding market. Nevertheless, large technical gaps must still be addressed before significant gains are realized. We propose a distributed architecture to control Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task-assignment procedure. Results show that the technique provides a better solution.
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1990-01-01
We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
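The rate monotonic theory cited above guarantees schedulability by analytical means; its classic sufficient condition is the Liu-Layland utilization bound, sketched here.

```python
# Liu-Layland schedulability test for rate-monotonic scheduling:
# n periodic tasks are guaranteed schedulable if total utilization
# does not exceed n * (2^(1/n) - 1). The test is sufficient, not
# necessary: a set above the bound may still be schedulable.
def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound
```

For two tasks the bound is about 0.828, so a task set with utilization 0.375 passes while one with utilization 0.875 does not.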
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Nicewarner, Keith
2006-01-01
We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.
Exploration for Agents with Different Personalities in Unknown Environments
NASA Astrophysics Data System (ADS)
Doumit, Sarjoun; Minai, Ali
We present in this paper a personality-based architecture (PA) that combines elements from the subsumption architecture and reinforcement learning to find alternate solutions for problems facing artificial agents exploring unknown environments. The underlying PA algorithm is decomposed into layers according to the different (non-contiguous) stages that our agent passes through, which in turn are influenced by the sources of rewards present in the environment. The cumulative rewards collected by an agent, in addition to its internal composition, serve as factors in shaping its personality. In missions where multiple agents are deployed, our goal is to allow each of the agents to develop its own distinct personality so that the collective reaches a balanced society, which can then accumulate the largest possible amount of rewards for the agent and the society as well. The architecture is tested in a simulated matrix world which embodies different types of positive and negative rewards. Various experiments are performed to compare the performance of our algorithm with other algorithms under the same environment conditions. The use of our architecture accelerates the overall adaptation of the agents to their environment and goals by allowing the emergence of an optimal society of agents with different personalities. We believe that our approach achieves much more efficient results when compared to other, more restrictive policy designs.
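The reinforcement-learning ingredient can be sketched as a toy agent whose accumulated rewards shape its action preferences. The update rule below is plain Q-learning over a stateless bandit, a deliberate simplification of the paper's architecture; "personality" here is just the learned preference table.

```python
import random

# Toy reward-shaped agent (illustrative stand-in for the PA algorithm).
class PersonalityAgent:
    def __init__(self, actions, alpha=0.5, epsilon=0.1):
        self.q = {a: 0.0 for a in actions}   # learned action preferences
        self.alpha, self.epsilon = alpha, epsilon
        self.total_reward = 0.0              # cumulative reward

    def choose(self):
        # Epsilon-greedy: mostly follow the learned preference.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        self.total_reward += reward
        self.q[action] += self.alpha * (reward - self.q[action])
```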
NASA Astrophysics Data System (ADS)
Wattawa, Scott
1995-11-01
Offering interactive services and data in a hybrid fiber/coax cable system requires the coordination of a host of operations and business support systems. New service offerings and network growth and evolution create never-ending changes in the network infrastructure. Agent-based enterprise models provide a flexible mechanism for systems integration of service and support systems. Agent models also provide a mechanism to decouple interactive services from the network architecture. By using the Java programming language, agents may be made safe, portable, and intelligent. This paper investigates the application of the Object Management Group's Common Object Request Broker Architecture (CORBA) to the integration of a multiple-services metropolitan area network.
Managing the Evolution of an Enterprise Architecture using a MAS-Product-Line Approach
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Resinas, manuel; Sterritt, Roy; Rash, James L.
2006-01-01
We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe an approach to such a software product line using the MaCMAS agent-oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.
A Multi Agent Based Approach for Prehospital Emergency Management.
Safdari, Reza; Shoshtarian Malak, Jaleh; Mohammadzadeh, Niloofar; Danesh Shahraki, Azimeh
2017-07-01
To demonstrate an architecture that automates the prehospital emergency process and categorizes specialized care according to the situation at the right time, in order to reduce patient mortality and morbidity. Prehospital emergency processes were analyzed using existing prehospital management systems and frameworks, and the extracted processes were modeled using sequence diagrams in Rational Rose software. The system's main agents were identified and modeled via component diagrams, considering the main system actors and logically dividing business functionalities; finally, the conceptual architecture for prehospital emergency management was proposed. The proposed architecture was simulated using AnyLogic simulation software; the AnyLogic Agent Model, State Chart, and Process Model were used to model the system. Multi-agent systems (MAS) have had great success in distributed, complex, and dynamic problem-solving environments, and utilizing autonomous agents provides intelligent decision-making capabilities. The proposed architecture covers prehospital management operations. The main identified agents are: EMS Center, Ambulance, Traffic Station, Healthcare Provider, Patient, Consultation Center, National Medical Record System, and a quality-of-service monitoring agent. In a critical condition like a prehospital emergency, we are coping with sophisticated processes such as ambulance navigation, healthcare provider and service assignment, consultation, recalling a patient's past medical history through a centralized EHR system, and monitoring healthcare quality in real time. The main advantage of our work has been the utilization of a multi-agent system. Our future work will include implementing the proposed architecture and evaluating its impact on improving the quality of patient care.
NASA Astrophysics Data System (ADS)
Zhang, Wenyu; Zhang, Shuai; Cai, Ming; Jian, Wu
2015-04-01
With the development of the virtual enterprise (VE) paradigm, the usage of service-oriented architecture (SOA) is increasingly being considered for facilitating the integration and utilisation of distributed manufacturing resources. However, due to the heterogeneous nature among VEs, the dynamic nature of a VE, and the autonomous nature of each VE member, the lack of both a sophisticated coordination mechanism in the popular centralised infrastructure and semantic expressivity in the existing SOA standards makes the current centralised, syntactic service discovery method undesirable. This motivates the proposed agent-based peer-to-peer (P2P) architecture for semantic discovery of manufacturing services across VEs. Multi-agent technology provides autonomous and flexible problem-solving capabilities in dynamic and adaptive VE environments. The peer-to-peer overlay provides highly scalable coupling across decentralised VEs, each of which acts as a peer composed of multiple agents dealing with manufacturing services. The proposed architecture utilises a novel, efficient, two-stage search strategy - semantic peer discovery and semantic service discovery - to handle the complex searches of manufacturing services across VEs through fast peer filtering. The operation and experimental evaluation of the prototype system are presented to validate the implementation of the proposed approach.
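The two-stage search strategy can be sketched as peer filtering followed by service matching. The keyword-overlap matching and the data layout below are assumptions standing in for the ontology-based semantics of the actual system.

```python
# Two-stage discovery sketch: filter peers first, then match services
# only within the surviving peers. Keyword overlap stands in for
# semantic (ontology-based) matching.
def discover(peers, query):
    """peers: {name: {"domains": set, "services": {service: set_of_tags}}}.
    Returns (peer, service) pairs, best overlap first."""
    # Stage 1: semantic peer discovery (fast peer filtering).
    candidates = {name: p for name, p in peers.items()
                  if p["domains"] & query}
    # Stage 2: semantic service discovery inside candidate peers only.
    hits = []
    for name, p in candidates.items():
        for service, tags in p["services"].items():
            overlap = len(tags & query)
            if overlap:
                hits.append((overlap, name, service))
    return [(n, s) for _, n, s in sorted(hits, reverse=True)]
```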
Bouzguenda, Lotfi; Turki, Manel
2014-04-01
This paper shows how the combined use of agent and web service technologies can help in designing an architectural style for a dynamic medical Cross-Organizational Workflow (COW) management system. Medical COW aims to support collaboration between several autonomous and possibly heterogeneous medical processes distributed over different organizations (hospitals, clinics, or laboratories). Dynamic medical COW refers to occasional cooperation between these health organizations, free of structural constraints, where the medical partners involved and their number are not pre-defined. More precisely, this paper proposes a new architectural style based on agent and web service technologies to deal with two key coordination issues of dynamic COW: finding medical partners and negotiating between them. It also proposes how the dynamic medical COW management system can connect to a multi-agent system coupling a Clinical Decision Support System (CDSS) with Computerized Prescriber Order Entry (CPOE). The idea is to assist health professionals such as doctors, nurses, and pharmacists with decision-making tasks, such as determining a diagnosis or analyzing patient data, without stopping their clinical processes, in order to act in a coherent way and give care to the patient.
A Multi-Agent System for Intelligent Online Education.
ERIC Educational Resources Information Center
O'Riordan, Colm; Griffith, Josephine
1999-01-01
Describes the system architecture of an intelligent Web-based education system that includes user modeling agents, information filtering agents for automatic information gathering, and the multi-agent interaction. Discusses information management; user interaction; support for collaborative peer-to-peer learning; implementation; testing; and future…
Shen, Ying; Colloc, Joël; Jacquet-Andrieu, Armelle; Lei, Kai
2015-08-01
This research depicts the methodological steps and tools for the combined operation of case-based reasoning (CBR) and a multi-agent system (MAS) to expose the ontological application in the field of clinical decision support. The multi-agent architecture covers the whole cycle of clinical decision-making, adaptable to many medical aspects such as the diagnosis, prognosis, treatment, and therapeutic monitoring of gastric cancer. In the multi-agent architecture, the ontological agent type employs the domain knowledge to ease the extraction of similar clinical cases and provide treatment suggestions to patients and physicians. The ontological agent is used for the extension of the domain hierarchy and the interpretation of input requests. Case-based reasoning memorizes and restores experience data for solving similar problems, with the help of a matching approach and defined interfaces of ontologies. A typical case is developed to illustrate the implementation of the knowledge acquisition and restitution of medical experts.
Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)
2008-03-01
Repast (Recursive Porous Agent Simulation Toolkit; Java, Python, C#, open source) was designed for building agent-based models and simulations. Repast makes it easy for inexperienced users to build models by including a built-in simple model and providing interfaces through which menus and Python…
An Architecture for Controlling Multiple Robots
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance
2004-01-01
The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. CAMPOUT provides basic mechanistic concepts for representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.
Casuist BDI-Agent: A New Extended BDI Architecture with the Capability of Ethical Reasoning
NASA Astrophysics Data System (ADS)
Honarvar, Ali Reza; Ghasem-Aghaee, Nasser
As intelligent agents have become cleverer, more complex, and harder to control, a number of problems have been recognized. The capability of agents to make moral decisions has become an important question as intelligent agents have grown more autonomous and human-like. We propose the Casuist BDI-Agent architecture, which extends the power of the BDI architecture. The Casuist BDI-Agent architecture combines the CBR method from AI with the bottom-up casuist approach in ethics in order to add the capability of ethical reasoning to a BDI agent.
Unified web-based network management based on distributed object orientated software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI, or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Damer, Bruce; Brodsky, Boris; vanHoff, Ron
2007-01-01
A virtual-worlds presentation technique with embodied, intelligent agents is being developed as an instructional medium suitable for in situ training on long-term space flight. The system combines a behavioral element based on finite state automata, a behavior-based reactive architecture (also described as a subsumption architecture), and a belief-desire-intention agent structure. These three features are being integrated to describe a Brahms virtual environment model of extravehicular crew activity which could become a basis for procedure training during extended space flight.
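The subsumption element can be illustrated with a minimal layer stack in which a higher-priority behavior suppresses lower ones. The layer names and sensor fields below are invented for illustration.

```python
# Minimal subsumption-style layer stack (illustrative only).
def subsume(layers, sensors):
    """layers: ordered from highest priority to lowest; each layer
    returns an action or None. The first layer that fires suppresses
    all lower layers."""
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"  # default when no layer fires

# Two toy layers: safety reflex outranks goal seeking.
avoid = lambda s: "retreat" if s.get("obstacle") else None
seek = lambda s: "advance" if s.get("goal_visible") else None
```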
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loparo, Kenneth; Kolacinski, Richard; Threeanaew, Wanchat
A central goal of the work was to enable both the extraction of all relevant information from sensor data and the application of information gained from appropriate processing and fusion to operational control and decision-making at various levels of the control hierarchy, through: 1. exploiting the deep connection between information theory and the thermodynamic formalism; 2. deployment using distributed intelligent agents, with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant: that is, identifying the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm, based on the foraging behavior of ants, for eliciting this structure, and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom's 1000 MW ultra-supercritical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g., mutual information and linear correlation.
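Mutual information, one of the informational metrics named above, can be computed for a pair of discretized observation streams as follows; the ant-foraging elicitation algorithm itself is not shown.

```python
import math
from collections import Counter

# Empirical mutual information (in bits) between two equal-length
# discretized data streams; high values suggest correlative structure.
def mutual_information(xs, ys):
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi
```

Two perfectly correlated fair binary streams yield 1 bit; independent streams yield 0.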
Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2003-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.
A knowledge base architecture for distributed knowledge agents
NASA Technical Reports Server (NTRS)
Riedesel, Joel; Walls, Bryan
1990-01-01
A tuple space based object-oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed, depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms, as in the LINDA model, as well as using better-known message-passing mechanisms. An implementation of the model is presented, describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
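The LINDA-style coordination referred to above rests on a small set of tuple-space operations; a minimal sketch (non-blocking variants only) follows. The in-memory list stands in for the database-backed tuple space the abstract describes.

```python
import threading

# Minimal LINDA-style tuple space: out (publish), rd (read without
# removing), inp (take). None in a pattern acts as a wildcard field.
class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def rd(self, pattern):
        with self._lock:
            return next((t for t in self._tuples if self._match(pattern, t)),
                        None)

    def inp(self, pattern):
        with self._lock:
            for i, t in enumerate(self._tuples):
                if self._match(pattern, t):
                    return self._tuples.pop(i)
            return None

    @staticmethod
    def _match(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))
```

Agents coordinate by publishing tuples and matching on patterns rather than addressing each other directly.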
Security of Mobile Agents on the Internet.
ERIC Educational Resources Information Center
Corradi, Antonio; Montanari, Rebecca; Stefanelli, Cesare
2001-01-01
Discussion of the Internet focuses on new programming paradigms based on mobile agents. Considers the security issues associated with mobile agents and proposes a security architecture composed of a wide set of services and components capable of adapting to a variety of applications, particularly electronic commerce. (Author/LRW)
Towards an agent-oriented programming language based on Scala
NASA Astrophysics Data System (ADS)
Mitrović, Dejan; Ivanović, Mirjana; Budimac, Zoran
2012-09-01
Scala and its multi-threaded model based on actors represent an excellent framework for developing purely reactive agents. This paper presents early research on extending Scala with declarative programming constructs, which would result in a new agent-oriented programming language suitable for developing more advanced BDI agent architectures. The main advantage of the new language over many other existing solutions for programming BDI agents is the natural and straightforward integration of imperative and declarative programming constructs under a single development framework.
MonALISA, an agent-based monitoring and control system for the LHC experiments
NASA Astrophysics Data System (ADS)
Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing programs of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on the Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs, or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA are given in this paper.
A multi-agent approach to intelligent monitoring in smart grids
NASA Astrophysics Data System (ADS)
Vallejo, D.; Albusac, J.; Glez-Morcillo, C.; Castro-Schez, J. J.; Jiménez, L.
2014-04-01
In this paper, we propose a scalable multi-agent architecture to give support to smart grids, paying special attention to the intelligent monitoring of distribution substations. The data gathered by multiple sensors are used by software agents that are responsible for monitoring different aspects or events of interest, such as normal voltage values or unbalanced intensity values that can end up blowing fuses and decreasing the quality of service of end consumers. The knowledge bases of these agents have been built by means of a formal model for normality analysis that has been successfully used in other surveillance domains. The architecture facilitates the integration of new agents and can be easily configured and deployed to monitor different environments. The experiments have been conducted over a power distribution network.
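A monitoring agent's normality check can be sketched as scoring a reading against a normal band. The linear decay and the voltage range below are assumed examples, not the paper's formal normality model.

```python
# Illustrative normality-scoring agent: 1.0 inside the normal band,
# decaying linearly to 0 outside. Band values are invented examples.
class NormalityAgent:
    def __init__(self, low, high):
        self.low, self.high = low, high

    def degree_of_normality(self, value):
        if self.low <= value <= self.high:
            return 1.0
        span = self.high - self.low
        distance = (self.low - value) if value < self.low else (value - self.high)
        return max(0.0, 1.0 - distance / span)

# Hypothetical distribution-substation voltage band (volts).
voltage = NormalityAgent(low=220.0, high=240.0)
```

An agent could raise an event whenever the score drops below a configured threshold.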
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, this approach balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system.
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design yields a process for control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling such systems.
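The fully factorized Boyen-Koller step mentioned above can be sketched in a few lines: predict the exact joint for one time slice, condition on local evidence, then project back onto a product of single-variable marginals. The two-variable DBN, its random transition tables, and the observation model below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Two binary state variables X1, X2; each transitions given both parents.
# T[i][x1, x2, x_i'] = P(X_i' | X1=x1, X2=x2), shape (2, 2, 2).
rng = np.random.default_rng(0)
T = [rng.dirichlet(np.ones(2), size=(2, 2)) for _ in range(2)]
# Observation model per variable: O[i][x_i, z] = P(Z_i=z | X_i=x_i)
O = [np.array([[0.9, 0.1], [0.2, 0.8]]) for _ in range(2)]

def bk_step(marginals, evidence):
    """One fully factorized BK update: exact joint prediction, evidence
    conditioning, then projection back onto single-variable marginals."""
    joint = np.outer(marginals[0], marginals[1])           # factored prior
    # Prediction: sum over the current joint, per-variable transitions
    pred = np.einsum('ab,abi,abj->ij', joint, T[0], T[1])  # P(X1', X2')
    # Condition on local evidence z_i observed for each variable
    for i, z in enumerate(evidence):
        lik = O[i][:, z]
        pred = pred * (lik[:, None] if i == 0 else lik[None, :])
    pred /= pred.sum()
    # Projection: keep only the marginals (the BK approximation step)
    return [pred.sum(axis=1), pred.sum(axis=0)]

belief = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
for z in [(0, 1), (0, 0), (1, 0)]:
    belief = bk_step(belief, z)
```

In the MSDBN setting each agent would run such a local update at a high rate, with the (more expensive) globally consistent junction-forest update performed at a lower frequency.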
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Pena, Joaquin (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which an evolutionary system is managed and viewed as a software product line. In some embodiments, the core architecture is a relatively unchanging part of the system, and each version of the system is viewed as a product from the product line. Each software product is generated from the core architecture with some agent-based additions. The result may be a multi-agent system software product line.
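The product-line view described above — an unchanging core plus agent-based additions per version — can be sketched as follows. All class and service names here are invented for illustration; the patent does not specify an implementation.

```python
class CoreArchitecture:
    """The relatively unchanging part shared by every product version."""
    def __init__(self):
        self.services = {"telemetry": lambda: "telemetry ok"}

def generate_product(core, agent_additions):
    """A product = the core architecture plus agent-based additions."""
    product = dict(core.services)          # start from the shared core
    product.update(agent_additions)        # layer on version-specific agents
    return product

core = CoreArchitecture()
# Two versions of the evolutionary system, both generated from one core.
v1 = generate_product(core, {"planner": lambda: "plan step"})
v2 = generate_product(core, {"planner": lambda: "plan step",
                             "monitor": lambda: "check health"})
```

Each generated dictionary stands in for one product of the line; the shared `core` is reused unchanged across versions.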
Distributed environmental control
NASA Technical Reports Server (NTRS)
Cleveland, Gary A.
1992-01-01
We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).
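A minimal sketch of the control pattern described above: an independent agent reads the simulator's state variables between time steps and issues commands, which are then evaluated for effectiveness (temperature held) and resource use (power consumed). The thermal model, setpoints, and variable names are assumptions for illustration, not CASE/A's actual interface.

```python
def thermal_agent(state):
    # Independent agent: command the heater from the sensed temperature.
    if state["temp"] < 20.0:
        return {"heater": "on"}
    if state["temp"] > 24.0:
        return {"heater": "off"}
    return {}                      # in band: leave the effector alone

def step_simulation(state, commands):
    # Toy stand-in for one simulator time step.
    heater_on = commands.get("heater", state["heater"]) == "on"
    state = dict(state, heater=("on" if heater_on else "off"))
    state["temp"] += 0.8 if heater_on else -0.5
    state["power"] += 1.0 if heater_on else 0.0   # resource utilization
    return state

state = {"temp": 18.0, "heater": "off", "power": 0.0}
for _ in range(10):
    commands = thermal_agent(state)   # agent acts between time steps
    state = step_simulation(state, commands)
```

After the run, effectiveness can be scored from how far `temp` strayed from the band and resource utilization from the accumulated `power`.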
NASA Astrophysics Data System (ADS)
Ho, Wan Ching; Dautenhahn, Kerstin; Nehaniv, Chrystopher
2008-03-01
In this paper, we discuss the concept of an autobiographic agent and how memory may extend an agent's temporal horizon and increase its adaptability. These concepts are applied to an implementation of a scenario where agents interact in a complex virtual artificial-life environment. We present computational memory architectures for autobiographic virtual agents that enable agents to retrieve meaningful information from their dynamic memories, increasing their adaptation and survival in the environment. The design of the memory architectures, the agents, and the virtual environment is described in detail. Next, a series of experimental studies and their results are presented which show the adaptive advantage of autobiographic memory, i.e. of remembering significant experiences. Also, in a multi-agent scenario where agents can communicate via stories based on their autobiographic memory, it is found that new adaptive behaviours can emerge from an individual's reinterpretation of experiences received from other agents, whereby higher communication frequency yields better group performance. An interface is described that visualises the memory contents of an agent. From an observer perspective, the agents' behaviours can be understood as individually structured and temporally grounded and, with the communication of experience, can be seen to rely on emergent mixed narrative reconstructions combining the experiences of several agents. This research leads to insights into how bottom-up story-telling and autobiographic reconstruction in autonomous, adaptive agents allow temporally grounded behaviour to emerge. The article concludes with a discussion of possible implications of this research direction for future autobiographic, narrative agents.
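The core retrieval idea — remembering significant experiences and recalling the most significant one that matches the current situation — can be sketched as below. The episode fields and significance weighting are illustrative assumptions, not the paper's actual memory model.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    situation: tuple       # e.g. (location, resource_seen)
    action: str
    significance: float    # how strongly the outcome mattered for survival

class AutobiographicMemory:
    def __init__(self):
        self.episodes = []

    def remember(self, situation, action, significance):
        self.episodes.append(Episode(situation, action, significance))

    def recall(self, situation):
        """Return the action from the most significant matching episode."""
        matches = [e for e in self.episodes if e.situation == situation]
        if not matches:
            return None
        return max(matches, key=lambda e: e.significance).action

mem = AutobiographicMemory()
mem.remember(("cave", "water"), "drink", 0.9)
mem.remember(("cave", "water"), "ignore", 0.1)
best = mem.recall(("cave", "water"))
```

Story-based communication between agents would then amount to one agent's `remember` being fed episodes originating in another agent's memory.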
Effective Coordination of Multiple Intelligent Agents for Command and Control
2003-09-01
System Architecture: As an initial problem domain in e-commerce, we chose collective book purchasing. In the university setting, relatively large numbers... a coalition server, an auctioneer agent, a set of supplier agents, and a web-based interface for end users. The system is based on a simple... buyers are able to request, and sellers to respond to, a list of items within a particular category. Sellers present
NASA Technical Reports Server (NTRS)
Callantine, Todd J.
2002-01-01
This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.
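One simple way to build agents that err in controlled, realistic ways is to wrap a nominal policy with error modes drawn from a taxonomy such as omission (failing to act) and commission (acting wrongly). The policy, states, and error rates below are invented for illustration; they are not the Crew Activity Tracking System mechanisms.

```python
import random

def nominal_policy(state):
    # Hypothetical error-free controller behavior.
    return "issue_clearance" if state == "conflict" else "monitor"

def erring_agent(state, p_omit=0.1, p_commit=0.05, rng=random.Random(1)):
    """Apply the nominal policy, but inject omission/commission errors."""
    action = nominal_policy(state)
    r = rng.random()
    if r < p_omit:
        return None                                  # omission error
    if r < p_omit + p_commit:                        # commission error
        return "monitor" if action == "issue_clearance" else "issue_clearance"
    return action

# Assess error generation over many encounters with the same seeded rng.
results = [erring_agent("conflict") for _ in range(200)]
```

Safety assessment then amounts to measuring how the surrounding system tolerates the injected `None` and wrong-action outcomes.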
2006-12-01
Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry
Mother ship and physical agents collaboration
NASA Astrophysics Data System (ADS)
Young, Stuart H.; Budulas, Peter P.; Emmerman, Philip J.
1999-07-01
This paper discusses ongoing research at the U.S. Army Research Laboratory that investigates the feasibility of developing a collaboration architecture between small physical agents and a mother ship. This includes the distribution of planning, perception, mobility, processing and communications requirements between the mother ship and the agents. Small physical agents of the future will be virtually everywhere on the battlefield of the 21st century. A mother ship coupled to a team of small collaborating physical agents (conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA); logistics; sentry; and communications relay) will be used to build a completely effective and mission-capable intelligent system. The mother ship must have long-range mobility to deploy the small, highly maneuverable agents that will operate in urban environments and more localized areas, and it acts as a logistics base for the smaller agents. The mother ship also establishes a robust communications network between the agents and is the primary point for disseminating information to, and receiving it from, the external world. Because of its global knowledge and processing power, the mother ship performs the high-level control and planning for the collaborative physical agents. This high-level control and the interaction between the mother ship and its agents (including inter-agent collaboration) will be based on a software agent architecture. The mother ship incorporates multi-resolution battlefield visualization and analysis technology, which aids in mission planning and sensor fusion.
Agent-Based Scientific Workflow Composition
NASA Astrophysics Data System (ADS)
Barker, A.; Mann, B.
2006-07-01
Agents are active autonomous entities that interact with one another to achieve their objectives. This paper addresses how these active agents are a natural fit for consuming the passive Service-Oriented Architecture found in Internet and Grid systems, in order to compose, coordinate and execute e-Science experiments. A framework is introduced which allows an e-Science experiment to be described as a Multi-Agent System.
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Obenschain, Arthur F. (Technical Monitor)
2002-01-01
Currently, spacecraft ground systems have a well-defined and somewhat standard architecture and operations concept. Based on domain analysis studies of various control centers conducted over the years, it is clear that ground systems have core capabilities and functionality that are common across all ground systems. This observation alone supports the realization of reuse. Additionally, spacecraft ground systems are increasing in their ability to do things autonomously. They are being engineered using advanced expert-systems technology to provide automated support for operators. A clearer understanding of the possible roles of agent technology is advancing the prospects of greater autonomy for these systems. Many of their functional and management tasks are or could be supported by applied agent technology, the dynamics of the ground system's infrastructure could be monitored by agents, there are intelligent agent-based approaches to user interfaces, etc. The premise of this paper is that the concepts associated with software reuse, applicable in consideration of classically engineered ground systems, can be updated to address their application in highly agent-based realizations of future ground systems. As a somewhat simplified example, consider the following situation involving human agents in a ground system context. Let Group A of controllers be working on Mission X. They are responsible for the command, control, and health and safety of the Mission X spacecraft. Let us suppose that Mission X successfully completes its mission and is turned off. Group A could be dispersed or perhaps move to another mission, Mission Y. In this case there would be reuse of the human agents from Mission X on Mission Y. The Group A agents perform their well-understood functions in a somewhat different but related context. There will be a learning or familiarization process that the Group A agents go through to make the new context, determined by the new Mission Y, understood.
This simplified scenario highlights some of the major issues that need to be addressed when considering the situation where Group A is composed of software-based agents (not their human counterparts) that migrate from one mission support system to another. This paper will address: definition of an agent architecture appropriate to support reuse; identification of the non-mission-specific agent capabilities required; appropriate knowledge representation schemes for mission-specific knowledge; the agent interface with mission-specific knowledge (a type of learning); development of a fully operational group of cooperative software agents for ground system support; the architecture and operation of a repository of reusable agents that could be the source of intelligent components for realizing an autonomous (or nearly autonomous) agent-based ground system; and an agent-based approach to repository management and operation (an intelligent interface for human use of the repository in a ground-system development activity).
Fuselets: an agent based architecture for fusion of heterogeneous information and data
NASA Astrophysics Data System (ADS)
Beyerer, Jürgen; Heizmann, Michael; Sander, Jennifer
2006-04-01
A new architecture for fusing information and data from heterogeneous sources is proposed. The approach takes criminalistics as a model. In analogy to the work of detectives, who investigate crimes, software agents are initiated that pursue clues and try to consolidate or dismiss hypotheses. Like their human counterparts, they can consult expert agents if questions beyond their competences arise. Within the context of a certain task, region, and time interval, specialized operations are applied to each relevant information source, e.g. IMINT, SIGINT, ACINT, ..., HUMINT, databases, etc., in order to establish hit lists of first clues. Each clue is described by its pertaining facts, uncertainties, and dependencies in the form of a local degree-of-belief (DoB) distribution in a Bayesian sense. For each clue an agent is initiated which cooperates with other agents and experts. Expert agents help exploit the different information sources. Consultations of experts capable of accessing certain information sources result in changes to the DoB of the pertaining clue. Depending on how concentrated their DoB distributions are, clues are abandoned or pursued further to formulate task-specific hypotheses. Communication between the agents serves to find out whether different clues belong to the same cause and thus can be combined. At the end of the investigation process, the different hypotheses are evaluated by a jury and a final report is created that constitutes the fusion result. The proposed approach avoids calculating global DoB distributions by adopting a local Bayesian approximation and thus substantially reduces the complexity of the exact problem. Different information sources are transformed into DoB distributions using the maximum entropy paradigm, with known facts treated as constraints. Nominal, ordinal and cardinal quantities can all be treated within this framework.
The architecture is scalable by tailoring the number of agents according to the available computer resources, to the priority of tasks, and to the maximum duration of the fusion process. Furthermore, the architecture allows cooperative work of human and automated agents and experts, as long as not all subtasks can be accomplished automatically.
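The expert-consultation step above is, at its core, a Bayesian revision of a clue's local DoB. A minimal sketch, with hypotheses and likelihood values invented purely for illustration:

```python
def consult_expert(dob, likelihood):
    """Bayes update of a clue's DoB given the expert's likelihoods P(report | h)."""
    posterior = {h: dob[h] * likelihood[h] for h in dob}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# A first clue from one source, with two competing hypotheses.
dob = {"vehicle": 0.5, "decoy": 0.5}
# Consulting an expert agent for a second source shifts the DoB.
dob = consult_expert(dob, {"vehicle": 0.8, "decoy": 0.2})
# Concentration of the DoB decides whether the clue is pursued or abandoned.
pursue = max(dob.values()) > 0.7
```

A clue agent would repeat such updates across consultations, keeping everything local rather than maintaining a global joint distribution.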
Coordinating teams of autonomous vehicles: an architectural perspective
NASA Astrophysics Data System (ADS)
Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo
2005-05-01
In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).
An Embedded Multi-Agent Systems Based Industrial Wireless Sensor Network
Taboun, Mohammed S; Brennan, Robert W
2017-01-01
With the emergence of cyber-physical systems, there has been a growing interest in network-connected devices. One of the key requirements of a cyber-physical device is the ability to sense its environment. Wireless sensor networks are a widely-accepted solution for this requirement. In this study, an embedded multi-agent systems-managed wireless sensor network is presented. A novel architecture is proposed, along with a novel wireless sensor network architecture. Active and passive wireless sensor node types are defined, along with their communication protocols, and two application-specific examples are presented. A series of three experiments is conducted to evaluate the performance of the agent-embedded wireless sensor network. PMID:28906452
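The active/passive node distinction described above can be sketched as two node types with different communication behavior: a passive node answers only when polled by a managing agent, while an active node pushes messages on its own initiative. The message formats, threshold rule, and class names are assumptions for illustration.

```python
class PassiveNode:
    """Reports a reading only when polled by a managing agent."""
    def __init__(self, read):
        self.read = read
    def poll(self):
        return {"type": "reply", "value": self.read()}

class ActiveNode:
    """Pushes an alert on its own when a threshold is crossed."""
    def __init__(self, read, threshold):
        self.read, self.threshold = read, threshold
    def tick(self):
        v = self.read()
        return {"type": "alert", "value": v} if v > self.threshold else None

# A managing agent polls passive nodes and listens for active-node alerts.
manager_log = []
p = PassiveNode(lambda: 21.5)
a = ActiveNode(lambda: 80.0, threshold=75.0)
manager_log.append(p.poll())
msg = a.tick()
if msg:
    manager_log.append(msg)
```

An embedded agent would run this loop per sampling cycle, deciding which nodes to poll and how to react to alerts.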
Acquisition of Autonomous Behaviors by Robotic Assistants
NASA Technical Reports Server (NTRS)
Peters, R. A., II; Sarkar, N.; Bodenheimer, R. E.; Brown, E.; Campbell, C.; Hambuchen, K.; Johnson, C.; Koku, A. B.; Nilas, P.; Peng, J.
2005-01-01
Our research achievements under the NASA-JSC grant contributed significantly in the following areas. Multi-agent based robot control architecture, the Intelligent Machine Architecture (IMA): the Vanderbilt team received a Space Act Award for this research from NASA JSC in October 2004. Cognitive control and the Self Agent: cognitive control in humans is the ability to consciously manipulate thoughts and behaviors, using attention to deal with conflicting goals and demands. We have been updating the IMA Self Agent towards this goal. If the opportunity arises, we would like to work with NASA to give Robonaut cognitive control capabilities. Applications: 1. SES for Robonaut; 2. Robonaut Fault Diagnostic System; 3. ISAC Behavior Generation and Learning; 4. Segway Research.
Agent-oriented privacy-based information brokering architecture for healthcare environments.
Masaud-Wahaishi, Abdulmutalib; Ghenniwa, Hamada
2009-01-01
The healthcare industry is facing a major reform at all levels: locally, regionally, nationally, and internationally. Healthcare services and systems have become very complex, comprising a vast number of components (software systems, doctors, patients, etc.) that are characterized by shared, distributed and heterogeneous information sources across a variety of clinical and other settings. The challenge now faced in decision making and care management is to operate effectively in order to meet the information needs of healthcare personnel. Currently, researchers, developers, and systems engineers are working toward achieving better efficiency and quality of service in various sectors of healthcare, such as hospital management, patient care, and treatment. This paper presents a novel information brokering architecture that supports privacy-based information gathering in healthcare. Architecturally, brokering is viewed as a layer of services where a brokering service is modeled as an agent with a specific architecture and interaction protocol appropriate to serve various requests. Within the context of brokering, we model privacy in terms of an entity's ability to hide or reveal information related to its identity, requests, and/or capabilities. A prototype of the proposed architecture has been implemented to support information-gathering capabilities in healthcare environments using the FIPA-compliant platform JADE.
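The privacy notion described above — an entity choosing to hide or reveal its identity when requesting through the broker — can be sketched with a pseudonymizing broker agent. The field names and aliasing scheme are illustrative assumptions, not the paper's protocol.

```python
import uuid

class BrokerAgent:
    """Forwards requests, optionally hiding the requester's identity."""
    def __init__(self):
        self.pseudonyms = {}   # stable alias per requester

    def forward_request(self, sender, request, hide_identity=True):
        if hide_identity:
            alias = self.pseudonyms.setdefault(
                sender, f"anon-{uuid.uuid4().hex[:8]}")
        else:
            alias = sender
        return {"from": alias, "request": request}

broker = BrokerAgent()
msg = broker.forward_request("dr_smith", "lab results for case 17")
```

A stable pseudonym lets providers correlate requests from one (hidden) requester without learning who it is; passing `hide_identity=False` models the entity choosing to reveal itself.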
A Multi-Agent System Architecture for Sensor Networks
Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo
2009-01-01
The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work. PMID:22303172
Intelligent Agent Architectures: Reactive Planning Testbed
NASA Technical Reports Server (NTRS)
Rosenschein, Stanley J.; Kahn, Philip
1993-01-01
An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.
TSI-Enhanced Pedagogical Agents to Engage Learners in Virtual Worlds
ERIC Educational Resources Information Center
Leung, Steve; Virwaney, Sandeep; Lin, Fuhua; Armstrong, AJ; Dubbelboer, Adien
2013-01-01
Building pedagogical applications in virtual worlds is a multi-disciplinary endeavor that involves learning theories, application development framework, and mediated communication theories. This paper presents a project that integrates game-based learning, multi-agent system architecture (MAS), and the theory of Transformed Social Interaction…
NASA Astrophysics Data System (ADS)
Narayan Ray, Dip; Majumder, Somajyoti
2014-07-01
Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but developing algorithms for unstructured and unknown environments has always been a critical issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent, single-robot system. HuMAQ uses the concept of the Subsumption architecture, a well-known behaviour-based architecture, to prioritize the agents of the multi-agent system, and executes only the most common action out of all the actions recommended by the different agents. Instead of using a new state-action table (Q-table) each time, HuMAQ uses the immediate past table for efficient and faster exploration. The proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations and applications. The same architecture has been modified for multi-robot exploration of an environment. In addition to the agents used in the single-robot system, agents for inter-robot communication and for coordination/cooperation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed, identical autonomous robotic systems communicating with each other through the ZigBee protocol.
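Two of the ingredients above can be sketched briefly: a subsumption-style arbitration over the agents' recommended actions, and a Q-learning update that starts from the previous exploration's table instead of a fresh one. The agent names, priority order, and parameters are assumptions for illustration, not HuMAQ's actual design.

```python
def subsume(recommendations):
    """Pick the action of the highest-priority agent that has an opinion."""
    for agent in ["avoid_obstacle", "seek_goal", "wander"]:   # high -> low
        if recommendations.get(agent) is not None:
            return recommendations[agent]
    return "stop"

def q_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9,
             actions=("left", "right")):
    """Standard one-step Q-learning update on a dict-backed table."""
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * best_next - Q.get((s, a), 0.0))

# Reuse the immediate past table rather than relearning from scratch.
previous_Q = {(("corridor",), "left"): 0.4}
Q = dict(previous_Q)
q_update(Q, ("corridor",), "left", r=1.0, s2=("room",))
```

Carrying `previous_Q` forward is what gives the "gradual" flavor: each exploration refines, rather than replaces, what was learned before.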
X-ray spatial frequency heterodyne imaging of protein-based nanobubble contrast agents
Rand, Danielle; Uchida, Masaki; Douglas, Trevor; Rose-Petruck, Christoph
2014-01-01
Spatial Frequency Heterodyne Imaging (SFHI) is a novel x-ray scatter imaging technique that utilizes nanoparticle contrast agents. The enhanced sensitivity of this new technique relative to traditional absorption-based x-ray radiography makes it promising for applications in biomedical and materials imaging. Although previous studies on SFHI have utilized only metal nanoparticle contrast agents, we show that nanomaterials with a much lower electron density are also suitable. We prepared protein-based “nanobubble” contrast agents that are comprised of protein cage architectures filled with gas. Results show that these nanobubbles provide contrast in SFHI comparable to that of gold nanoparticles of similar size. PMID:25321797
An agent based architecture for high-risk neonate management at neonatal intensive care unit.
Malak, Jaleh Shoshtarian; Safdari, Reza; Zeraati, Hojjat; Nayeri, Fatemeh Sadat; Mohammadzadeh, Niloofar; Farajollah, Seide Sedighe Seied
2018-01-01
In recent years, the use of new tools and technologies has decreased the neonatal mortality rate. Despite the positive effect of these technologies, decisions are complex and uncertain in critical conditions, when the neonate is preterm or has a low birth weight or malformations. There is a need to automate the high-risk neonate management process by creating real-time and more precise decision support tools. The objective was to create a collaborative and real-time environment to manage neonates with critical conditions at the NICU (Neonatal Intensive Care Unit), and to overcome the weaknesses of high-risk neonate management by applying a multi-agent-based analysis and design methodology as a new solution for NICU management. This study was basic research in medical informatics method development, carried out in 2017. The requirement analysis was done by reviewing articles on NICU decision support systems. The PubMed, Science Direct, and IEEE databases were searched. Only English articles published after 1990 were included; a needs assessment was also done by reviewing the extracted features and the current processes in the NICU environment where the research was conducted. We analyzed the requirements and identified the main system roles (agents) and interactions through a comparative study of existing NICU decision support systems. The Universal Multi Agent Platform (UMAP) was applied to implement a prototype of our multi-agent-based high-risk neonate management architecture. Local environment agents interacted inside a container, and each container interacted with external resources, including other NICU systems and consultation centers. In the NICU container, the main identified agents were reception, monitoring, NICU registry, and outcome prediction, which interacted with human agents including nurses and physicians. Managing patients in NICU units requires online data collection, real-time collaboration, and management of many components.
Multi-agent systems are a well-known solution for the management, coordination, modeling, and control of NICU processes. We are currently working on an outcome prediction module using artificial intelligence techniques for neonatal mortality risk prediction. Full implementation of the proposed architecture and its evaluation are considered future work.
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. The actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occur indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders perceive only the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture is indeed capable of producing representative large space structures with autonomous robots. This paper explores the construction simulations in order to illustrate the multi-robot control architecture.
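The essence of stigmergic building — no inter-agent messages, no global blueprint, only local sensing of the shared structure — fits in a few lines. The one-dimensional row of cells and the termite-like deposit rule below are a toy illustration, not the paper's actual algorithm.

```python
import random

def local_rule(left, here, right):
    """Deposit material only next to existing material."""
    return here == 0 and (left == 1 or right == 1)

def build(structure, steps, rng=random.Random(0)):
    n = len(structure)
    for _ in range(steps):
        i = rng.randrange(n)                     # an agent lands somewhere
        left = structure[i - 1] if i > 0 else 0
        right = structure[i + 1] if i < n - 1 else 0
        if local_rule(left, structure[i], right):
            structure[i] = 1                     # modify the shared structure
    return structure

# A single seed brick; coordination happens only through the structure itself.
result = build([0, 0, 1, 0, 0], steps=50)
```

Each deposit changes what later agents sense, so a coherent structure grows even though no agent ever communicates directly or knows the final design.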
NASA Technical Reports Server (NTRS)
Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt
2004-01-01
To support the development of flexible and reusable MAS, we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic and independent way. These properties facilitate large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from the framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions to perform, and they are capable of flexible behaviour, reacting to the environment and other agents, providing some degree of robustness. Traditionally, multi-agent systems are abstract programming models which are implemented in software and executed on program-controlled computer architectures. This approach does not scale well to the microchip level, requires fully equipped computers and communication structures, and its hardware architecture does not reflect the requirements of agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely at the microchip level with resource and power constrained digital logic supporting Agent-On-Chip architectures (AoC). The agent behaviour and mobility are fully integrated on the microchip using pipelined communicating processes implemented with finite-state machines and register-transfer logic.
The agent behaviour, interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented on the node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK) based on a graph-structured database approach is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
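The node-level tuple-space database mentioned above can be sketched as a minimal Linda-style store. The operation names (`out`, `rd`, `in_`) and the wildcard-matching semantics below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal Linda-style tuple space: agents share global data through the
# space instead of messaging each other directly (illustrative sketch).

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Publish a tuple into the space."""
        self._tuples.append(tuple(tup))

    def _match(self, pattern, tup):
        # None in the pattern acts as a wildcard field.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Non-destructive read of the first matching tuple (or None)."""
        for t in self._tuples:
            if self._match(pattern, t):
                return t
        return None

    def in_(self, pattern):
        """Destructive read: remove and return the first match (or None)."""
        for i, t in enumerate(self._tuples):
            if self._match(pattern, t):
                return self._tuples.pop(i)
        return None

# Sensor-node agents coordinate indirectly through the space:
ts = TupleSpace()
ts.out(("temp", "node-3", 21.5))
ts.out(("strain", "node-7", 0.002))
reading = ts.rd(("temp", None, None))   # non-destructive lookup
taken = ts.in_(("strain", None, None))  # removed from the space
```

Because readers never address writers directly, a node failure only loses that node's tuples rather than breaking a communication channel, which is one way such a design supports the fault tolerance the abstract aims at.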
Advanced nanoelectronic architectures for THz-based biological agent detection
NASA Astrophysics Data System (ADS)
Woolard, Dwight L.; Jensen, James O.
2009-02-01
The U.S. Army Research Office (ARO) and the U.S. Army Edgewood Chemical Biological Center (ECBC) jointly lead and support novel research programs that are advancing the state-of-the-art in nanoelectronic engineering in application areas that have relevance to national defense and security. One fundamental research area that is presently being emphasized by ARO and ECBC is the exploratory investigation of new bio-molecular architectural concepts that can be used to achieve rapid, reagent-less detection and discrimination of biological warfare (BW) agents, through the control of multi-photon and multi-wavelength processes at the nanoscale. This paper will overview an ARO/ECBC led multidisciplinary research program presently under the support of the U.S. Defense Threat Reduction Agency (DTRA) that seeks to develop new devices and nanoelectronic architectures that are effective for extracting THz signatures from target bio-molecules. Here, emphasis will be placed on the new nanosensor concepts and THz/Optical measurement methodologies for spectral-based sequencing/identification of genetic molecules.
Application of a Multimedia Service and Resource Management Architecture for Fault Diagnosis
Castro, Alfonso; Sedano, Andrés A.; García, Fco. Javier; Villoslada, Eduardo
2017-01-01
Nowadays, the complexity of global video products has substantially increased. They are composed of several associated services whose functionalities need to adapt across heterogeneous networks with different technologies and administrative domains. Each of these domains has different operational procedures; therefore, the comprehensive management of multi-domain services presents serious challenges. This paper discusses an approach to service management linking the fault diagnosis system and Business Processes for Telefónica’s global video service. The main contribution of this paper is the proposal of an extended service management architecture based on Multi Agent Systems able to integrate fault diagnosis with other service management functionalities. This architecture includes a distributed set of agents able to coordinate their actions under the umbrella of a Shared Knowledge Plane, inferring and sharing their knowledge with semantic techniques and three types of automatic reasoning: heterogeneous, ontology-based and Bayesian reasoning. This proposal has been deployed and validated in a real scenario in the video service offered by Telefónica Latam. PMID:29283398
Interaction with Machine Improvisation
NASA Astrophysics Data System (ADS)
Assayag, Gerard; Bloch, George; Cont, Arshia; Dubnov, Shlomo
We describe two multi-agent architectures for improvisation-oriented musician-machine interaction systems that learn in real time from human performers. The improvisation kernel is based on sequence modeling and statistical learning. We present two frameworks of interaction with this kernel. In the first, the stylistic interaction is guided by a human operator in front of an interactive computer environment. In the second framework, the stylistic interaction is delegated to machine intelligence and therefore, knowledge propagation and decision are taken care of by the computer alone. The first framework involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, that are put to work and communicate together, each one handling the process at a different time/memory scale. The second framework shares the same representational schemes with the first but uses an Active Learning architecture based on collaborative, competitive and memory-based learning to handle stylistic interactions. Both systems are capable of processing real-time audio/video as well as MIDI. After discussing the general cognitive background of improvisation practices, the statistical modeling tools and the concurrent agent architecture are presented. Then, an Active Learning scheme is described and considered in terms of using different improvisation regimes for improvisation planning. Finally, we provide more details about the different system implementations and describe several performances with the system.
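The sequence-modeling kernel can be illustrated with the simplest possible statistical learner: a first-order Markov model trained on a note stream and sampled to produce a stylistically consistent continuation. This is a deliberate simplification of the richer sequence models the abstract refers to, and the note names are invented:

```python
# First-order Markov improviser: learn transitions from a performer's
# stream, then generate a phrase in which every step was observed.
import random
from collections import defaultdict

class MarkovImproviser:
    def __init__(self, seed=0):
        self.transitions = defaultdict(list)
        self.rng = random.Random(seed)

    def learn(self, stream):
        """Incremental, real-time-style learning of adjacent-note pairs."""
        for a, b in zip(stream, stream[1:]):
            self.transitions[a].append(b)

    def improvise(self, start, length):
        """Sample a continuation; stop if a note has no known successor."""
        out = [start]
        for _ in range(length - 1):
            choices = self.transitions.get(out[-1])
            if not choices:
                break
            out.append(self.rng.choice(choices))
        return out

m = MarkovImproviser()
m.learn(["C", "E", "G", "E", "C", "E", "G", "C"])
phrase = m.improvise("C", 8)
```

Repeated transitions are stored repeatedly, so sampling naturally reproduces the style's transition frequencies without any explicit probability table.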
Intelligent web agents for a 3D virtual community
NASA Astrophysics Data System (ADS)
Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar
2003-08-01
In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based Virtual Communities based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World) Avatars will represent the educators, students, and other visitors to the world. Intelligent agents represented as specially dressed Avatars will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. The intelligent Web agent software system for the 3D virtual community has been implemented successfully.
Optimal use of novel agents in chronic lymphocytic leukemia.
Smith, Mitchell R; Weiss, Robert F
2018-05-07
Novel agents are changing therapy for patients with CLL, but their optimal use remains unclear. We model the clinical situation in which CLL responds to therapy, but resistant clones, generally carrying del17p, progress and lead to relapse. Sub-clones of varying growth rates and treatment sensitivity affect predicted therapy outcomes. We explore the effects of different approaches to starting a novel agent in relation to bendamustine-rituximab induction therapy: at initiation of therapy, at the end of chemo-immunotherapy, at molecular relapse, or at clinical detection of relapse. The outcomes differ depending on the underlying clonal architecture, raising the concept that personalized approaches based on clinical evaluation of each patient's clonal architecture might optimize outcomes while minimizing toxicity and cost.
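The qualitative point of such a model can be sketched with two exponentially growing clones. All parameters below (initial sizes, growth and kill rates, time units) are hypothetical and chosen only to illustrate why the timing of the novel agent matters; they are not the paper's values:

```python
# Two-clone sketch: a chemo-sensitive bulk clone shrinks under induction
# therapy, while a resistant (del17p-like) subclone keeps growing until
# the novel agent starts, after which it is suppressed.
import math

def clone_size(n0, rate, t):
    """Simple exponential growth/decay."""
    return n0 * math.exp(rate * t)

def total_burden(t, novel_agent_start):
    # sensitive clone: large, cleared by induction (illustrative rate -0.5)
    sens = clone_size(1e9, -0.5, t)
    # resistant subclone: small but growing (+0.1) until the novel agent
    # starts, then suppressed (-0.2)
    if t <= novel_agent_start:
        res = clone_size(1e4, 0.1, t)
    else:
        res = clone_size(clone_size(1e4, 0.1, novel_agent_start),
                         -0.2, t - novel_agent_start)
    return sens + res

# Starting the novel agent at induction versus at late relapse detection:
early = total_burden(24, novel_agent_start=0)
late = total_burden(24, novel_agent_start=18)
```

Under these toy assumptions the late start leaves a much larger resistant clone at the same evaluation time, which is the intuition behind comparing the four starting strategies in the abstract.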
Fuzzy Hybrid Deliberative/Reactive Paradigm (FHDRP)
NASA Technical Reports Server (NTRS)
Sarmadi, Hengameth
2004-01-01
This work aims to introduce a new concept for incorporating fuzzy sets into the hybrid deliberative/reactive paradigm. After a brief review of the basic issues of the hybrid paradigm, we define the agent-based fuzzy hybrid paradigm, which enables agents to derive their behavior from quantitative numerical and qualitative knowledge and to drive their decision-making procedure via a fuzzy rule bank. An example then provides a more applied platform for the developed approach, and finally an overview of the corresponding agent architecture situates the agents' logical framework.
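A minimal sketch of the idea, assuming a single fuzzified input (obstacle distance) and two rules that weight a reactive command against a deliberative one; the membership functions, rule bank, and command values are invented for illustration and are not the paper's formulation:

```python
# Fuzzy blend of reactive and deliberative outputs: rule weights come
# from fuzzy memberships of obstacle distance, and the final command is
# the weighted average (a crude centroid-style defuzzification).

def near(d):
    """Membership of 'obstacle near' (distance in metres, illustrative)."""
    return max(0.0, min(1.0, (2.0 - d) / 2.0))

def far(d):
    """Membership of 'path clear' (illustrative)."""
    return max(0.0, min(1.0, d / 4.0))

def hybrid_command(distance, reactive_cmd, deliberative_cmd):
    w_react = near(distance)   # rule: IF obstacle near THEN react
    w_delib = far(distance)    # rule: IF path clear THEN follow the plan
    total = w_react + w_delib
    if total == 0:
        return deliberative_cmd
    return (w_react * reactive_cmd + w_delib * deliberative_cmd) / total

# Reactive behaviour wants a 90-degree swerve; the plan says go straight.
turn_close = hybrid_command(0.5, reactive_cmd=90.0, deliberative_cmd=0.0)
turn_clear = hybrid_command(3.9, reactive_cmd=90.0, deliberative_cmd=0.0)
```

The fuzzy weighting gives a graded handover between the two layers instead of the hard switch a purely rule-based arbiter would produce.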
A GH-Based Ontology to Support Applications for Automating Decision Support
2005-03-01
architecture for a decision support system. For this reason, it obtains data from, and updates, a database. IDA also wanted the prototype’s architecture...Chief Information Officer CoABS Control of Agent Based Systems DBMS Database Management System DoD Department of Defense DTD Document Type...Generic Hub, the Moyeu Générique, and the Generische Nabe, specifying each as a separate service description with property names and values of the GH
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
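A toy sketch of the adaptive integration idea: a fast reflexive response and a slower goal-directed deliberative response are blended, and feedback on which served the goal better shifts the blend over time. This is illustrative only and not the patent's actual nexus mechanism; the responses and learning rate are invented:

```python
# Adaptive blend of reflexive and deliberative responses with feedback:
# the blend weight plays the role of the coordinating nexus.

class AdaptiveAgent:
    def __init__(self, goal, blend=0.5, lr=0.1):
        self.goal = goal
        self.blend = blend  # 0 = purely reflexive, 1 = purely deliberative
        self.lr = lr

    def reflexive(self, stimulus):
        return -stimulus             # fast, fixed reaction

    def deliberative(self, stimulus):
        return self.goal - stimulus  # slower, goal-directed response

    def act(self, stimulus):
        r = self.reflexive(stimulus)
        d = self.deliberative(stimulus)
        action = (1 - self.blend) * r + self.blend * d
        # feedback control: if the deliberative response would have served
        # the goal better, shift the blend toward deliberation (and back)
        if abs(self.goal - (stimulus + d)) < abs(self.goal - (stimulus + r)):
            self.blend = min(1.0, self.blend + self.lr)
        else:
            self.blend = max(0.0, self.blend - self.lr)
        return action

agent = AdaptiveAgent(goal=10.0)
for s in [2.0, 4.0, 6.0]:
    agent.act(s)
# here deliberation always wins, so the blend drifts from 0.5 toward 1.0
```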
Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie
2003-01-01
A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Radenski, Atanas; Follen, Gregory J. (Technical Monitor)
2001-01-01
The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever changing pool of lower-end internet nodes.
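The master task-pool pattern can be sketched in a single process: workers repeatedly take a task from the pool and either solve it (base case) or split it back into the pool. The summation task and pool sizes below are illustrative stand-ins for the distributed architecture the abstract describes:

```python
# Generic divide-and-conquer over a master task pool. In the real
# architecture the pool lives on a server and tasks go to volunteer
# internet nodes; here one loop plays all the workers.
from collections import deque

def solve_with_pool(numbers, base_size=2):
    pool = deque([tuple(numbers)])   # master task-pool server
    total = 0
    while pool:
        task = pool.popleft()        # a satellite node takes a task
        if len(task) <= base_size:
            total += sum(task)       # conquer: small enough to solve
        else:
            mid = len(task) // 2     # divide: push subtasks back
            pool.append(task[:mid])
            pool.append(task[mid:])
    return total

result = solve_with_pool(list(range(100)))  # -> 4950
```

Because every task is self-contained, the pool tolerates the "ever changing" node population: a task handed to a node that disappears can simply be re-queued.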
An Evolvable Multi-Agent Approach to Space Operations Engineering
NASA Technical Reports Server (NTRS)
Mandutianu, Sanda; Stoica, Adrian
1999-01-01
A complex system of spacecraft and ground tracking stations, as well as a constellation of satellites or spacecraft, has to be able to reliably withstand sudden environment changes, resource fluctuations, dynamic resource configuration, limited communication bandwidth, etc., while maintaining the consistency of the system as a whole. It is not known in advance when a change in the environment might occur or when a particular exchange will happen. A higher degree of sophistication for the communication mechanisms between different parts of the system is required. The actual behavior has to be determined while the system is performing and the course of action can be decided at the individual level. Under such circumstances, the solution will highly benefit from increased on-board and on the ground adaptability and autonomy. An evolvable architecture based on intelligent agents that communicate and cooperate with each other can offer advantages in this direction. This paper presents an architecture of an evolvable agent-based system (software and software/hardware hybrids) as well as some plans for further implementation.
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
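The stigmergic rule can be illustrated with a toy one-dimensional version: agents sense only the local state of the structure and deposit a block when the rule fires, with no messages and no blueprint. The grid size, seed block, and deposit rule below are illustrative assumptions, not the prototype's actual algorithm:

```python
# Stigmergic building sketch: coordination happens only through the
# growing structure that every agent can sense.
import random

def stigmergic_build(size=11, steps=500, seed=0):
    rng = random.Random(seed)
    built = {size // 2}              # seed block at the centre
    for _ in range(steps):
        x = rng.randrange(size)      # an agent wanders to a random cell
        # local rule: deposit only if the cell is empty and touches the
        # existing structure; no blueprint, no inter-agent messages
        if x not in built and (x - 1 in built or x + 1 in built):
            built.add(x)
    return built

wall = stigmergic_build()
# the emergent structure is a contiguous wall grown outward from the seed
```

Note that contiguity is guaranteed by the local rule alone: since a block is only ever placed next to the structure, the built set always remains one connected segment, which is the essence of getting global order from local sensing.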
Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu
2017-01-01
In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous device connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation to design and implement this architecture which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices. PMID:28926957
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner.
This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
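The cooperative combination step can be sketched as each behavior scoring every candidate action, with the consensus action maximizing the weighted sum. This is a simplification of the MODT combination the abstract describes; the rover-heading scenario, behaviors, and weights are invented for illustration:

```python
# Consensus action selection: behaviours contribute scores rather than
# competing for exclusive control, so the winner reflects all of them.

def consensus_action(candidates, behaviours, weights):
    def combined(a):
        return sum(w * b(a) for b, w in zip(behaviours, weights))
    return max(candidates, key=combined)

# Candidate headings (degrees) for a rover
candidates = [-30, 0, 30]

# Each behaviour scores a heading in [0, 1]; none has a veto
goal_seek  = lambda a: 1.0 - abs(a - 0) / 90.0   # goal is straight ahead
avoid_rock = lambda a: 0.0 if a == 0 else 1.0    # obstacle dead ahead
stay_level = lambda a: 1.0 - abs(a + 30) / 90.0  # terrain slopes right

best = consensus_action(candidates, [goal_seek, avoid_rock, stay_level],
                        weights=[1.0, 2.0, 1.0])
# the obstacle rules out going straight; the terrain tips it to the left
```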
Plug-In Tutor Agents: Still Pluggin'
ERIC Educational Resources Information Center
Ritter, Steven
2016-01-01
"An Architecture for Plug-in Tutor Agents" (Ritter and Koedinger 1996) proposed a software architecture designed around the idea that tutors could be built as plug-ins for existing software applications. Looking back on the paper now, we can see that certain assumptions about the future of software architecture did not come to be, making…
Systemic risk on different interbank network topologies
NASA Astrophysics Data System (ADS)
Lenzu, Simone; Tedeschi, Gabriele
2012-09-01
In this paper we develop an interbank market with heterogeneous financial institutions that enter into lending agreements on different network structures. Credit relationships (links) evolve endogenously via a fitness mechanism based on agents' performance. By changing an agent's trust in its neighbors' performance, interbank linkages self-organize into very different network architectures, ranging from random to scale-free topologies. We study which network architecture can make the financial system more resilient to random attacks and how systemic risk spreads over the network. To perturb the system, we generate a random attack via a liquidity shock. The bank that is hit is not automatically eliminated; rather, its failure is endogenously driven by its incapacity to raise liquidity in the interbank network. Our analysis shows that a random financial network can be more resilient than a scale-free one when agents are heterogeneous.
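The fitness-driven link formation can be sketched as follows: each borrower picks a lender with probability increasing in the lender's fitness, and a higher trust/sensitivity parameter concentrates links on the fittest banks. The functional form and parameters are illustrative, not the paper's calibration:

```python
# Fitness-based link formation: sensitivity 0 gives a roughly random
# graph, high sensitivity piles links onto a few hub banks.
import random

def form_network(n_banks, sensitivity, seed=1):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_banks)]
    in_degree = [0] * n_banks
    for borrower in range(n_banks):
        # lender chosen with probability proportional to fitness**sensitivity
        weights = [fitness[j] ** sensitivity if j != borrower else 0.0
                   for j in range(n_banks)]
        lender = rng.choices(range(n_banks), weights=weights)[0]
        in_degree[lender] += 1
    return max(in_degree)  # size of the biggest hub

hub_random = form_network(200, sensitivity=0.0)
hub_concentrated = form_network(200, sensitivity=8.0)
```

Comparing the two maximum in-degrees shows the self-organization the abstract describes: the same mechanism spans random-like to hub-dominated (scale-free-like) topologies as the sensitivity parameter varies.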
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which make it easier for the user to command and supervise a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected using an optimal node-reliability factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and optimal hello interval of the node. After selection of agent nodes, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, a comparative analysis is done against conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
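The agent-selection step can be sketched as a weighted combination of the normalized features named in the abstract, with the best-scoring nodes becoming agent nodes. The weights, the direction of each feature, and the node data are illustrative assumptions, not the paper's formula:

```python
# Node-reliability factor sketch: combine normalised performance features
# and pick the top-k nodes as agent nodes.

def node_reliability(degree_diff, norm_distance, energy, mobility,
                     hello_interval, weights=(0.2, 0.2, 0.3, 0.2, 0.1)):
    # all features assumed pre-normalised to [0, 1]; lower mobility,
    # distance and degree difference are better, so they enter inverted
    features = (1 - degree_diff, 1 - norm_distance, energy,
                1 - mobility, hello_interval)
    return sum(w * f for w, f in zip(weights, features))

def select_agents(nodes, k=2):
    scored = sorted(nodes, key=lambda n: node_reliability(*n[1:]),
                    reverse=True)
    return [name for name, *_ in scored[:k]]

nodes = [
    # name, degree_diff, norm_distance, energy, mobility, hello_interval
    ("n1", 0.1, 0.2, 0.9, 0.1, 0.8),
    ("n2", 0.6, 0.7, 0.3, 0.8, 0.4),
    ("n3", 0.2, 0.3, 0.8, 0.2, 0.7),
]
agents = select_agents(nodes)  # the two most reliable nodes
```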
An SNMP-based solution to enable remote ISO/IEEE 11073 technical management.
Lasierra, Nelia; Alesanco, Alvaro; García, José
2012-07-01
This paper presents the design and implementation of an architecture based on the integration of simple network management protocol version 3 (SNMPv3) and the standard ISO/IEEE 11073 (X73) to manage technical information in home-based telemonitoring scenarios. This architecture includes the development of an SNMPv3-proxyX73 agent which comprises a management information base (MIB) module adapted to X73. In the proposed scenario, medical devices (MDs) send information to a concentrator device [designated as compute engine (CE)] using the X73 standard. This information together with extra information collected in the CE is stored in the developed MIB. Finally, the information collected is available for remote access via SNMP connection. Moreover, alarms and events can be configured by an external manager in order to provide warnings of irregularities in the MDs' technical performance evaluation. This proposed SNMPv3 agent provides a solution to integrate and unify technical device management in home-based telemonitoring scenarios fully adapted to X73.
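The proxy agent's role can be sketched as a MIB-like table of device technical data plus manager-configured thresholds that raise alarm events. The object names and threshold semantics below are illustrative, not the actual X73-adapted MIB module:

```python
# Sketch of the SNMPv3-proxyX73 agent's bookkeeping: device data flows
# in from the concentrator, and remotely configured thresholds generate
# alarms on irregular technical behaviour.

class ProxyAgentMIB:
    def __init__(self):
        self.table = {}       # device-id -> {attribute: value}
        self.thresholds = {}  # (device-id, attribute) -> max allowed
        self.alarms = []

    def set_threshold(self, device, attr, maximum):
        """Configured remotely by the external SNMP manager."""
        self.thresholds[(device, attr)] = maximum

    def update(self, device, attr, value):
        """Called when the compute engine relays device data."""
        self.table.setdefault(device, {})[attr] = value
        limit = self.thresholds.get((device, attr))
        if limit is not None and value > limit:
            self.alarms.append((device, attr, value))

mib = ProxyAgentMIB()
mib.set_threshold("pulse-ox-1", "transmission_errors", 5)
mib.update("pulse-ox-1", "transmission_errors", 2)   # within limits
mib.update("pulse-ox-1", "transmission_errors", 9)   # raises an alarm
```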
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent and related technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on the structure of diagnostic decision-making and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge-representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed, the organization and evolution of agents in the system are proposed, and the corresponding conflict-resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault-diagnosis problem of a complex plant, with particular advantages in the distributed domain.
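The blackboard coordination can be sketched with diagnosis agents posting hypotheses and a decision agent resolving conflicts. The agent roles follow the abstract, but the confidence-based resolution rule and the fault names are illustrative assumptions:

```python
# Layered blackboard sketch: diagnosis agents write hypotheses, the
# decision agent reads them all and resolves the conflict.

class Blackboard:
    def __init__(self):
        self.hypotheses = []   # (agent, fault, confidence)

    def post(self, agent, fault, confidence):
        self.hypotheses.append((agent, fault, confidence))

def decision_agent(board):
    """Conflict resolution: keep the most confident diagnosis."""
    agent, fault, conf = max(board.hypotheses, key=lambda h: h[2])
    return fault

board = Blackboard()
board.post("vibration-agent", "bearing wear", 0.70)
board.post("thermal-agent", "coolant leak", 0.55)
board.post("model-agent", "bearing wear", 0.80)
diagnosis = decision_agent(board)
```

Because agents interact only through the shared blackboard, each one can use its own knowledge representation and inference mechanism, which is the decoupling the federation design relies on.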
Automated monitoring of medical protocols: a secure and distributed architecture.
Alsinet, T; Ansótegui, C; Béjar, R; Fernández, C; Manyà, F
2003-03-01
The control of the correct application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, so specialized domain agents are independent of negotiation processes and autonomous system agents perform the monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the exchange of information between agents.
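The "protocol as negotiation between agents" idea can be caricatured in a toy sketch. All names, services, and steps here are invented; the paper's specification language and negotiation semantics are far richer.

```python
# Toy sketch: a monitoring agent asks specialized domain agents to take on
# protocol steps and records which agent (if any) accepted each one.
class DomainAgent:
    def __init__(self, name, services):
        self.name = name
        self.services = set(services)

    def handle(self, request):
        """Accept a request only if this agent offers the service."""
        return request in self.services

class MonitorAgent:
    def __init__(self, agents):
        self.agents = agents

    def run_protocol(self, steps):
        """Negotiate each step with the first agent that accepts it."""
        log = []
        for step in steps:
            performer = next((a.name for a in self.agents if a.handle(step)), None)
            log.append((step, performer))
        return log

lab = DomainAgent("lab", ["blood_test"])
radiology = DomainAgent("radiology", ["x_ray"])
monitor = MonitorAgent([lab, radiology])
print(monitor.run_protocol(["blood_test", "x_ray", "biopsy"]))
# -> [('blood_test', 'lab'), ('x_ray', 'radiology'), ('biopsy', None)]
```

An unclaimed step (`None`) is exactly what a monitoring agent would flag as a protocol violation.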
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Advantages: potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways. First, it can be more reliable ... does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition ... problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse
UPM: unified policy-based network management
NASA Astrophysics Data System (ADS)
Law, Eddie; Saxena, Achint
2001-07-01
Besides network management, it has become essential to offer differentiated Quality of Service (QoS) to Internet users. Policy-based management provides control over network routers to achieve this goal. The Internet Engineering Task Force (IETF) has proposed a two-tier architecture whose implementation is based on the Common Open Policy Service (COPS) protocol and the Lightweight Directory Access Protocol (LDAP). However, this design has several limitations, such as scalability and cross-vendor hardware compatibility. To address these issues, we present a functionally enhanced multi-tier policy management architecture. Several extensions are introduced, adding flexibility and scalability. In particular, an intermediate entity between the policy server and the policy rule database, called the Policy Enforcement Agent (PEA), is introduced. By keeping internal data in a common format, using a standard protocol, and interpreting and translating request and decision messages from multi-vendor hardware, this agent enables a dynamic Unified Information Model throughout the architecture. We have tailored this information system to store policy rules in the directory server and to allow execution of policy rules with dynamic addition of new equipment at run-time.
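The PEA's translation role can be sketched as a normalization function. The vendor message formats below are invented for illustration; real decision messages would be COPS protocol objects, not tuples and strings.

```python
# Hedged sketch of the Policy Enforcement Agent idea: translate vendor-specific
# policy decisions into one unified internal format, so the rest of the
# architecture never sees vendor differences.
def to_unified(vendor, message):
    """Normalize a vendor-specific policy message to a common dict format."""
    if vendor == "vendorA":            # e.g. a positional tuple
        action, target = message
        return {"action": action, "target": target}
    if vendor == "vendorB":            # e.g. a colon-separated string
        action, target = message.split(":")
        return {"action": action, "target": target}
    raise ValueError(f"unknown vendor: {vendor}")

print(to_unified("vendorA", ("permit", "10.0.0.1")))
print(to_unified("vendorB", "deny:10.0.0.2"))
```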
A Participatory Agent-Based Simulation for Indoor Evacuation Supported by Google Glass.
Sánchez, Jesús M; Carrera, Álvaro; Iglesias, Carlos Á; Serrano, Emilio
2016-08-24
Indoor evacuation systems are needed for rescue and safety management. One of the challenges is to provide users with personalized evacuation routes in real time. To this end, this project aims at exploring the possibilities of Google Glass technology for participatory multiagent indoor evacuation simulations. Participatory multiagent simulation combines scenario-guided agents and humans equipped with Google Glass that coexist in a shared virtual space and jointly perform simulations. The paper proposes an architecture for participatory multiagent simulation in order to combine devices (Google Glass and/or smartphones) with an agent-based social simulator and indoor tracking services.
A security architecture for interconnecting health information systems.
Gritzalis, Dimitris; Lambrinoudakis, Costas
2004-03-31
Several hereditary and other chronic diseases necessitate continuous and complicated health care procedures, typically offered in different, often distant, health care units. Inevitably, the medical records of patients suffering from such diseases become complex, grow in size very fast and are scattered all over the units involved in the care process, hindering communication of information between health care professionals. Web-based electronic medical records have been recently proposed as the solution to the above problem, facilitating the interconnection of the health care units in the sense that health care professionals can now access the complete medical record of the patient, even if it is distributed in several remote units. However, by allowing users to access information from virtually anywhere, the universe of ineligible people who may attempt to harm the system is dramatically expanded, thus severely complicating the design and implementation of a secure environment. This paper presents a security architecture that has been mainly designed for providing authentication and authorization services in web-based distributed systems. The architecture has been based on a role-based access scheme and on the implementation of an intelligent security agent per site (i.e. health care unit). This intelligent security agent: (a). authenticates the users, local or remote, that can access the local resources; (b). assigns, through temporary certificates, access privileges to the authenticated users in accordance to their role; and (c). communicates to other sites (through the respective security agents) information about the local users that may need to access information stored in other sites, as well as about local resources that can be accessed remotely.
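The per-site security agent's three duties (a)-(c) can be sketched minimally for the first two. The roles, privileges, and time-to-live below are illustrative assumptions, and a simple expiring grant stands in for the paper's temporary certificates.

```python
import time

class SecurityAgent:
    """Toy per-site security agent: authenticates users and issues
    short-lived, role-scoped access grants (a stand-in for the paper's
    temporary certificates). Names and lifetimes are illustrative."""
    ROLE_PRIVILEGES = {"physician": {"read_record", "write_record"},
                       "nurse": {"read_record"}}

    def __init__(self, credentials):
        self.credentials = credentials   # user -> (password, role)
        self.grants = {}                 # user -> (privileges, expiry time)

    def authenticate(self, user, password, ttl=60.0):
        stored = self.credentials.get(user)
        if stored is None or stored[0] != password:
            return False
        role = stored[1]
        self.grants[user] = (self.ROLE_PRIVILEGES[role], time.time() + ttl)
        return True

    def authorize(self, user, privilege):
        grant = self.grants.get(user)
        return (grant is not None and time.time() < grant[1]
                and privilege in grant[0])

agent = SecurityAgent({"alice": ("s3cret", "nurse")})
agent.authenticate("alice", "s3cret")
print(agent.authorize("alice", "read_record"))   # True
print(agent.authorize("alice", "write_record"))  # False: outside the nurse role
```

Duty (c), propagating user and resource information to peer sites, would add a message exchange between such agents.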
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist for LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client, multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
Ontology-based, multi-agent support of production management
NASA Astrophysics Data System (ADS)
Meridou, Despina T.; Inden, Udo; Rückemann, Claus-Peter; Patrikakis, Charalampos Z.; Kaklamani, Dimitra-Theodora I.; Venieris, Iakovos S.
2016-06-01
In recent years, reported incidents of failed aircraft ramp-ups or delayed production in small lots have increased substantially. In this paper, we present a production management platform that combines agent-based techniques with the Service Oriented Architecture paradigm. This platform takes advantage of the functionality offered by the semantic web language OWL, which allows the users and services of the platform to speak a common language and, at the same time, facilitates risk management and decision making.
Layered Learning in Multi-Agent Systems
1998-12-15
project almost from the beginning has tirelessly experimented with different robot architectures, always managing to pull things together and create ... [Figure: team member agent architecture, showing role assignments (e.g., midfielder, goalie) with home coordinates, home range and max range]
NASA Astrophysics Data System (ADS)
Takeuchi, Eric B.; Rayner, Timothy; Weida, Miles; Crivello, Salvatore; Day, Timothy
2007-10-01
Civilian soft targets such as transportation systems are being targeted by terrorists using IEDs and suicide bombers. Having the capability to remotely detect explosives, precursors and other chemicals would enable these assets to be protected with minimal interruption of the flow of commerce. Mid-IR laser technology offers the potential to detect explosives and other chemicals in real-time and from a safe standoff distance. While many of these agents possess "fingerprint" signatures in the mid-IR (i.e. in the 3-20 micron regime), their effective interrogation by a practical, field-deployable system has been limited by size, complexity, reliability and cost constraints of the base laser technology. Daylight Solutions has addressed these shortcomings by developing compact, portable, broadly tunable mid-IR laser sources based upon external-cavity quantum cascade technology. This technology is now being applied by Daylight in system level architectures for standoff and remote detection of explosives, precursors and chemical agents. Several of these architectures and predicted levels of performance will be presented.
Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.
Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor
2016-01-01
Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables the user to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
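The central DFT component, a dynamic neural field relaxing into an attractor state, can be sketched with a simple Euler integration. All parameters here (resting level, kernel, stimulus) are illustrative choices of ours, not cedar's defaults.

```python
import math

# Minimal one-dimensional dynamic neural field: under a localized stimulus,
# lateral interaction (local excitation, global inhibition) drives the field
# through an instability into a self-stabilized peak, an attractor state.
N, h, tau, dt = 21, -5.0, 10.0, 1.0
u = [h] * N                                   # field activation at resting level h

def f(a):                                     # sigmoid output nonlinearity
    return 1.0 / (1.0 + math.exp(-a))

def kernel(d):                                # local excitation, global inhibition
    return 6.0 * math.exp(-d * d / 8.0) - 2.0

stimulus = [4.0 if 8 <= x <= 12 else 0.0 for x in range(N)]

for _ in range(200):                          # relax the field to its attractor
    out = [f(a) for a in u]
    u = [a + dt / tau * (-a + h + stimulus[x]
                         + sum(kernel(x - k) * out[k] for k in range(N)))
         for x, a in enumerate(u)]

peak = max(range(N), key=lambda x: u[x])
print(peak)                                   # self-stabilized peak at the stimulus center
```

Tuning, in the paper's sense, means choosing parameters like `h` and the kernel amplitudes so that this dynamic regime (peak formation, not runaway spread) holds under the expected inputs.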
Security patterns and a weighting scheme for mobile agents
NASA Astrophysics Data System (ADS)
Walker, Jessie J.
The notion of mobility has always been a prime factor in human endeavor and achievement. This need to migrate has been distilled into software entities that act as our representatives in distant environments. Software agents are developed to act on behalf of a user. Mobile agents were born from the understanding that it is often more useful to move the code (program) to where the resources are located instead of connecting remotely. Within the mobile agent research community, security has traditionally been the defining issue, preventing the paradigm from gaining wide acceptance. There are still numerous difficult problems with very few practical solutions, such as the malicious host and malicious agent problems; these are among the most active areas of research within the community. The major principles, facets, fundamental concepts, techniques and architectures of the field are well understood, as evidenced by the many mobile agent systems developed in the last decade that share common core components such as agent management, communication facilities, and mobility services. In other words, new mobile agent systems and frameworks provide few new insights into agent system architecture, mobility services, coordination, or communication, although in many instances they validate, refine, and demonstrate the reuse of previously proposed mobile agent research elements. Since mobile agent research over the last decade has been defined by security and related issues, our research into security patterns sits within this arena. The research presented in this thesis examines mobile agent security from the standpoint of security patterns documented from the universe of mobile agent systems.
In addition, we explore how these documented security patterns can be quantitatively compared based on a unique weighting scheme. The scheme is formalized into a theory that can be used to improve the development of secure mobile agents and agent-based systems.
A Diversified Investment Strategy Using Autonomous Agents
NASA Astrophysics Data System (ADS)
Barbosa, Rui Pedro; Belo, Orlando
In a previously published article, we presented an architecture for implementing agents with the ability to trade autonomously in the Forex market. At the core of this architecture is an ensemble of classification and regression models that is used to predict the direction of the price of a currency pair. In this paper, we will describe a diversified investment strategy consisting of five agents which were implemented using that architecture. By simulating trades with 18 months of out-of-sample data, we will demonstrate that data mining models can produce profitable predictions, and that the trading risk can be diminished through investment diversification.
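The two ideas in this abstract, an ensemble voting on price direction and capital spread across several agents, can be sketched abstractly. The models are stubbed as fixed predictions here; the authors' agents use trained classification and regression models, and the numbers are invented.

```python
# Illustrative sketch: majority vote over model predictions, and an
# equal-weight portfolio return across independently trading agents.
def ensemble_signal(predictions):
    """Majority vote: +1 = go long, -1 = go short, 0 = no trade."""
    score = sum(predictions)
    return (score > 0) - (score < 0)

def diversified_return(agent_returns):
    """Equal-weight return across agents trading different currency pairs."""
    return sum(agent_returns) / len(agent_returns)

print(ensemble_signal([+1, +1, -1]))                        # -> 1 (long)
print(diversified_return([0.02, -0.01, 0.03, 0.00, 0.01]))  # -> 0.01
```

Diversification reduces risk because the agents' losses and gains on different pairs partially offset, as in the second call above.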
An Application of Artificial Intelligence to the Implementation of Electronic Commerce
NASA Astrophysics Data System (ADS)
Srivastava, Anoop Kumar
In this paper, we present an application of Artificial Intelligence (AI) to the implementation of Electronic Commerce. We provide a multi-autonomous-agent-based framework. Our agent-based architecture leads to flexible design of a spectrum of multiagent systems (MAS) by distributing computation and by providing a unified interface to data and programs. Autonomous agents provide autonomy, simplicity of communication and computation, and a well-developed semantics. The steps of design and implementation are discussed in depth, and the structure of the Electronic Marketplace, an ontology, the agent model, and the interaction patterns between agents are given. We have developed mechanisms for coordination between agents using a language called the Virtual Enterprise Modeling Language (VEML). VEML is an integration of Java and the Knowledge Query and Manipulation Language (KQML). VEML gives application programmers the ability to develop different kinds of MAS based on their requirements and applications. We have implemented a multi-autonomous-agent system called the VE System, and we demonstrate the efficacy of our system by discussing experimental results and its salient features.
Next Generation System and Software Architectures: Challenges from Future NASA Exploration Missions
NASA Technical Reports Server (NTRS)
Sterritt, Roy; Rouff, Christopher A.; Hinchey, Michael G.; Rash, James L.; Truszkowski, Walt
2006-01-01
The four key objective properties required of a system for it to qualify as "autonomic" are now well accepted: self-configuring, self-healing, self-protecting, and self-optimizing, together with the attribute properties self-aware, environment-aware, self-monitoring and self-adjusting. This paper describes the need for next-generation system software architectures, where components are agents rather than objects masquerading as agents, and where support is provided for self-* properties (both the existing self-CHOP properties and emerging self-* properties). These are discussed as exhibited in NASA missions, in particular with reference to a NASA concept mission, ANTS, which is illustrative of future NASA exploration missions based on the technology of intelligent swarms.
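Two of the self-* properties can be illustrated as a tiny agent-level loop. This is our illustration, not NASA code: the component, readings, and limit are invented, and "restart" stands in for a real recovery action.

```python
# Sketch of self-monitoring plus self-healing: the component watches its own
# health reading and recovers a failed subcomponent when the reading is bad.
class AutonomicComponent:
    def __init__(self):
        self.restarts = 0
        self.healthy = True

    def self_monitor(self, reading, limit=100.0):
        """Self-monitoring: compare a sensed reading against a safe limit."""
        self.healthy = reading < limit
        return self.healthy

    def self_heal(self):
        """Self-healing: take a recovery action when unhealthy."""
        if not self.healthy:
            self.restarts += 1          # stand-in for restarting a subcomponent
            self.healthy = True

c = AutonomicComponent()
for reading in [42.0, 130.0, 55.0]:     # one out-of-limit reading
    if not c.self_monitor(reading):
        c.self_heal()
print(c.restarts, c.healthy)            # -> 1 True
```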
Agent Technology, Complex Adaptive Systems, and Autonomic Systems: Their Relationships
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike
2004-01-01
To reduce the cost of future spaceflight missions and to perform new science, NASA has been investigating autonomous ground and space flight systems. These cost-reduction goals are further complicated by nanosatellites for future science data-gathering, which will have large communications delays and at times be out of contact with ground control for extended periods of time. This paper describes two prototype agent-based systems, the Lights-out Ground Operations System (LOGOS) and the Agent Concept Testbed (ACT), and their autonomic properties; both were developed at NASA Goddard Space Flight Center (GSFC) to demonstrate autonomous operations of future space flight missions. The paper discusses the architecture of the two agent-based systems, operational scenarios for both, and the two systems' autonomic properties.
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
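The central shift, tracking trust per long-lived autonomic manager rather than per ephemeral swarm agent, can be sketched with a simple reputation estimate. This is our formulation, not DualTrust's model; the beta-style estimator and manager id are illustrative.

```python
# Sketch: a monitor accumulates good/bad interaction outcomes per autonomic
# manager and derives a trust score. Ephemeral swarm agents need no records;
# only the far smaller set of managers is tracked, which is what makes the
# approach scalable.
class TrustMonitor:
    def __init__(self):
        self.history = {}            # manager id -> [good, bad] counts

    def record(self, manager, ok):
        good, bad = self.history.setdefault(manager, [0, 0])
        self.history[manager] = [good + ok, bad + (not ok)]

    def trust(self, manager):
        """Beta-reputation-style estimate: (good + 1) / (good + bad + 2)."""
        good, bad = self.history.get(manager, [0, 0])
        return (good + 1) / (good + bad + 2)

m = TrustMonitor()
for outcome in [True, True, True, False]:
    m.record("manager-7", outcome)
print(round(m.trust("manager-7"), 2))   # -> 0.67
```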
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture and its current implementation status as a Unix C++ library of reusable objects are described. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and each other while not violating their security.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun
2004-04-01
This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming (EP); (2) conducting research experiments using a larger database of organophosphate nerve agents; and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with SVMs for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Preliminary results show that EP-derived support vector machines designed to operate on distributed systems provide accurate classification results. In addition, the distributed training architectures are 50 times faster than standard iterative training methods.
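The evolutionary-programming loop itself can be sketched in isolation. To keep the example self-contained, the SVM is replaced by a trivial linear decision function and the data are invented toy points, not the organophosphate sensor database; only the mutate-and-select structure of EP is illustrated.

```python
import random

# Sketch of an EP loop: mutate a population of classifier parameter vectors
# with Gaussian noise and keep the lowest-error individuals each generation.
random.seed(0)
data = [((0.2, 0.1), -1), ((0.3, 0.2), -1), ((0.8, 0.9), 1), ((0.7, 0.8), 1)]

def errors(w):
    """Count misclassifications of the linear rule sign(w0*x1 + w1*x2 + w2)."""
    return sum(1 for (x1, x2), y in data
               if (1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else -1) != y)

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for _ in range(50):                       # mutate-and-select generations
    offspring = [[g + random.gauss(0, 0.2) for g in p] for p in population]
    population = sorted(population + offspring, key=errors)[:20]

best = population[0]
print(errors(best))   # the toy data are linearly separable, so this reaches 0
```

In the paper's setting, each individual would encode SVM parameters and fitness evaluation would be the expensive, grid-distributed step.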
Scoping Planning Agents With Shared Models
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy D.; Jonsson, Ari K.; McGann, Conor
2003-01-01
In this paper we provide a formal framework to define the scope of planning agents based on a single declarative model. Having multiple agents share a single model provides numerous advantages that lead to reduced development costs and increased reliability of the system. We formally define planning in terms of extensions of an initial partial plan, and a set of flaws that make the plan unacceptable. A Flaw Filter (FF) allows us to identify those flaws relevant to an agent. Flaw filters motivate the Plan Identification Function (PIF), which specifies when an agent is ready to hand control to another agent for further work. PIFs define a set of plan extensions that can be generated from a model and a plan request. FFs and PIFs can be used to define the scope of agents without changing the model. We describe an implementation of PIFs and FFs within the context of EUROPA, a constraint-based planning architecture, and show how it can be used to easily design many different agents.
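The two formal devices can be rendered as a toy sketch. This is a drastic simplification of the paper's definitions: flaws are plain tuples, the scope and flaw types are invented, and the PIF is reduced to "no relevant flaw remains".

```python
# Toy rendering: a plan carries flaws; a Flaw Filter (FF) keeps only the
# flaws in an agent's scope; the Plan Identification Function (PIF) says the
# agent is ready to hand off when no relevant flaw remains, even if
# out-of-scope flaws persist for another agent to resolve.
def flaw_filter(flaws, scope):
    """FF: keep only flaws whose type lies in this agent's scope."""
    return [fl for fl in flaws if fl[0] in scope]

def plan_identified(flaws, scope):
    """PIF (simplified): True when the agent sees no flaw it must resolve."""
    return not flaw_filter(flaws, scope)

plan_flaws = [("open_goal", "comm_window"), ("unbound_var", "heater_state")]
nav_scope = {"unbound_var"}                       # this agent's responsibility

print(flaw_filter(plan_flaws, nav_scope))         # the one flaw it must fix
print(plan_identified(plan_flaws, nav_scope))     # -> False, work remains
print(plan_identified([("open_goal", "comm_window")], nav_scope))  # -> True
```

Changing only `nav_scope` re-scopes the agent without touching the shared model, which is the point of the framework.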
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time by three agents at a rate of 9 fps in 48 MHz operation.
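The attractor selection model behind this system can be sketched in one dimension. This is our toy formulation with illustrative parameters, not the paper's VLSI implementation: the state is pulled toward an attractor when an "activity" (fitness) signal is high and drifts under noise when activity is low, which is what lets agents re-select attractors after environmental change.

```python
import random

# Minimal attractor selection sketch: dx/dt = activity * drift(x) + noise,
# with stable attractors at x = -1 and x = +1 and an unstable point at 0.
random.seed(1)

def step(x, activity, dt=0.1):
    drift = 4.0 * x * (1.0 - x * x)        # attractors at x = -1 and x = +1
    return x + dt * (activity * drift + random.gauss(0.0, 0.05))

x = 0.1                                    # start near the unstable point
for _ in range(300):
    activity = 1.0 if x > 0 else 0.05      # suppose only x = +1 "pays off"
    x = step(x, activity)
print(round(x, 2))                         # settles at the rewarded attractor
```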
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
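The ATG behavior model with token-queue processing can be caricatured compactly. The activities, guard, and data below are invented examples, not the paper's structural-monitoring agents, and code morphing and migration are omitted.

```python
from collections import deque

# Compact sketch of an activity-transition-graph (ATG) agent: a set of
# activities plus guarded transitions, processed from a token queue
# (loosely Petri-net-like, as in the abstract).
class ATGAgent:
    def __init__(self, transitions, data):
        self.transitions = transitions   # activity -> (guard, next activity)
        self.data = data                 # the agent's data state
        self.tokens = deque(["sense"])   # initial activity token
        self.trace = []

    def run(self):
        while self.tokens:
            activity = self.tokens.popleft()
            self.trace.append(activity)
            rule = self.transitions.get(activity)
            if rule:
                guard, nxt = rule
                if guard(self.data):     # fire the transition if its guard holds
                    self.tokens.append(nxt)
        return self.trace

agent = ATGAgent(
    transitions={"sense": (lambda d: d["strain"] > d["limit"], "report")},
    data={"strain": 7.2, "limit": 5.0})
print(agent.run())   # -> ['sense', 'report']
```

In the paper, the whole structure (data state plus ATG) is serialized into self-contained program code that can rewrite itself and migrate between nodes.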
A New Multi-Agent Approach to Adaptive E-Education
NASA Astrophysics Data System (ADS)
Chen, Jing; Cheng, Peng
Improving the degree of customer satisfaction is important in e-Education. This paper describes a new approach to adaptive e-Education taking into account the full spectrum of Web service techniques and activities. It presents a multi-agent architecture based on artificial psychology techniques, which makes the e-Education process both adaptive and dynamic, and hence up-to-date. Knowledge base techniques are used to support the e-Education process, and artificial psychology techniques to deal with user psychology, which makes the e-Education system more effective and satisfying.
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides the implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
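The layering idea in this abstract, with a transport layer hidden beneath task-specific agents, can be sketched minimally. The `Courier` and `DiagnosticAgent` names are invented for illustration and are not the SOCIAL, metaCourier, or metaAgents API.

```python
# Hypothetical sketch of the layering idea: a low-level courier handles
# message transport and routing, while agents built on top only declare
# handlers and task-specific reasoning. Names are illustrative, not the
# SOCIAL/metaCourier API.
class Courier:
    def __init__(self):
        self.registry = {}
    def register(self, name, handler):
        self.registry[name] = handler
    def send(self, to, msg):
        return self.registry[to](msg)   # transport detail hidden from agents

class DiagnosticAgent:
    def __init__(self, courier, name):
        courier.register(name, self.handle)
    def handle(self, msg):
        # task-specific diagnostic reasoning plugged on top of the transport
        return "overheat" if msg["temp"] > 90 else "nominal"

net = Courier()
DiagnosticAgent(net, "diag1")
print(net.send("diag1", {"temp": 95}))  # -> overheat
```

The design point is the one the abstract makes: the agent developer writes only `handle`, while communication and addressing live in the lower layer.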
NASA Technical Reports Server (NTRS)
Wakim, Nagi T.; Srivastava, Sadanand; Bousaidi, Mehdi; Goh, Gin-Hua
1995-01-01
Agent-based technologies address several challenges posed by the additional information processing requirements of today's computing environments. In particular, (1) users desire interaction with computing devices in a mode similar to that used between people, (2) the efficient and successful completion of information processing tasks often requires a high level of expertise in complex and multiple domains, and (3) information processing tasks often require handling large volumes of data and, therefore, continuous and ongoing processing activities. The concept of an agent is an attempt to address these new challenges by introducing information processing environments in which (1) users can communicate with a system in a natural way, (2) an agent is a specialist and a self-learner and therefore qualifies to be trusted to perform tasks independently of the human user, and (3) an agent is an entity that is continuously active, performing tasks that are either delegated to it or self-imposed. The work described in this paper focuses on the development of an interface agent for users of a complex information processing environment (IPE). This activity is part of an ongoing effort to build a model for developing agent-based information systems. Such systems will be highly applicable to environments that require a high degree of automation, such as flight control operations, and/or the processing of large volumes of data in complex domains, such as the EOSDIS environment and other multidisciplinary scientific data systems. The concept of an agent as an information processing entity is fully described, with emphasis on characteristics of special interest to the User-System Interface Agent (USIA). Issues such as agent 'existence' and 'qualification' are also discussed in this paper.
Based on a definition of an agent and its main characteristics, we propose an architecture for the development of interface agents for users of an agent-oriented IPE whose resources are likely to be distributed and heterogeneous in nature. The architecture of USIA comprises two main components: (1) the user interface, which is concerned with issues such as user dialog and interaction, user modeling, and adaptation to the user profile, and (2) the system interface, which deals with identification of IPE capabilities, task understanding and feasibility assessment, and task delegation and coordination of assistant agents.
Model-free learning on robot kinematic chains using a nested multi-agent topology
NASA Astrophysics Data System (ADS)
Karigiannis, John N.; Tzafestas, Costas S.
2016-11-01
This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree of freedom (DOF) is finally mapped onto a distinct agent. Each of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state-action domain. This paper in fact constitutes a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations, including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology.
Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, where new agents can be recursively added in the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions which emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core remains time-consuming, which limits its application. Therefore, the second part of this research focuses on designing specialized hardware, based on reconfigurable computing techniques, to accelerate AGENT computations. This is the first time an application of this type has been developed for reactor physics and neutron transport in reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based acceleration design achieves high performance at a much lower working frequency than CPUs. Design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about a factor of 20.
The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and opening the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.
Dual Rationality and Deliberative Agents
NASA Astrophysics Data System (ADS)
Debenham, John; Sierra, Carles
Human agents deliberate using models based on reason for only a minute proportion of the decisions that they make. In stark contrast, the deliberation of artificial agents is heavily dominated by formal models based on reason, such as game theory, decision theory and logic, despite the fact that formal reasoning will not necessarily lead to superior real-world decisions. Further, the Nobel Laureate Friedrich Hayek warns us of the 'fatal conceit' of controlling deliberative systems using models based on reason, as the particular model chosen will then shape the system's future and either impede, or eventually destroy, the subtle evolutionary processes that are an integral part of human systems and institutions and are crucial to their evolution and long-term survival. We describe an architecture for artificial agents that is founded on Hayek's two rationalities and supports the two forms of deliberation used by mankind.
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
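The formal specification style used by FLAME maps agent behaviour onto synchronized output and input phases over message lists, a structure that parallelizes naturally. The sketch below illustrates that message-board pattern in plain Python; it is not the FLAME or FLAME GPU API, and the flocking-toward-centre rule is an invented example.

```python
# Minimal sketch of the message-board pattern used by FLAME-style ABM
# (illustrative only; not the FLAME or FLAME GPU API). Agents first post
# messages to a board (output phase), then all agents read the board to
# compute their next state (input phase). Because no agent reads another
# agent's state directly, the two phases are trivially data-parallel.
agents = [{"id": i, "x": float(i)} for i in range(4)]

def step(agents):
    board = [{"id": a["id"], "x": a["x"]} for a in agents]   # output phase
    new = []
    for a in agents:                                          # input phase
        others = [m["x"] for m in board if m["id"] != a["id"]]
        centre = sum(others) / len(others)
        new.append({"id": a["id"], "x": a["x"] + 0.5 * (centre - a["x"])})
    return new

for _ in range(10):
    agents = step(agents)
print([round(a["x"], 2) for a in agents])  # prints [1.5, 1.5, 1.5, 1.5]
```

On a GPU the two phases become kernels over agent and message arrays, which is why this formulation avoids the steep learning curve of hand-written parallel code.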
A Real-Time Rover Executive Based on Model-Based Reactive Planning
NASA Technical Reports Server (NTRS)
Bias, M. Bernardine; Lemai, Solange; Muscettola, Nicola; Korsmeyer, David (Technical Monitor)
2003-01-01
This paper reports on the experimental verification of the ability of IDEA (Intelligent Distributed Execution Architecture) to operate effectively at multiple levels of abstraction in an autonomous control system. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting control agents, each organized around the same fundamental structure. Two IDEA agents, a system-level agent and a mission-level agent, are designed and implemented to autonomously control the K9 rover in real time. The system is evaluated in a scenario where the rover must acquire images from a specified set of locations. The IDEA agents are responsible for enabling the rover to achieve its goals while monitoring the execution and safety of the rover and recovering from dangerous states when necessary. Experiments carried out both in simulation and on the physical rover produced highly promising results.
Learning Agents for Autonomous Space Asset Management (LAASAM)
NASA Astrophysics Data System (ADS)
Scally, L.; Bonato, M.; Crowder, J.
2011-09-01
Current and future space systems will continue to grow in complexity and capabilities, creating a formidable challenge to monitor, maintain, and utilize these systems and manage their growing network of space and related ground-based assets. Integrated System Health Management (ISHM), and in particular, Condition-Based System Health Management (CBHM), is the ability to manage and maintain a system using dynamic real-time data to prioritize, optimize, maintain, and allocate resources. CBHM entails the maintenance of systems and equipment based on an assessment of current and projected conditions (situational and health related conditions). A complete, modern CBHM system comprises a number of functional capabilities: sensing and data acquisition; signal processing; conditioning and health assessment; diagnostics and prognostics; and decision reasoning. In addition, an intelligent Human System Interface (HSI) is required to provide the user/analyst with relevant context-sensitive information, the system condition, and its effect on overall situational awareness of space (and related) assets. Colorado Engineering, Inc. (CEI) and Raytheon are investigating and designing an Intelligent Information Agent Architecture that will provide a complete range of CBHM and HSI functionality from data collection through recommendations for specific actions. The research leverages CEI’s expertise with provisioning management network architectures and Raytheon’s extensive experience with learning agents to define a system to autonomously manage a complex network of current and future space-based assets to optimize their utilization.
Activity-Centric Approach to Distributed Programming
NASA Technical Reports Server (NTRS)
Levy, Renato; Satapathy, Goutam; Lang, Jun
2004-01-01
The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity-centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support from other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to the coupling of the programming and service layers of the software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of those services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable to any configuration of Cybele.
Collected Papers of the Soar/IFOR Project. Spring 1994
1994-04-25
Agents, assemblers, and ANTS: scheduling assembly with market and biological software mechanisms
NASA Astrophysics Data System (ADS)
Toth-Fejel, Tihamer T.
2000-06-01
Nanoscale assemblers will need robust, scalable, flexible, and well-understood mechanisms such as software agents to control them. This paper discusses assemblers and agents, and proposes a taxonomy of their possible interaction. Molecular assembly is seen as a special case of general assembly, subject to many of the same issues, such as the advantages of convergent assembly, and the problem of scheduling. This paper discusses the contract net architecture of ANTS, an agent-based scheduling application under development. It also describes an algorithm for least commitment scheduling, which uses probabilistic committed capacity profiles of resources over time, along with realistic costs, to provide an abstract search space over which the agents can wander to quickly find optimal solutions.
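The contract-net interaction described for ANTS can be sketched in a few lines. The agent names, the task encoding, and the cost formula below are all invented for illustration; the real scheduler uses probabilistic committed-capacity profiles and realistic costs.

```python
# Hedged sketch of contract-net task allocation in the style of an
# ANTS-like scheduler: a manager announces a task, resource agents bid
# with a cost derived from their committed-capacity profile, and the
# cheapest bid wins. All names and the cost formula are illustrative.
class ResourceAgent:
    def __init__(self, name, committed):
        self.name = name
        self.committed = committed            # committed capacity per time slot

    def bid(self, task):
        # cost grows with how committed the resource already is in the
        # slots the task would occupy (least-commitment flavour)
        load = sum(self.committed[t] for t in task["slots"])
        return {"agent": self.name, "cost": task["work"] + load}

def award(task, agents):
    bids = [a.bid(task) for a in agents]
    return min(bids, key=lambda b: b["cost"])

agents = [ResourceAgent("cell-1", [3, 3, 0]),
          ResourceAgent("cell-2", [0, 1, 4])]
task = {"work": 2, "slots": [0, 1]}
print(award(task, agents))  # cell-2 wins: it is less committed in slots 0-1
```

Preferring the least-committed resource keeps the schedule flexible, which is the essence of the least-commitment strategy the abstract describes.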
Design Principles of an Open Agent Architecture for Web-Based Learning Community.
ERIC Educational Resources Information Center
Jin, Qun; Ma, Jianhua; Huang, Runhe; Shih, Timothy K.
A Web-based learning community involves much more than putting learning materials onto a Web site. It can be seen as a complex virtual organization involving people, facilities, and a cyber-environment. Tremendous work and manpower are required for maintaining, upgrading, and managing the facilities and the cyber-environment. There is presented an…
Design, Implementation and Case Study of WISEMAN: WIreless Sensors Employing Mobile AgeNts
NASA Astrophysics Data System (ADS)
González-Valenzuela, Sergio; Chen, Min; Leung, Victor C. M.
We describe the practical implementation of Wiseman: our proposed scheme for running mobile agents in wireless sensor networks. Wiseman's architecture derives from a much earlier agent system originally conceived for distributed process coordination in wired networks. Given the memory constraints associated with small sensor devices, we revised the architecture of the original agent system to make it applicable to this type of network. Agents are programmed as compact text scripts that are interpreted at the sensor nodes. Wiseman is currently implemented in TinyOS ver. 1; its binary image occupies 19 Kbytes of ROM, and it requires 3 Kbytes of RAM to operate. We describe the rationale behind Wiseman's interpreter architecture and its unique programming features that can help reduce packet overhead in sensor networks. In addition, we gauge the proposed system's efficiency in terms of task duration with different network topologies through a case study involving an early-fire-detection application in a fictitious forest setting.
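The idea of an agent as a compact, interpretable text script that hops between nodes can be shown with a toy interpreter. The opcodes (`read`, `hop`, `max`) and the script syntax below are invented and are not Wiseman's actual language.

```python
# Illustrative toy interpreter for text-script mobile agents, in the
# spirit of Wiseman's compact scripts (the opcodes here are invented,
# not Wiseman's actual language). An agent is just a command string
# plus a data dictionary that travels from node to node.
def run(agent, nodes, at):
    script, data = agent
    for cmd in script.split(";"):
        op, _, arg = cmd.partition(" ")
        if op == "read":                 # sample the local sensor
            data.setdefault("samples", []).append(nodes[at][arg])
        elif op == "hop":                # migrate to another node
            at = arg
        elif op == "max":                # reduce the collected samples
            data["result"] = max(data["samples"])
    return at, data

nodes = {"n1": {"temp": 21}, "n2": {"temp": 35}}
agent = ("read temp;hop n2;read temp;max", {})
at, data = run(agent, nodes, "n1")
print(at, data["result"])  # prints: n2 35
```

Shipping one short script that aggregates in-network, instead of streaming every reading to a sink, is the packet-overhead saving the abstract alludes to.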
Lessons Learned from Autonomous Sciencecraft Experiment
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Sherwood, Rob; Tran, Daniel; Cichy, Benjamin; Rabideau, Gregg; Castano, Rebecca; Davies, Ashley; Mandl, Dan; Frye, Stuart; Trout, Bruce;
2005-01-01
An Autonomous Science Agent has been flying onboard the Earth Observing One spacecraft since 2003. This software enables the spacecraft to autonomously detect and respond to science events occurring on the Earth, such as volcanoes, flooding, and snow melt. The package includes AI-based software systems that perform science data analysis, deliberative planning, and run-time robust execution. This software is in routine use to fly the EO-1 mission. In this paper we briefly review the agent architecture and discuss lessons learned from this multi-year flight effort pertinent to the deployment of software agents in critical applications.
NASA Astrophysics Data System (ADS)
Williams, Mary-Anne
This paper uses robot experience to explore key concepts of autonomy, life and being. Unfortunately, there are no widely accepted definitions of autonomy, life or being. Using a new cognitive agent architecture, we argue that autonomy is a key ingredient for both life and being, and set about exploring autonomy as a concept and a capability. Some schools of thought regard autonomy as the key characteristic that distinguishes a system from an agent; agents are systems with autonomy, but rarely is a definition of autonomy provided. Living entities are autonomous systems, and autonomy is vital to life. Intelligence presupposes autonomy too: what would it mean for a system to be intelligent but not exhibit any form of genuine autonomy? Our philosophical, scientific and legal understanding of autonomy and its implications is immature, and as a result progress towards designing, building, managing, exploiting and regulating autonomous systems is hindered. In response, we put forward a framework for exploring autonomy as a concept and capability based on a new cognitive architecture. Using this architecture, tools and benchmarks can be developed to analyze and study autonomy in its own right, as a means to further our understanding of autonomous systems, life and being. This endeavor would lead to important practical benefits for autonomous systems design and help determine the legal status of autonomous systems. It is only with a new enabling understanding of autonomy that the dream of Artificial Intelligence and Artificial Life can be realized. We argue that designing systems with genuine autonomy capabilities can be achieved by focusing on agent experiences of being, rather than attempting to encode human experiences as symbolic knowledge and know-how in the artificial agents we build.
Synthetic Ni3S2/Ni hybrid architectures as potential contrast agents in MRI
NASA Astrophysics Data System (ADS)
Ma, J.; Chen, K.
2016-04-01
Traditional magnetic resonance imaging (MRI) contrast agents mainly comprise superparamagnetic (SPM) iron oxide nanoparticles as T2 contrast agents for the liver and paramagnetic Gd(III)-chelates as T1 contrast agents for all organs. In this work, weakly ferromagnetic kale-like and SPM cabbage-like Ni3S2@Ni hybrid architectures were synthesized and evaluated as potential T1 MRI contrast agents. Their relatively small r2/r1 ratios of 2.59 and 2.38, and high r1 values of 11.27 and 4.89 mmol^-1 L s^-1 (for the kale-like and cabbage-like Ni3S2@Ni, respectively), will shed light on the development of new types of MRI contrast agents.
Metareasoning and Social Evaluations in Cognitive Agents
NASA Astrophysics Data System (ADS)
Pinyol, Isaac; Sabater-Mir, Jordi
Reputation mechanisms have been recognized as one of the key technologies for designing multi-agent systems. They are especially relevant in complex open environments, serving as a non-centralized mechanism to control interactions among agents. Cognitive agents operating in such complex societies must use reputation information not only to select partners to interact with, but also in metareasoning processes that change their reasoning rules. This is the focus of this paper. We argue for the necessity of allowing, as cognitive systems designers, a certain degree of freedom in the reasoning rules of the agents, and we describe cognitive approaches to agency that support this idea. Furthermore, taking as a base the computational reputation model Repage and its integration in a BDI architecture, we use these ideas to specify metarules and processes that modify the reasoning paths of the agent at run-time. Concretely, we propose a metarule to update the link between Repage and the belief base, and a metarule and a process to update an axiom incorporated in the belief logic of the agent. For this last point we also provide empirical results that show the evolution of agents that use it.
Architecture for Adaptive Intelligent Systems
NASA Technical Reports Server (NTRS)
Hayes-Roth, Barbara
1993-01-01
We identify a class of niches to be occupied by 'adaptive intelligent systems (AISs)'. In contrast with niches occupied by typical AI agents, AIS niches present situations that vary dynamically along several key dimensions: different combinations of required tasks, different configurations of available resources, contextual conditions ranging from benign to stressful, and different performance criteria. We present a small class hierarchy of AIS niches that exhibit these dimensions of variability and describe a particular AIS niche, ICU (intensive care unit) patient monitoring, which we use for illustration throughout the paper. We have designed and implemented an agent architecture that supports all of these different kinds of adaptation by exploiting a single underlying theoretical concept: an agent dynamically constructs explicit control plans to guide its choices among situation-triggered behaviors. We illustrate the architecture and its support for adaptation with examples from Guardian, an experimental agent for ICU monitoring.
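The core concept, an explicit control plan selecting among situation-triggered behaviors, can be sketched compactly. The behavior names, trigger conditions, and monitoring values below are hypothetical and are not taken from the Guardian system.

```python
# Sketch of the control-plan idea (hypothetical names, not the Guardian
# code): several behaviors are triggered by the current situation, and an
# explicit, dynamically chosen control plan decides which one runs.
behaviors = {
    "raise_alarm":            lambda s: s["hr"] > 140,
    "log_vitals":             lambda s: True,
    "adjust_alert_threshold": lambda s: s["stress"] == "high",
}

def act(situation, control_plan):
    triggered = [b for b, cond in behaviors.items() if cond(situation)]
    # the control plan is an ordered preference over triggered behaviors
    for preferred in control_plan:
        if preferred in triggered:
            return preferred
    return triggered[0]

calm_plan = ["log_vitals", "raise_alarm"]
crisis_plan = ["raise_alarm", "adjust_alert_threshold", "log_vitals"]
s = {"hr": 150, "stress": "high"}
print(act(s, calm_plan), act(s, crisis_plan))  # prints: log_vitals raise_alarm
```

Swapping the plan, not the behaviors, changes what the agent does in the same situation, which is how a single mechanism can support adaptation along several niche dimensions.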
Virtual odors to transmit emotions in virtual agents
NASA Astrophysics Data System (ADS)
Delgado-Mata, Carlos; Aylett, Ruth
2003-04-01
In this paper we describe an emotional-behavioral architecture. The emotional engine sits at a higher layer than the behavior system and can alter behavior patterns. The engine is designed to simulate emotionally intelligent agents in a virtual environment, where each agent senses its own emotions and other creatures' emotions through a virtual smell sensor, senses obstacles and other moving creatures in the environment, and reacts to them. The architecture consists of an emotion engine, a behavior synthesis system, a motor layer, and a library of sensors.
Duff, Armin; Fibla, Marti Sanchez; Verschure, Paul F M J
2011-06-30
Intelligence depends on the ability of the brain to acquire and apply rules and representations. At the neuronal level these properties have been shown to depend critically on the prefrontal cortex. Here we present, in the context of the Distributed Adaptive Control (DAC) architecture, a biologically based model for flexible control and planning based on key physiological properties of the prefrontal cortex, i.e., reward-modulated sustained activity and plasticity of lateral connectivity. We test the model in a series of pertinent tasks, including multiple T-mazes and the Tower of London, which are standard experimental tasks for assessing flexible control and planning. We show that the model is able both to acquire and express rules that capture the properties of the task and to adapt quickly to changes. Further, we demonstrate that this biomimetic, self-contained cognitive architecture generalizes to planning. In addition, we analyze the extended DAC architecture, called DAC 6, as a model that can be applied to the creation of intelligent and psychologically believable synthetic agents.
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems: it is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks, known as Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing the carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective way to model complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
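An exploratory agent-based model of this kind can be sketched in miniature. All numbers, the usage-policy rule, and the emission factor below are invented purely to illustrate the comparison of a network with and without the policy agent.

```python
# Exploratory-ABM-style toy (all numbers and rules invented): machine
# agents draw power each tick, and a usage-policy agent puts idle
# machines to sleep, reducing the accumulated carbon footprint.
import random

def simulate(policy_on, ticks=100, machines=10, seed=7):
    rng = random.Random(seed)          # fixed seed: reproducible comparison
    footprint = 0.0
    for _ in range(ticks):
        for _ in range(machines):
            busy = rng.random() < 0.3          # 30% chance a machine is busy
            watts = 120 if busy else 60        # idle machines still draw power
            if policy_on and not busy:
                watts = 5                      # policy agent enforces sleep
            footprint += watts * 0.0005        # kg CO2 per watt-tick (made up)
    return round(footprint, 1)

base, managed = simulate(False), simulate(True)
print(base, managed)   # the managed network accumulates less CO2
```

Running the same seeded model with the policy toggled on and off is the exploratory-modeling move: the comparison, not the absolute numbers, is the result.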
Intelligent agents for adaptive security market surveillance
NASA Astrophysics Data System (ADS)
Chen, Kun; Li, Xin; Xu, Baoxun; Yan, Jiaqi; Wang, Huaiqing
2017-05-01
Market surveillance systems have increasingly gained in usage for monitoring trading activities in stock markets to maintain market integrity. Existing systems primarily focus on the numerical analysis of market activity data and generally ignore textual information. To fulfil the requirements of information-based surveillance, a multi-agent-based architecture that uses agent intercommunication and incremental learning mechanisms is proposed to provide a flexible and adaptive inspection process. A prototype system is implemented using the techniques of text mining and rule-based reasoning, among others. Based on experiments in the scalping surveillance scenario, the system can identify target information evidence up to 87.50% of the time and automatically identify 70.59% of cases depending on the constraints on the available information sources. The results of this study indicate that the proposed information surveillance system is effective. This study thus contributes to the market surveillance literature and has significant practical implications.
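The combination of numeric trade analysis and textual evidence can be illustrated with a toy rule. The scalping pattern encoded below (promote, accumulate, then dump) and its thresholds are invented for illustration and are not the paper's actual rules.

```python
# Simplified sketch of rule-based inspection over mixed evidence
# (numeric trades plus text), loosely following the information-based
# surveillance idea; the rule and thresholds are invented.
def scalping_alert(trades, news):
    burst = sum(t["qty"] for t in trades if t["side"] == "buy") > 1000
    touted = any("strong buy" in n.lower() for n in news)
    sold = any(t["side"] == "sell" for t in trades)
    return burst and touted and sold   # promote, accumulate, then dump

trades = [{"side": "buy", "qty": 800}, {"side": "buy", "qty": 400},
          {"side": "sell", "qty": 1200}]
news = ["Analyst blog: STRONG BUY on XYZ!"]
print(scalping_alert(trades, news))  # prints: True -> flag for human review
```

The point of fusing both sources is visible even here: the same trades with no accompanying promotion text would not fire the rule.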
Learning classifier systems for single and multiple mobile robots in unstructured environments
NASA Astrophysics Data System (ADS)
Bay, John S.
1995-12-01
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
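The wiped-clean message board at the heart of the LCS can be shown in a toy form. This sketch omits the discovery (genetic) component and the strength-update bookkeeping entirely; the classifiers and their strengths are hand-written for illustration.

```python
# Minimal sketch of a classifier-system message board (a toy: no genetic
# discovery component, hand-written rules and strengths). Each cycle the
# board is rebuilt from the detector messages alone, so no state persists
# between cycles except the rule strengths themselves.
classifiers = [
    {"if": "obstacle_left", "then": "turn_right", "strength": 1.0},
    {"if": "obstacle_left", "then": "stop",       "strength": 0.4},
    {"if": "goal_ahead",    "then": "forward",    "strength": 0.8},
]

def cycle(detector_msgs):
    board = list(detector_msgs)          # board wiped and rebuilt each cycle
    matched = [c for c in classifiers if c["if"] in board]
    if not matched:
        return None
    winner = max(matched, key=lambda c: c["strength"])  # auction by strength
    return winner["then"]

print(cycle(["obstacle_left"]), cycle(["goal_ahead"]), cycle(["clear"]))
# prints: turn_right forward None
```

The DLCS generalization described above would, in this picture, let detector messages arrive over a broadcast channel from other robots rather than only from the agent's own sensors.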
Developing a Qualia-Based Multi-Agent Architecture for Use in Malware Detection
2010-03-01
Computing architecture for autonomous microgrids
Goldsmith, Steven Y.
2015-09-29
A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.
A cognitive computational model inspired by the immune system response.
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases, setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output) and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. The architecture also focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminology. The paper presents a descriptive model of the immune system, to figure out the nature of the response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on the hepatitis C virus, meanwhile illustrating our proposed architecture perspective.
Resource allocation and supervisory control architecture for intelligent behavior generation
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason
2003-09-01
In earlier research the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. With the growth of the number of available resources on the robot, the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reaction also grew. As continuation of our past efforts, in this paper, we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior generating agents) generate behaviors to be executed via these services. The agents are prioritized and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
MRI based on iron oxide nanoparticles contrast agents: effect of oxidation state and architecture
NASA Astrophysics Data System (ADS)
Javed, Yasir; Akhtar, Kanwal; Anwar, Hafeez; Jamil, Yasir
2017-11-01
Iron oxide nanoparticles (IONPs) are employed extensively, beyond regenerative medicine and into imaging disciplines, because they are excellent constituents for magneto-responsive nano-systems. Their unique superparamagnetic behavior makes IONPs very suitable for hyperthermia and imaging applications. Over the last decade, versatile surface functionalization, efficient contrast properties and biocompatibility have made IONPs an essential imaging contrast agent for magnetic resonance imaging (MRI). IONPs show signals for both longitudinal relaxation and transverse relaxation; therefore, negative contrast as well as dual contrast can be used for imaging in MRI. In the current review, we focus on the different oxidation states of iron oxide, i.e., magnetite, maghemite and hematite, and their T1 and T2 contrast enhancement properties. We also discuss different factors (synthesis protocols, biocompatibility, toxicity, architecture, etc.) that can affect the contrast properties of IONPs.
Multi-agent systems and their applications
Xie, Jing; Liu, Chen-Ching
2017-07-14
The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on system architectures, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.
Web-based health care agents; the case of reminders and todos, too (R2Do2).
Silverman, B G; Andonyadis, C; Morales, A
1998-11-01
This paper describes efforts to develop and field an agent-based, healthcare middleware framework that securely connects practice rule sets to patient records to anticipate health todo items and to remind and alert users about these items over the web. Reminders and todos, too (R2Do2) is an example of merging data- and document-centric architectures, and of integrating agents into patient-provider collaboration environments. A test of this capability verifies that R2Do2 is progressing toward its two goals: (1) an open standards framework for middleware in the healthcare field; and (2) an implementation of the 'principle of optimality' to derive the best possible health plans for each user. This paper concludes with lessons learned to date.
Inconsistency as a diagnostic tool in a society of intelligent agents.
McShane, Marjorie; Beale, Stephen; Nirenburg, Sergei; Jarrell, Bruce; Fantry, George
2012-07-01
To use the detection of clinically relevant inconsistencies to support the reasoning capabilities of intelligent agents acting as physicians and tutors in the realm of clinical medicine. We are developing a cognitive architecture, OntoAgent, that supports the creation and deployment of intelligent agents capable of simulating human-like abilities. The agents, which have a simulated mind and, if applicable, a simulated body, are intended to operate as members of multi-agent teams featuring both artificial and human agents. The agent architecture and its underlying knowledge resources and processors are being developed in a sufficiently generic way to support a variety of applications. We show how several types of inconsistency can be detected and leveraged by intelligent agents in the setting of clinical medicine. The types of inconsistencies discussed include: test results not supporting the doctor's hypothesis; the results of a treatment trial not supporting a clinical diagnosis; and information reported by the patient not being consistent with observations. We show the opportunities afforded by detecting each inconsistency, such as rethinking a hypothesis, reevaluating evidence, and motivating or teaching a patient. Inconsistency is not always the absence of the goal of consistency; rather, it can be a valuable trigger for further exploration in the realm of clinical medicine. The OntoAgent cognitive architecture, along with its extensive suite of knowledge resources and processors, is sufficient to support sophisticated agent functioning such as detecting clinically relevant inconsistencies and using them to benefit patient-centered medical training and practice. Copyright © 2012 Elsevier B.V. All rights reserved.
The Emergence of Agent-Based Technology as an Architectural Component of Serious Games
NASA Technical Reports Server (NTRS)
Phillips, Mark; Scolaro, Jackie; Scolaro, Daniel
2010-01-01
The evolution of games as an alternative to traditional simulations in the military context has been gathering momentum over the past five years, even though the exploration of their use in the serious sense has been ongoing since the mid-nineties. Much of the focus has been on the aesthetics of the visuals provided by the core game engine as well as the artistry provided by talented development teams to produce not only breathtaking artwork, but highly immersive game play. Consideration of game technology is now so much a part of the modeling and simulation landscape that it is becoming difficult to distinguish traditional simulation solutions from game-based approaches. But games have yet to provide the much needed interactive free play that has been the domain of semi-autonomous forces (SAF). The component-based middleware architecture that game engines provide promises a great deal in terms of options for the integration of agent solutions to support the development of non-player characters that engage the human player without the deterministic nature of scripted behaviors. However, there are a number of hard-learned lessons on the modeling and simulation side of the equation that game developers have yet to learn, such as: correlation of heterogeneous systems, scalability of both terrain and numbers of non-player entities, and the bi-directional nature of simulation to game interaction provided by Distributed Interactive Simulation (DIS) and High Level Architecture (HLA).
NASA Astrophysics Data System (ADS)
Dewi, Cut; Nopera Rauzi, Era
2018-05-01
This paper discusses the role of architectural heritage as a tool for resilience in a community after a surpassing disaster. It argues that architectural heritage is not merely a passive victim needing to be rescued; rather, it is also an active agent in providing resilience for survivors. This is evident in the ways it acts as a signifier of collective memories and place identities, and as a place in which to seek refuge in emergency periods and in which central decisions are made during the reconstruction process. This paper explores several theories related to architectural heritage in the post-disaster context and juxtaposes them in a case study of Banda Aceh after the 2004 Tsunami Disaster. The paper is based on six months of anthropological fieldwork in Banda Aceh in 2012, after the Tsunami Disaster. During the fieldwork, 166 respondents were interviewed to gain extensive insight into the ways architecture might play a role in post-disaster reconstruction.
Agility: Agent - Ility Architecture
2002-10-01
existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from...agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).
A Methodology For Developing an Agent Systems Reference Architecture
2010-05-01
agent frameworks, we create an abstraction noting similarities and differences. The differences are documented as points of variation. The result...situated in the physical environment. Addressing the conceptual components of an agent system is beneficial to agent system architects, developers, and
BTFS: The Border Trade Facilitation System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, L.R.
The author demonstrates the Border Trade Facilitation System (BTFS), an agent-based bilingual e-commerce system built to expedite the regulation, control, and execution of commercial trans-border shipments during the delivery phase. The system was built to serve maquila industries at the US/Mexican border. The BTFS uses foundation technology developed here at Sandia Laboratories' Advanced Information Systems Lab (AISL), including a distributed object substrate, a general-purpose agent development framework, dynamically generated agent-human interaction via the World-Wide Web, and a collaborative agent architecture. This technology is also the substrate for the Multi-Agent Simulation Management System (MASMAS) proposed for demonstration at this conference. The BTFS executes authenticated transactions among agents performing open trading over the Internet. With the BTFS in place, one could conduct secure international transactions from any site with an Internet connection and a web browser. The BTFS is currently being evaluated for commercialization.
Basic emotions and adaptation. A computational and evolutionary model.
Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio
2017-01-01
The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on the emotion of fear; therefore the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning, and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives, and to select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations in each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with potentially dangerous environments by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also demonstrate the decisive role of an internal time-perception unit in enabling the robots to achieve the highest performance and survivability across all conditions.
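The generations-of-virtual-robots setup can be sketched as a minimal genetic algorithm over network weights. Everything concrete here (the tiny linear controller, truncation selection, the toy "respond to danger" fitness, and all parameters) is invented for illustration and is not the paper's model.

```python
# Hedged sketch of evolving controller weights with a genetic algorithm.
import random

random.seed(0)
N_WEIGHTS = 4  # toy 2-input, 2-output linear controller (an assumption)

def fitness(genome):
    # Reward agents whose output to a "danger" cue [1, 0] is strongly
    # negative, i.e., an avoidance response (toy fitness, not the paper's).
    danger = [1.0, 0.0]
    out = sum(w * x for w, x in zip(genome[:2], danger))
    return -out

def evolve(pop_size=20, generations=30, sigma=0.3):
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]          # truncation selection
        pop = [[w + random.gauss(0, sigma)    # Gaussian mutation
                for w in random.choice(elite)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best) > 1.0)  # selection has amplified the avoidance response
```

The same loop structure carries over to real recurrent networks; only the genome encoding and the fitness evaluation (a full simulated life cycle) change.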
ERIC Educational Resources Information Center
Nunez Esquer, Gustavo; Sheremetov, Leonid
This paper reports on the results and future research work within the paradigm of Configurable Collaborative Distance Learning, called Espacios Virtuales de Apredizaje (EVA). The paper focuses on: (1) description of the main concepts, including virtual learning spaces for knowledge, collaboration, consulting, and experimentation, a…
Space Situational Awareness using Market Based Agents
NASA Astrophysics Data System (ADS)
Sullivan, C.; Pier, E.; Gregory, S.; Bush, M.
2012-09-01
Space surveillance for the DoD is not limited to the Space Surveillance Network (SSN). Other DoD-owned assets have some existing capabilities for tasking but have no systematic way to work collaboratively with the SSN. These are run by diverse organizations including the Services, other defense and intelligence agencies, and national laboratories. Beyond these organizations, academic and commercial entities have systems that possess SSA capability. Most of these assets have some level of connectivity, security, and potential autonomy. Exploiting them in a mutually beneficial structure could provide a more comprehensive, efficient and cost-effective solution for SSA. The collection of all potential assets, providers and consumers of SSA data comprises a market which is functionally illiquid. The development of a dynamic marketplace for SSA data could give would-be providers the opportunity to sell data to SSA consumers for monetary or incentive-based compensation. A well-conceived market architecture could drive down SSA data costs through increased supply and improve efficiency through increased competition. Oceanit will investigate market and market agent architectures, protocols, standards, and incentives toward producing high-volume/low-cost SSA.
2016-01-01
Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235
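A minimal exploratory agent-based model of the usage-policy experiment might look like the following. The device agents, policy threshold, wattages, and emission factor are fabricated for the sketch and are not the study's values.

```python
# Toy EABM: device agents under a company-defined usage policy.

class DeviceAgent:
    def __init__(self, name, watts):
        self.name, self.watts, self.on = name, watts, True

    def step(self, policy_hour, hour):
        # Company policy: devices are switched off from policy_hour onward.
        self.on = hour < policy_hour
        return self.watts if self.on else 0.0

def simulate(agents, hours=24, policy_hour=18, kg_co2_per_kwh=0.5):
    """Run one simulated day and return the fleet's footprint in kg CO2."""
    total_kwh = 0.0
    for hour in range(hours):
        total_kwh += sum(a.step(policy_hour, hour) for a in agents) / 1000
    return total_kwh * kg_co2_per_kwh

fleet = [DeviceAgent(f"pc-{i}", watts=100) for i in range(10)]
unrestricted = simulate(fleet, policy_hour=24)  # no policy in force
with_policy = simulate(fleet, policy_hour=18)   # off after 18:00
print(unrestricted, with_policy)  # -> 12.0 9.0 (policy cuts footprint 25%)
```

Exploring such a model means sweeping the policy parameters and fleet composition rather than deploying every scenario on a live network, which is the point of the EABM approach.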
iCrowd: agent-based behavior modeling and crowd simulator
NASA Astrophysics Data System (ADS)
Kountouriotis, Vassilios I.; Paterakis, Manolis; Thomopoulos, Stelios C. A.
2016-05-01
Initially designed in the context of the TASS (Total Airport Security System) FP-7 project, the Crowd Simulation platform developed by the Integrated Systems Lab of the Institute of Informatics and Telecommunications at N.C.S.R. Demokritos has evolved into a complete domain-independent agent-based behavior simulator with an emphasis on crowd behavior and building evacuation simulation. Under continuous development, it reflects an effort to implement a modern, multithreaded, data-oriented simulation engine employing the latest state-of-the-art programming technologies and paradigms. It is based on an extensible architecture that separates core services from the individual layers of agent behavior, offering a concrete simulation kernel designed for high performance and stability. Its primary goal is to deliver an abstract platform to facilitate implementation of several Agent-Based Simulation solutions with applicability in several domains of knowledge, such as: (i) Crowd behavior simulation during [in/out]door evacuation. (ii) Non-Player Character AI for game-oriented applications and gamification activities. (iii) Vessel traffic modeling and simulation for maritime security and surveillance applications. (iv) Urban and highway traffic and transportation simulations. (v) Social behavior simulation and modeling.
Technology Review of Multi-Agent Systems and Tools
2005-06-01
over a network, including the Internet. A web services architecture is the logical evolution of object-oriented analysis and design coupled with...the logical evolution of components geared towards the architecture, design, implementation, and deployment of e-business solutions. As in object...querying. The Web Services architecture describes the principles behind the next generation of e-business architectures, presenting a logical evolution
The Use of Software Agents for Autonomous Control of a DC Space Power System
NASA Technical Reports Server (NTRS)
May, Ryan D.; Loparo, Kenneth A.
2014-01-01
In order to enable manned deep-space missions, the spacecraft must be controlled autonomously using on-board algorithms. A control architecture is proposed to enable this autonomous operation for a spacecraft electric power system, and is then implemented using a highly distributed network of software agents. These agents collaborate and compete with each other in order to implement each of the control functions. A subset of this control architecture is tested against a steady-state power system simulation and found to be able to solve a constrained optimization problem with competing objectives using only local information.
The Reactive-Causal Architecture: Introducing an Emotion Model along with Theories of Needs
NASA Astrophysics Data System (ADS)
Aydin, Ali Orhan; Orgun, Mehmet Ali
In the entertainment application area, one of the major aims is to develop believable agents. To achieve this aim, agents should be highly autonomous, situated, flexible, and display affect. The Reactive-Causal Architecture (ReCau) is proposed to simulate these core attributes. In its current form, ReCau cannot explain the effects of emotions on intelligent behaviour. This study aims to further improve the emotion model of ReCau to explain the effects of emotions on intelligent behaviour. This improvement allows ReCau to display emotion, supporting the development of believable agents.
Conflict resolution in multi-agent hybrid systems
DOT National Transportation Integrated Search
1996-12-01
A conflict resolution architecture for multi-agent hybrid systems with emphasis on Air Traffic Management Systems (ATMS) is presented. In such systems, conflicts arise in the form of potential collisions which are resolved locally by inter-agent coordination.
Home Energy Management System - VOLTTRON Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zandi, Helia
In most Home Energy Management Systems (HEMS) available in the market, different devices running different communication protocols cannot interact with each other and exchange information. As a result of this integration, information about devices running different communication protocols becomes accessible to other agents and devices running on the VOLTTRON platform. The integration process can be used by any HEMS available in the market, regardless of the programming language it uses. If the existing HEMS provides an Application Programming Interface (API) based on the RESTful architecture, that API can be used for integration. Our candidate HEMS in this project is home-assistant (Hass). An agent is implemented which can communicate with the Hass API and receive information about the devices loaded on the API. The agent publishes the information it receives on the VOLTTRON message bus so other agents can have access to this information. On the other side, for each type of device an agent is implemented, such as a Climate Agent, Lock Agent, Switch Agent, Light Agent, etc. Each of these agents subscribes to the messages published on the message bus about its associated devices. These agents can also change the status of the devices by sending appropriate service calls to the API. Other agents and services on the platform can also access this information and coordinate their decision-making process based on it.
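The publish/subscribe pattern described above can be illustrated with a toy in-process message bus. This sketch does not use VOLTTRON's actual API; the bus class, topic names, and agent classes are invented for demonstration.

```python
# Toy pub/sub bus: a bridge agent republishes HEMS device state;
# per-device-type agents subscribe to the topics they care about.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

class HassBridgeAgent:
    """Polls the HEMS API (not shown) and republishes state on the bus."""
    def __init__(self, bus):
        self.bus = bus

    def report(self, device_type, state):
        self.bus.publish(f"devices/{device_type}", state)

class SwitchAgent:
    """Tracks switch updates; could also call back into the HEMS API."""
    def __init__(self, bus):
        self.last_state = None
        bus.subscribe("devices/switch", self.on_update)

    def on_update(self, state):
        self.last_state = state

bus = MessageBus()
bridge = HassBridgeAgent(bus)
switch_agent = SwitchAgent(bus)
bridge.report("switch", {"entity": "switch.heater", "on": True})
print(switch_agent.last_state)  # -> {'entity': 'switch.heater', 'on': True}
```

The design keeps the bridge agent as the only component that speaks the HEMS protocol; every other agent sees a uniform topic namespace on the bus.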
Distributed Planning in a Mixed-Initiative Environment
2008-06-01
[Figure 3: Distributed blackboard, with control and knowledge sources linked to a remote blackboard, remote knowledge sources, and remote data via Java.] ...an interface agent or planning agent, and the second type is a critic agent. Agents in the DEEP architecture extend and use the Java Agent...chosen because it is fully implemented in Java and supports these requirements. 2.3.3 Interface Agents: Interface agents are the interfaces through
Behavior believability in virtual worlds: agents acting when they need to.
Avradinis, Nikos; Panayiotopoulos, Themis; Anastassakis, George
2013-12-01
Believability has been a perennial goal for the intelligent virtual agent community. One important aspect of believability largely consists in demonstrating autonomous behavior, consistent with the agent's personality and motivational state, as well as the world conditions. Autonomy, on behalf of the agent, implies the existence of an internal structure and mechanism that allows the agent to have its own needs and interests, based on which the agent will dynamically select and generate goals that will in turn lead to self-determined behavior. Intrinsic motivation allows the agent to function and demonstrate behavior, even when no external stimulus is present, due to the constant change of its internal emotional and physiological state. The concept of motivation has already been investigated by research works on intelligent agents, trying to achieve autonomy. The current work presents an architecture and model to represent and manage internal driving factors in intelligent virtual agents, using the concept of motivations. Based on Maslow and Alderfer's bio-psychological needs theories, we present a motivational approach to represent human needs and produce emergent behavior through motivation synthesis. Particular attention is given to basic, physiological level needs, which are the basis of behavior and can produce tendency to action even when there is no other interaction with the environment.
Enhanced risk management by an emerging multi-agent architecture
NASA Astrophysics Data System (ADS)
Lin, Sin-Jin; Hsu, Ming-Fu
2014-07-01
Classification in imbalanced datasets has attracted much attention from researchers in the field of machine learning. Most existing techniques tend not to perform well on minority class instances when the dataset is highly skewed because they focus on minimising the forecasting error without considering the relative distribution of each class. This investigation proposes an emerging multi-agent architecture, grounded on cooperative learning, to solve the class-imbalanced classification problem. Additionally, this study deals further with the obscure nature of the multi-agent architecture and expresses comprehensive rules for auditors. The results from this study indicate that the presented model performs satisfactorily in risk management and is able to tackle a highly class-imbalanced dataset comparatively well. Furthermore, the knowledge-visualisation process, supported by real examples, can assist both internal and external auditors who must allocate limited detecting resources; they can take the rules as roadmaps to modify the auditing programme.
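One common remedy for the skew the abstract describes is rebalancing the training set before any learner is fitted; the cooperative multi-agent scheme itself is not reproduced here. The oversampling helper and the toy fraud data below are fabricated for illustration.

```python
# Naive minority-class oversampling: duplicate minority rows at random
# until every class is the same size as the largest one.
import random

random.seed(1)

def oversample_minority(rows, label_index=-1):
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_index], []).append(row)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

data = [(0.1, "legit")] * 95 + [(0.9, "fraud")] * 5  # 95:5 imbalance
balanced = oversample_minority(data)
counts = {label: sum(1 for _, l in balanced if l == label)
          for label in ("legit", "fraud")}
print(counts)  # -> {'legit': 95, 'fraud': 95}
```

Oversampling trades variance for balance; cost-sensitive losses or ensemble schemes such as the one above address the same distributional problem without duplicating rows.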
Space/ground systems as cooperating agents
NASA Technical Reports Server (NTRS)
Grant, T. J.
1994-01-01
Within NASA and the European Space Agency (ESA) it is agreed that autonomy is an important goal for the design of future spacecraft and that this requires on-board artificial intelligence. NASA emphasizes deep space and planetary rover missions, while ESA considers on-board autonomy as an enabling technology for missions that must cope with imperfect communications. ESA's attention is on the space/ground system. A major issue is the optimal distribution of intelligent functions within the space/ground system. This paper describes the multi-agent architecture for space/ground systems (MAASGS) which would enable this issue to be investigated. A MAASGS agent may model a complete spacecraft, a spacecraft subsystem or payload, a ground segment, a spacecraft control system, a human operator, or an environment. The MAASGS architecture has evolved through a series of prototypes. The paper recommends that the MAASGS architecture should be implemented in the operational Dutch Utilization Center.
ERIC Educational Resources Information Center
Sadiig, I. Ahmed M. J.
2005-01-01
The traditional learning paradigm involving face-to-face interaction with students is shifting to highly data-intensive electronic learning with the advances in Information and Communication Technology. An important component of the e-learning process is the delivery of the learning contents to their intended audience over a network. A distributed…
Introduction to Architectures: HSCB Information - What It Is and How It Fits (or Doesn’t Fit)
2010-10-01
Simulation Interoperability Workshop, 01E-SIW-080. [15] Barry G. Silverman, Gnana Gharathy, Kevin O'Brien, Jason Cornwell, "Human Behavior Models for Agents…" Workshop, 10F-SIW-023, September 2010. [17] Christiansen, John H., "A flexible object-based software framework for modelling complex systems with…"
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the semantic Web, Web services, agents and grid computing, have been applied in different e-Tourism applications, but there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally describes an implementation of the framework.
NASA Astrophysics Data System (ADS)
Nejad, Hossein Tehrani Nik; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Process planning and scheduling are important manufacturing planning activities that deal with resource utilization and the time span of manufacturing operations. Process plans and schedules generated in the planning phase must often be modified in the execution phase because of disturbances in the manufacturing system. This paper deals with a multi-agent architecture for an integrated and dynamic process-planning and scheduling system for multiple jobs. A negotiation protocol is discussed for generating the process plans and the schedules of the manufacturing resources and the individual jobs dynamically and incrementally, based on alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in a previous paper, and suitable process plans and schedules are searched for and generated to cope with both the dynamic status and the disturbances of the manufacturing system. We combine heuristic search over the process plan networks with the negotiation protocols in order to generate suitable process plans and schedules in a dynamic manufacturing environment. Simulation software has been developed to carry out case studies aimed at verifying the performance of the proposed multi-agent architecture.
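The paper's negotiation protocol is not reproduced in the abstract; as a rough, hypothetical illustration of how agents can build a schedule incrementally through announce, bid and award rounds (a generic contract-net pattern, not the authors' protocol; all names and numbers are invented), consider:

```python
class MachineAgent:
    """A resource agent that bids its earliest completion time for an operation."""
    def __init__(self, name, speed):
        self.name, self.speed, self.busy_until = name, speed, 0.0

    def bid(self, work):
        """Propose a completion time for the announced operation."""
        return self.busy_until + work / self.speed

    def award(self, work):
        """Commit to the operation and return its completion time."""
        self.busy_until += work / self.speed
        return self.busy_until

def schedule(jobs, machines):
    """One contract-net round per operation: announce, collect bids, award best."""
    plan = []
    for job, work in jobs:
        winner = min(machines, key=lambda m: m.bid(work))
        plan.append((job, winner.name, winner.award(work)))
    return plan

machines = [MachineAgent("M1", 1.0), MachineAgent("M2", 2.0)]
plan = schedule([("J1", 4), ("J2", 4), ("J3", 2)], machines)
print(plan)
```

Tie-breaking here simply favours the first machine in the list; a real protocol would negotiate over richer bids (cost, quality, due dates) and re-negotiate after disturbances.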
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors, independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Major design challenges for the software agents were to achieve the maximum possible degree of autonomy and to create processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with respect to the final ATLAS configuration, and minimal use of hardware resources were also of critical importance. Besides details of the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
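The actual Process Manager is built from C++ agent objects communicating over CORBA; purely as an illustrative analogue of the basic job-control contract it describes (start, stop, monitor, independent of the underlying OS), a minimal sketch looks like this (class and method names are ours, not ATLAS's):

```python
import subprocess, sys

class ProcessAgent:
    """Minimal OS-independent job control: start, query status, stop.
    An illustrative analogue of the Process Manager's agents, not its code."""
    def __init__(self):
        self.procs = {}

    def start(self, name, argv):
        """Launch a component and track it under a logical name."""
        self.procs[name] = subprocess.Popen(argv)

    def status(self, name):
        """Report whether the component is still running."""
        p = self.procs[name]
        return "running" if p.poll() is None else f"exited({p.returncode})"

    def stop(self, name):
        """Terminate the component and wait for it to exit."""
        p = self.procs[name]
        if p.poll() is None:
            p.terminate()
        p.wait()

agent = ProcessAgent()
agent.start("job1", [sys.executable, "-c", "import time; time.sleep(60)"])
print(agent.status("job1"))   # running
agent.stop("job1")
print(agent.status("job1"))
```

A production system would add the dynamic-condition awareness the abstract mentions, e.g. restart policies and health callbacks, rather than this passive polling.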
An integrative architecture for a sensor-supported trust management system.
Trček, Denis
2012-01-01
Trust plays a key role not only in e-worlds and emerging pervasive computing environments but also, for millennia already, in human societies. Trust management solutions, which have been around for some fifteen years, were primarily developed for the above-mentioned cyber environments and are typically focused on artificial agents, sensors, etc. This paper, however, presents extensions of a new methodology, together with an architecture for trust management support, that is focused on humans and human-like agents. In this methodology and architecture, sensors play a crucial role. The architecture constitutes an already deployable tool for multi- and interdisciplinary research in various areas where humans are involved. It provides new ways to gain insight into the dynamics and evolution of such structures, not only in pervasive computing environments but also in other important areas such as management and decision-making support.
Semiotics and agents for integrating and navigating through multimedia representations of concepts
NASA Astrophysics Data System (ADS)
Joyce, Dan W.; Lewis, Paul H.; Tansley, Robert H.; Dobie, Mark R.; Hall, Wendy
1999-12-01
The purpose of this paper is two-fold. We begin by exploring the emerging trend of viewing multimedia information in terms of low-level and high-level components, the former being feature-based and the latter the 'semantics' intrinsic to what the media object portrays. Traditionally, this has been approached through analogies with generative linguistics. Recently, a new perspective based on the semiotic tradition has been alluded to in several papers, and we believe this to be a more appropriate approach. Building on it, we propose an approach that uses an associative data structure expressing authored information, together with intelligent agents acting autonomously over this structure. We then show how neural networks can be used to implement such agents. The agents act as 'vehicles' for bridging the gap between multimedia semantics and concrete expressions of high-level knowledge, but we suggest that traditional neural network techniques for classification are not architecturally adequate.
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts, outlined in this paper. First, an architecture called DREAMS is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure called IMAGE is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that agents be composed of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating the knowledge used to facilitate decision-making.
A Mobile Multi-Agent Information System for Ubiquitous Fetal Monitoring
Su, Chuan-Jun; Chu, Ta-Wei
2014-01-01
Electronic fetal monitoring (EFM) systems integrate many previously separate clinical activities related to fetal monitoring. Promoting the use of ubiquitous fetal monitoring services with real-time status assessments requires a robust information platform equipped with an automatic diagnosis engine. This paper presents the design and development of a mobile multi-agent platform-based open information system (IMAIS) with an automated diagnosis engine to support intensive and distributed ubiquitous fetal monitoring. The automatic diagnosis engine is capable of analyzing data in both traditional paper-based and digital formats. Issues related to interoperability, scalability, and openness in heterogeneous e-health environments are addressed through the adoption of a FIPA2000-standard-compliant agent development platform, the Java Agent DEvelopment Framework (JADE). Integrating the IMAIS with lightweight, portable fetal monitor devices allows for continuous long-term monitoring without interfering with a patient's everyday activities and without restricting her mobility. The system architecture can also be applied to other monitoring scenarios such as elder care and vital-sign monitoring. PMID:24452256
Alor-Hernández, Giner; Sánchez-Cervantes, José Luis; Juárez-Martínez, Ulises; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Aguilar-Laserre, Alberto
2012-03-01
Emergency healthcare is one of the emerging application domains for information services and requires highly multimodal information services. The time consumed by the pre-hospital emergency process is critical; minimizing the time required to provide primary care and consultation to patients is therefore one of the crucial factors in improving healthcare delivery in emergency situations. In this sense, dynamically locating medical entities is a complex, time-consuming process, and that time can be critical when a person requires medical attention. This work presents a multimodal location-based system, called ITOHealth, for locating and assigning medical entities. ITOHealth provides a multimodal, middleware-oriented integrated architecture that uses a service-oriented architecture to deliver information about medical entities to mobile devices and web browsers through enriched interfaces with multimodality support. ITOHealth's multimodality is based on the use of Microsoft Agent Characters, the integration of natural-language voice with the characters, and multi-language and multi-character support, providing an advantage for users with visual impairments.
New generation of magnetic and luminescent nanoparticles for in vivo real-time imaging
Lacroix, Lise-Marie; Delpech, Fabien; Nayral, Céline; Lachaize, Sébastien; Chaudret, Bruno
2013-01-01
A new generation of optimized contrast agents is emerging, based on metallic nanoparticles (NPs) and semiconductor nanocrystals for, respectively, magnetic resonance imaging (MRI) and near-infrared (NIR) fluorescence imaging techniques. Compared with established contrast agents, such as iron oxide NPs or organic dyes, these NPs benefit from several advantages: their magnetic and optical properties can be tuned through size, shape and composition engineering; their efficiency can exceed that of clinically used contrast agents by several orders of magnitude; their surface can be modified to incorporate specific targeting agents and antifouling polymers that increase blood circulation time and tumour recognition; and they can be integrated into complex architectures to yield multi-modal imaging agents. In this review, we report the materials of choice, based on an understanding of the basic physics of NIR and MRI techniques, and their corresponding syntheses as NPs. Surface engineering, water transfer and specific targeting are highlighted prior to their first use for in vivo real-time imaging. Highly efficient NPs that are safer and target-specific are likely to enter clinical application in the near future. PMID:24427542
NASA Astrophysics Data System (ADS)
Kock, B. E.
2008-12-01
The increased availability and understanding of agent-based modeling technology and techniques provide a unique opportunity for water resources modelers, allowing them to go beyond traditional behavioral approaches from neoclassical economics and add rich cognition to social-hydrological models. Agent-based models provide an individual focus and allow the easier and more realistic incorporation of learning, memory and other mechanisms for increased cognitive sophistication. We are in an age of global change impacting complex water resources systems, and social responses are increasingly recognized as fundamentally adaptive and emergent. In consideration of this, water resources models and modelers need to address social dynamics in a manner beyond the capabilities of neoclassical economic theory and practice. However, going beyond the unitary curve requires unique levels of engagement with stakeholders, not only to elicit the richer knowledge necessary for structuring and parameterizing agent-based models, but also to make sure such models are appropriately used. With the aim of encouraging epistemological and methodological convergence in the agent-based modeling of water resources, we have developed a water resources-specific cognitive model and an associated collaborative modeling process. Our cognitive model emphasizes efficiency in architecture and operation and the capacity to adapt to different application contexts. We describe a current application of this cognitive model and modeling process in the Arkansas Basin of Colorado. In particular, we highlight the potential benefits of, and challenges to, using more sophisticated cognitive models in agent-based water resources models.
Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J
2011-01-01
The control architecture is one of the most important parts of agricultural robotics and other robotic systems, and its importance increases when the system involves a group of heterogeneous robots that must cooperate to achieve a global goal. This paper introduces a new control architecture for groups of robots in charge of maintenance tasks in agricultural environments. Important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture, which also allows coordination and cooperation among the different elements of the system. These concepts are realised by integrating the network-oriented device server Player, the Java Agent DEvelopment Framework (JADE) and the High Level Architecture (HLA). HLA can be considered the most important part, because it not only provides data distribution and implicit communication among the parts of the system but also allows simulated and real entities to operate simultaneously, thus enabling the use of hybrid systems in the development of applications.
Agents Control in Intelligent Learning Systems: The Case of Reactive Characteristics
ERIC Educational Resources Information Center
Laureano-Cruces, Ana Lilia; Ramirez-Rodriguez, Javier; de Arriaga, Fernando; Escarela-Perez, Rafael
2006-01-01
Intelligent learning systems (ILSs) have evolved in the last few years basically because of influences received from multi-agent architectures (MAs). Conflict resolution among agents has been a very important problem for multi-agent systems, with specific features in the case of ILSs. The literature shows that ILSs with cognitive or pedagogical…
A Comparison of Computational Cognitive Models: Agent-Based Systems Versus Rule-Based Architectures
2003-03-01
Java™ How To Program, Prentice Hall, 1999. Friedman-Hill, E., Jess, the Expert System Shell for the Java Platform, Sandia National Laboratories, 2001. … The transition from descriptive NDM theory to a computational model raises several questions: Who is an experienced decision maker? How do you model the progression from novice to experienced decision maker? How does the model account for previous experiences? Are there situations where…
Multi-Agent Design and Implementation for an Online Peer Help System
ERIC Educational Resources Information Center
Meng, Anbo
2014-01-01
With the rapid advance of e-learning, the online peer help is playing increasingly important role. This paper explores the application of MAS to an online peer help system (MAPS). In the design phase, the architecture of MAPS is proposed, which consists of a set of agents including the personal agent, the course agent, the diagnosis agent, the DF…
Pricing the Services in Dynamic Environment: Agent Pricing Model
NASA Astrophysics Data System (ADS)
Žagar, Drago; Rupčić, Slavko; Rimac-Drlje, Snježana
New Internet applications and services, as well as new user demands, open many issues concerning the dynamic management of quality of service (QoS) and of the price paid for the received service. The main goals of Internet service providers are to maximize profit and maintain the negotiated quality of service. From the user's perspective, the main goal is to maximize the ratio of received QoS to service cost. Achieving these objectives can become very complex, however, because Internet service users may become highly dynamic and proactive during a session. This implies changes in the user profile or in the network provider's profile, caused by a high level of user mobility or variable user demands. This paper proposes a new agent-based pricing architecture for serving highly dynamic customers in the context of a dynamic user/network environment. The proposed architecture comprises the main aspects and basic parameters that enable an objective and transparent assessment of the costs of the service Internet users receive while they dynamically change their QoS demands and cost profiles.
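The paper's pricing model itself is not given in the abstract, but the user-side objective it states, maximizing the ratio of received QoS to cost, can be sketched as a toy selection rule (the service levels, scores and prices below are invented for illustration):

```python
def best_offer(offers, budget):
    """Pick the offer maximizing the QoS-to-cost ratio within the user's budget.
    `offers` maps service level -> (qos_score, price); illustrative only."""
    feasible = {k: (q, p) for k, (q, p) in offers.items() if p <= budget}
    if not feasible:
        return None
    return max(feasible, key=lambda k: feasible[k][0] / feasible[k][1])

offers = {"bronze": (50, 5.0), "silver": (80, 10.0), "gold": (95, 20.0)}
print(best_offer(offers, budget=12.0))  # bronze: ratio 10.0 beats silver's 8.0
```

An agent-based version would re-run this choice during the session as the user's demands and the provider's prices change, which is exactly the dynamism the architecture is meant to support.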
Bone vascularization and bone micro-architecture characterizations according to the μCT resolution
NASA Astrophysics Data System (ADS)
Crauste, E.; Autrusseau, F.; Guédon, Jp.; Pilet, P.; Amouriq, Y.; Weiss, P.; Giumelli, B.
2015-03-01
Trabecular bone and its micro-architecture are of prime importance for health. Changes in bone micro-architecture are linked to different pathological situations such as osteoporosis and are now beginning to be understood. In a previous paper [12], we started to investigate the relationships between bone and vessels and proposed characterization indices for the vessels derived from those used for bone. Our main objective in this paper is to qualify the classical values used for bone, as well as those we proposed for vessels, under different acquisition parameters and for several thresholding methods used to separate bone, vessels and background. The study is based on perfusion of the vessels with a contrast agent (barium sulfate mixed with gelatin) before euthanasia of the rats. Femurs, tibias and mandibles were removed after death and imaged by microCT (Skyscan 1272, Bruker, Belgium) at resolutions ranging from 18 to 3 μm. The resulting images were analyzed with several software packages (NRecon Reconstruction, CtAn, and CtVox from Bruker) in order to calculate bone and vessel micro-architecture parameters (density of bone/blood within the volume) and to determine whether the results for both bone and vascular micro-architecture are constant across the chosen pixel resolutions. They are clearly not: we found a very different characterization for both bone and vessels at the 3 μm acquisition. Tibia and mandible bones were also used to show results that can be assessed visually. The largest portions of the vascular tree are orthogonal to the obtained bone slices; the contrast agent therefore appears as cylinders of various sizes.
Homeostatic Agent for General Environment
NASA Astrophysics Data System (ADS)
Yoshida, Naoto
2018-03-01
One of the essential aspects of biological agents is dynamic stability. This aspect, called homeostasis, has been widely discussed in ethology, in neuroscience and during the early stages of artificial intelligence. Ashby's homeostats are general-purpose learning machines for stabilizing the essential variables of an agent in the face of general environments. Despite their generality, however, the original homeostats could not scale because they searched their parameters randomly. In this paper we first re-define the objective of homeostats as the maximization of a multi-step survival probability, from the viewpoint of sequential decision theory and probability theory. We then show that this optimization problem can be treated by reinforcement learning algorithms with special agent architectures and theoretically derived intrinsic reward functions. Finally, we empirically demonstrate that agents with our architecture automatically learn to survive in a given environment, including environments with visual stimuli. Our survival agents learn to eat food, avoid poison and stabilize essential variables through a single, theoretically derived intrinsic reward formulation.
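The paper derives its intrinsic reward from survival probability; as a loose sketch of that idea only (the sigmoid survival model, the variable, and every constant below are our assumptions, not the paper's derivation), the reward can be taken as the log-probability that an essential variable stays inside its viable range:

```python
import math

def intrinsic_reward(x, low=36.0, high=38.0, sharpness=2.0):
    """Sketch of a survival-based intrinsic reward: log-probability of survival,
    modelled here as a sigmoid of how far x overshoots the viable range
    (a hypothetical model, not the paper's theoretically derived formulation)."""
    mid, half = (low + high) / 2, (high - low) / 2
    overshoot = max(0.0, abs(x - mid) - half)       # 0 while inside the range
    p_survive = 1.0 / (1.0 + math.exp(sharpness * overshoot - 3.0))
    return math.log(p_survive)

# Inside the viable range the reward stays near 0 (p_survive ~ 1);
# far outside it the log-probability plummets.
print(intrinsic_reward(37.0), intrinsic_reward(45.0))
```

A reinforcement-learning agent maximizing the discounted sum of such rewards is, in effect, maximizing its multi-step survival probability, which is the reformulation the abstract describes.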
ERIC Educational Resources Information Center
Beuls, Katrien
2013-01-01
Construction Grammar (CxG) is a well-established linguistic theory that takes the notion of a construction as the basic unit of language. Yet, because the potential of this theory for language teaching or SLA has largely remained ignored, this paper demonstrates the benefits of adopting the CxG approach for modelling a student's linguistic…
Constructing Virtual Training Demonstrations
2008-12-01
virtual environments have been shown to be effective for training, and distributed game-based architectures contribute an added benefit of wide… investigation of how a demonstration authoring toolset can be constructed from existing virtual training environments using 3-D multiplayer gaming… intelligent agents project to create AI middleware for simulations and videogames. The result was SimBionic®, which enables users to graphically author…
Collected notes from the Benchmarks and Metrics Workshop
NASA Technical Reports Server (NTRS)
Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.
1991-01-01
In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.
NASA Astrophysics Data System (ADS)
Torre, Gerardo De La; Yucelen, Tansel
2018-03-01
Control algorithms for networked multiagent systems are generally computed distributively, without a centralised entity monitoring the activity of the agents; unforeseen adverse conditions such as uncertainties, attacks on the communication network, or failures of agent-wise components can therefore easily result in system instability and prohibit the accomplishment of system-level objectives. In this paper, we study resilient coordination of networked multiagent systems in the presence of misbehaving agents, i.e. agents subject to exogenous disturbances that represent a class of adverse conditions. In particular, a distributed adaptive control architecture is presented for directed and time-varying graph topologies to recover a desired networked multiagent system behaviour. In contrast to the existing literature, which makes specific assumptions on the graph topology and/or the fraction of misbehaving agents, we show that the considered class of adverse conditions can be mitigated by the proposed adaptive control approach, which utilises a local state emulator, even if all agents are misbehaving. Illustrative numerical examples are provided to demonstrate the theoretical findings.
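The emulator idea can be caricatured in discrete time: each agent runs a nominal copy of its own update, and adapts a disturbance estimate from the gap between its true state and the emulator's prediction. The sketch below (complete graph, scalar states, integral-style adaptation law) is our simplification, not the paper's control law:

```python
def run(x, d, steps=200, eta=0.3, gamma=0.5):
    """Disturbance-corrected consensus: each agent compares its state with a
    local emulator's nominal prediction and adapts a disturbance estimate.
    A caricature of the paper's adaptive architecture, not its actual law."""
    n = len(x)
    d_hat = [0.0] * n
    for _ in range(steps):
        coupling = [sum(x[j] - x[i] for j in range(n) if j != i) for i in range(n)]
        nominal = [x[i] + eta * coupling[i] for i in range(n)]   # state emulator
        x = [nominal[i] + d[i] - d_hat[i] for i in range(n)]     # disturbed update
        # innovation = deviation from the emulator's prediction = d - d_hat
        d_hat = [d_hat[i] + gamma * (x[i] - nominal[i]) for i in range(n)]
    return x

# Agent 0 is "misbehaving" (constant exogenous disturbance of 0.4).
xs = run([0.0, 1.0, 2.0], d=[0.4, 0.0, 0.0])
print(xs)
```

With gamma = 0 the estimate never adapts and agent 0's disturbance keeps perturbing the network; with adaptation, the estimation error halves each step and the agents still reach agreement.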
Basic emotions and adaptation. A computational and evolutionary model
2017-01-01
The core principles of the evolutionary theories of emotions hold that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many studies have used autonomous artificial agents to simulate emotional responses and the ways these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations and investigate the spontaneous emergence of basic emotional behaviors under different experimental conditions. In particular, we focus on the emotion of fear: the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning, and the agents must learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations over each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolve the ability to cope with a potentially dangerous environment by collecting information about it and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is essential for the robots to achieve the highest performance and survivability across all conditions. PMID:29107988
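The evolutionary machinery the study relies on can be reduced to a few lines. The sketch below evolves a two-weight "policy" that should approach a stimulus when the context is safe and avoid it under threat; the encoding, fitness function and hyperparameters are entirely our invention, standing in for the paper's neural networks and survival task:

```python
import random

def fitness(genome, threat):
    """Toy stand-in for the survival task: approach when safe (threat=0),
    avoid when dangerous (threat=1). Returns 1 for the correct action."""
    drive = genome[0] * (1 - threat) - genome[1] * threat
    action = 1 if drive > 0 else 0          # 1 = approach, 0 = avoid
    return 1 if action == (1 - threat) else 0

def evolve(pop_size=20, gens=30):
    """Elitist genetic algorithm with Gaussian mutation; returns best fitness
    (out of 2: one point per context handled correctly)."""
    pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: -(fitness(g, 0) + fitness(g, 1)))
        elite = pop[:pop_size // 2]                       # keep the best half
        pop = elite + [[w + random.gauss(0, 0.1) for w in g]
                       for g in random.choices(elite, k=pop_size - len(elite))]
    return max(fitness(g, 0) + fitness(g, 1) for g in pop)

random.seed(1)
best = evolve()
print(best)
```

The paper's agents face the harder version of this problem: the safe/threat label is never given directly and must be inferred from past "sensations", which is where the internal time-perception unit proves decisive.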
Hybrid Multiagent System for Automatic Object Learning Classification
NASA Astrophysics Data System (ADS)
Gil, Ana; de La Prieta, Fernando; López, Vivian F.
The rapid evolution within the context of e-learning is closely linked to international efforts on the standardization of learning object metadata, which provides learners in a web-based educational system with ubiquitous access to multiple distributed repositories. This article presents a hybrid agent-based architecture that enables the recovery of learning objects tagged in Learning Object Metadata (LOM) and provides individualized help with selecting learning materials to make the most suitable choice among many alternatives.
A Workshop on Analysis and Evaluation of Enterprise Architectures
2010-11-01
This report was prepared for the SEI Administrative Agent, ESC/XPK, 5 Eglin Street, Hanscom AFB, MA. Table-of-contents fragments: … Enterprise Business; 2.3 Bounding Enterprise Architecture in Practice; 3 Enterprise Architecture Design and Documentation Practices; 3.1 Typical… Methods; 4.5 Federation and Acquisition; 5 Summary; 5.1 Workshop Findings; 5.2 Future Work; Appendix A – Survey of Enterprise Architecture
NASA Astrophysics Data System (ADS)
Hanford, Scott D.
Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
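The topological mapping and return-path behavior described above can be illustrated with a minimal sketch (not the CRS implementation; node names and the graph API are assumptions): intersections become graph nodes, corridors become edges, and a breadth-first search over the learned graph recovers the shortest route back to the start.

```python
from collections import deque

class TopologicalMap:
    """Minimal topological map: nodes are intersections, edges are corridors."""
    def __init__(self):
        self.edges = {}  # node -> set of neighbouring nodes

    def add_corridor(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def route(self, start, goal):
        """Breadth-first search: shortest hop-count path from start to goal."""
        frontier, parent = deque([start]), {start: None}
        while frontier:
            node = frontier.popleft()
            if node == goal:
                path = []
                while node is not None:   # walk parents back to the start
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in self.edges.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    frontier.append(nxt)
        return None  # goal unreachable from start

# Hypothetical map learned while searching a building
m = TopologicalMap()
m.add_corridor("start", "T-junction")
m.add_corridor("T-junction", "4-way")
m.add_corridor("4-way", "object")
m.add_corridor("T-junction", "dead-end")
```

A Soar agent would build such a map incrementally from detected intersections; here the graph is given up front for brevity.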
The Action Execution Process Implemented in Different Cognitive Architectures: A Review
NASA Astrophysics Data System (ADS)
Dong, Daqi; Franklin, Stan
2014-12-01
An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.
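As a rough illustration of the cycle just described, the sketch below separates a high-level decision step from low-level motor-command generation (all names and the goal format are assumptions, not any reviewed architecture's API):

```python
# Minimal sense-decide-act cycle; module names are generic assumptions, not
# the API of any particular cognitive architecture.
def decide(percepts, goals):
    """High-level mental process: pick the first goal whose precondition holds."""
    for goal, precondition, action in goals:
        if precondition(percepts):
            return action
    return "wait"

def generate_motor_command(intention, percepts):
    """Low-level execution: ground the symbolic intention in current percepts."""
    speed = 0.0 if percepts.get("obstacle") else 1.0
    return {"action": intention, "speed": speed}

def run_cycle(percepts, goals):
    """One pass from high-level intention to executable motor command."""
    intention = decide(percepts, goals)
    return generate_motor_command(intention, percepts)

# Hypothetical goal: approach the door when it is visible.
goals = [("reach_door", lambda p: p.get("door_visible"), "approach_door")]
```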
Biomorphic Multi-Agent Architecture for Persistent Computing
NASA Technical Reports Server (NTRS)
Lodding, Kenneth N.; Brewster, Paul
2009-01-01
A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more components of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.
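A toy rendering of the software-genome idea, under an invented rank-based role rule chosen purely for illustration: every cell carries the full genome, and when a cell fails the survivors re-derive their roles locally, with no central controller.

```python
# Full system description carried by every cell (illustrative genome).
GENOME = {"sense": "read sensor", "relay": "forward packets", "compute": "process data"}

class Cell:
    """Each cell carries the full genome; its active role is chosen locally."""
    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.genome = dict(GENOME)  # full system description in every cell
        self.alive = True

    def role(self, live_ids):
        # No global controller: role derives from rank among surviving peers.
        rank = sorted(live_ids).index(self.cell_id)
        return sorted(self.genome)[rank % len(self.genome)]

cells = [Cell(i) for i in range(5)]
live = [c.cell_id for c in cells if c.alive]
roles_before = {c.cell_id: c.role(live) for c in cells}

cells[0].alive = False  # a cell fails; the "organism" adapts
live = [c.cell_id for c in cells if c.alive]
roles_after = {c.cell_id: c.role(live) for c in cells if c.alive}
```

Because every surviving cell applies the same local rule to the same peer list, the role assignment stays consistent without shared memory or a central brain.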
BGen: A UML Behavior Network Generator Tool
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Reder, Leonard J.; Balian, Harry
2010-01-01
BGen software was designed for autogeneration of code based on a graphical representation of a behavior network used for controlling automatic vehicles. A common format used for describing a behavior network, such as that used in the JPL-developed behavior-based control system, CARACaS ["Control Architecture for Robotic Agent Command and Sensing" (NPO-43635), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 40], includes a graph with sensory inputs flowing through the behaviors in order to generate the signals for the actuators that drive and steer the vehicle. A computer program to translate Unified Modeling Language (UML) Freeform Implementation Diagrams into a legacy C implementation of a behavior network has been developed in order to simplify the development of C code for behavior-based control systems. UML is a popular standard developed by the Object Management Group (OMG) to model software architectures graphically. The C implementation of a behavior network functions as a decision tree.
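A behavior network of this kind can be sketched as a prioritized dataflow graph. The sketch below (behavior names and the first-behavior-wins rule are illustrative assumptions, not CARACaS or BGen output) shows sensory inputs flowing through behaviors to produce an actuator command, evaluated like a decision tree.

```python
class Behavior:
    """One node of a behavior network: maps named sensor inputs to a command."""
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

    def evaluate(self, signals):
        return self.fn(*(signals[i] for i in self.inputs))

# Hypothetical two-behavior network: avoid obstacles, otherwise seek the goal heading.
network = [
    Behavior("avoid", lambda obstacle: -1.0 if obstacle else None, ["obstacle"]),
    Behavior("seek", lambda heading: heading, ["goal_heading"]),
]

def steer(signals):
    """Decision-tree evaluation: the first behavior producing a command wins."""
    for behavior in network:
        command = behavior.evaluate(signals)
        if command is not None:
            return behavior.name, command
    return "idle", 0.0
```

A code generator in the spirit of BGen would emit an equivalent chain of C conditionals from the diagrammed graph.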
Multi-Agent Flight Simulation with Robust Situation Generation
NASA Technical Reports Server (NTRS)
Johnson, Eric N.; Hansman, R. John, Jr.
1994-01-01
A robust situation generation architecture has been developed that generates multi-agent situations for human subjects. An implementation of this architecture was developed to support flight simulation tests of air transport cockpit systems. This system maneuvers pseudo-aircraft relative to the human subject's aircraft, generating specific situations for the subject to respond to. These pseudo-aircraft maneuver within reasonable performance constraints, interact in a realistic manner, and make pre-recorded voice radio communications. Use of this system minimizes the need for human experimenters to control the pseudo-agents and provides consistent interactions between the subject and the pseudo-agents. The achieved robustness of this system to typical variations in the subject's flight path was explored. It was found to successfully generate specific situations within the performance limitations of the subject-aircraft, pseudo-aircraft, and the script used.
2008-10-01
Agents in the DEEP architecture extend and use the Java Agent DEvelopment Framework (JADE). DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java, and supports these requirements.
Believable Social and Emotional Agents.
1996-05-01
While building tools to support the creation of believable emotional agents, I had to make a number of important design decisions. Before describing... processing systems, it is difficult to give an artist direct control over the emotional aspects of the character. By making these decisions explicit, I hope... Woody on “Cheers”). Lesson: We don’t want agent architectures that enforce rationality and
Rheology of Hyperbranched Poly(triglyceride)-Based Thermoplastic Elastomers via RAFT polymerization
NASA Astrophysics Data System (ADS)
Yan, Mengguo; Cochran, Eric
2014-03-01
In this contribution we discuss how melt- and solid-state properties are influenced by the degree of branching and molecular weight in a family of hyperbranched thermoplastics derived from soybean oil. Acrylated epoxidized triglycerides from soybean oil have been polymerized to hyperbranched thermoplastic elastomers using reversible addition-fragmentation chain transfer (RAFT) polymerization. With the proper choice of chain transfer agent, both homopolymers and block copolymers can be synthesized. By changing the number of acrylic groups per triglyceride, the chain architectures can range from nearly linear to highly branched. We show how the fundamental viscoelastic properties (e.g. entanglement molecular weight, plateau modulus, etc.) are influenced by chain architecture and molecular weight.
FRIEND: a brain-monitoring agent for adaptive and assistive systems.
Morris, Alexis; Ulieru, Mihaela
2012-01-01
This paper presents an architectural design for adaptive-systems agents (FRIEND) that use brain state information to make more effective decisions on behalf of a user, measuring brain context versus situational demands. These systems could be useful for alerting users to cognitive workload levels or fatigue, and could attempt to compensate for higher cognitive activity by filtering noise information. In some cases such systems could also share control of devices, such as pulling over in an automated vehicle. These systems aim to assist people in everyday settings, helping them perform tasks better and be more aware of their internal states. Achieving a functioning system of this sort is a challenge, involving a unification of the brain-computer-interface, human-computer-interaction, soft-computing, and deliberative multi-agent systems disciplines. Until recently, these could not be combined into a usable platform, due largely to technological limitations (e.g., size, cost, and processing speed), insufficient research on extracting behavioral states from EEG signals, and the lack of low-cost wireless sensing headsets. We aim to surpass these limitations and develop control architectures for making sense of brain state in applications by realizing an agent architecture for adaptive (human-aware) technology. In this paper we present an early, high-level design towards implementing a multi-purpose brain-monitoring agent system to improve user quality of life through the assistive applications of psycho-physiological monitoring, noise filtering, and shared system control.
Autonomous Mission Operations for Sensor Webs
NASA Astrophysics Data System (ADS)
Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.
2008-12-01
We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. 
The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.
The Space Microbe Invasion: To Eat or Not to Eat
NASA Technical Reports Server (NTRS)
Munoz, Angela; Jones, Wanda
2013-01-01
Objective: To investigate how different cleaning agents sanitize an assortment of vegetables and fruits for consumption on board the International Space Station (ISS). Description: This laboratory investigation will have students testing different cleaning agents on a variety of vegetables and fruits that can be grown on board the ISS. Students will determine which cleaning agent most effectively lowers the number of bacteria on a variety of vegetables and fruits. This lab will also lend itself to investigations dealing with pH and its role in lowering bacterial counts. In addition, students will determine the correct balance between plant architecture and the effectiveness of sanitizing these surfaces to achieve lower bacteria counts. This will be determined based on swabbed bacteria samples later grown on a Petri dish.
Autonomous Distributed Congestion Control Scheme in WCDMA Network
NASA Astrophysics Data System (ADS)
Ahmad, Hafiz Farooq; Suguri, Hiroki; Choudhary, Muhammad Qaisar; Hassan, Ammar; Liaqat, Ali; Khan, Muhammad Umer
Wireless technology has become widely popular and an important means of communication. A key issue in delivering wireless services is the problem of congestion, which has an adverse impact on the Quality of Service (QoS), especially timeliness. Although a lot of work has been done in the context of Radio Resource Management (RRM), the delivery of quality service to the end user still remains a challenge. Therefore, there is a need for a system that provides real-time services to users with high assurance. We propose an intelligent agent-based approach to guarantee a predefined Service Level Agreement (SLA) with heterogeneous user requirements for appropriate bandwidth allocation in QoS-sensitive cellular networks. The proposed system architecture exploits the Case-Based Reasoning (CBR) technique to handle the RRM process of congestion management. The system accomplishes the predefined SLA through the use of a retrieval and adaptation algorithm based on the CBR case library. The proposed intelligent agent architecture gives autonomy to the Radio Network Controller (RNC) or Base Station (BS) in accepting, rejecting, or buffering a connection request to manage system bandwidth. Instead of simply blocking the connection request as congestion hits the system, different buffering durations are allocated to diverse classes of users based on their SLA. This increases the opportunity of connection establishment and reduces the call blocking rate extensively in a changing environment. We carry out simulations of the proposed system that verify its efficient handling of congestion. The results also show the built-in dynamism of our system in catering for a variety of SLA requirements.
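The retrieval and adaptation steps of a CBR congestion manager can be sketched as follows (the case features, distance metric, and adaptation rule are illustrative assumptions, not the paper's algorithm):

```python
# Case library: past congestion situations and the buffering durations that
# worked. Feature vectors and values are invented for illustration.
case_library = [
    {"load": 0.9, "sla_class": 1, "buffer_s": 0.5},
    {"load": 0.9, "sla_class": 3, "buffer_s": 4.0},
    {"load": 0.6, "sla_class": 2, "buffer_s": 1.0},
]

def retrieve(load, sla_class):
    """Nearest-neighbour retrieval over (load, SLA class)."""
    return min(case_library,
               key=lambda c: abs(c["load"] - load) + abs(c["sla_class"] - sla_class))

def adapt(case, load):
    """Simple adaptation rule: scale the stored buffering with the load mismatch."""
    return case["buffer_s"] * (1.0 + (load - case["load"]))
```

An RNC or BS agent would then buffer the incoming request for the adapted duration rather than rejecting it outright.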
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, a shared resource, forces a robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reactions to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition, and negotiation.
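The intersection-as-shared-resource negotiation can be sketched as a priority queue of crossing bids (the bidding rule is an invented illustration, not the authors' protocol):

```python
import heapq

class Intersection:
    """Shared resource: robots bid for crossing slots and cross in agreed order."""
    def __init__(self):
        self.bids = []  # min-heap of ((arrival_time, -urgency), robot)

    def request(self, robot, arrival_time, cargo_urgency=0):
        # Negotiation rule (illustrative): earlier arrival wins; urgency breaks ties.
        heapq.heappush(self.bids, ((arrival_time, -cargo_urgency), robot))

    def crossing_order(self):
        order = []
        while self.bids:
            order.append(heapq.heappop(self.bids)[1])
        return order

crossing = Intersection()
crossing.request("robot-A", arrival_time=3)
crossing.request("robot-B", arrival_time=1)
crossing.request("robot-C", arrival_time=3, cargo_urgency=5)
```

In a real distributed system each robot would run this rule locally and exchange bids over the network; here one object stands in for the shared view.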
2013-11-18
for each valid interface between the systems. The factor is proportional to the count of feasible interfaces in the meta-architecture framework... proportional to the square root of the sector area being covered by each type of system, plus some time for transmitting data to, and double checking by, the... [22] J.-H. Ahn, "An Architecture Description Method for Acknowledged System of Systems based on Federated Architecture," in Advanced Science and
OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval
Iribarne, Luis; Padilla, Nicolás; Ayala, Rosa; Asensio, José A.; Criado, Javier
2014-01-01
Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the SOLERES OntoTrader agent, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes formalization, an experimental testing environment in three scenarios, and a tool which allows our proposal to be evaluated and validated. PMID:24977211
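The "Query-Searching/Recovering-Response" pattern of a trading service can be sketched without SPARQL or JADE as a plain in-memory offer registry (the field names and matching rule are assumptions for illustration):

```python
class Trader:
    """Toy trading service: exporters register offers, importers query by type.
    The offer fields loosely mimic an ODP trader's service-type matching."""
    def __init__(self):
        self.offers = []

    def export(self, service_type, provider, properties):
        self.offers.append({"type": service_type, "provider": provider, **properties})

    def query(self, service_type, **constraints):
        """Query-Searching/Recovering-Response: offers matching all constraints."""
        hits = [o for o in self.offers if o["type"] == service_type]
        return [o for o in hits
                if all(o.get(k) == v for k, v in constraints.items())]

trader = Trader()
trader.export("satellite-imagery", "agency-A", {"region": "Almeria", "format": "GeoTIFF"})
trader.export("satellite-imagery", "agency-B", {"region": "Granada", "format": "GeoTIFF"})
result = trader.query("satellite-imagery", region="Almeria")
```

A federated deployment would chain `query` calls across peer traders; OntoTrader additionally grounds both offers and queries in a shared ontology.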
Model-Unified Planning and Execution for Distributed Autonomous System Control
NASA Technical Reports Server (NTRS)
Aschwanden, Pascal; Baskaran, Vijay; Bernardini, Sara; Fry, Chuck; Moreno, Maria; Muscettola, Nicola; Plaunt, Chris; Rijsman, David; Tompkins, Paul
2006-01-01
The Intelligent Distributed Execution Architecture (IDEA) is a real-time architecture that exploits artificial intelligence planning as the core reasoning engine for interacting autonomous agents. Rather than enforcing separate deliberation and execution layers, IDEA unifies them under a single planning technology. Deliberative and reactive planners reason about and act according to a single representation of the past, present and future domain state. The domain state evolves according to the rules dictated by a declarative model of the subsystem to be controlled, internal processes of the IDEA controller, and interactions with other agents. We present IDEA concepts - modeling, the IDEA core architecture, the unification of deliberation and reaction under planning - and illustrate its use in a simple example. Finally, we present several real-world applications of IDEA, and compare IDEA to other high-level control approaches.
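The single shared representation of past, present, and future state can be sketched as a timeline of non-overlapping tokens (the token format and overlap rule are illustrative assumptions, not IDEA's internal model):

```python
# Sketch of one timeline shared by deliberative and reactive planners.
class Timeline:
    """Past, present and future state as one ordered list of tokens."""
    def __init__(self):
        self.tokens = []  # (start, end, predicate)

    def insert(self, start, end, predicate):
        # Model rule: tokens on the same timeline must not overlap.
        for s, e, _ in self.tokens:
            if start < e and s < end:
                raise ValueError("token overlaps existing state")
        self.tokens.append((start, end, predicate))
        self.tokens.sort()

    def state_at(self, t):
        for s, e, p in self.tokens:
            if s <= t < e:
                return p
        return None

tl = Timeline()
tl.insert(0, 5, "Idle")          # past (already executed)
tl.insert(5, 9, "TakingImage")   # present
tl.insert(9, 12, "Downlinking")  # future (planned)
```

Because both planners read and write the same token list, a reactive repair (replacing a future token) is immediately visible to deliberation, and vice versa.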
Autonomous Agents on Expedition: Humans and Progenitor Ants and Planetary Exploration
NASA Astrophysics Data System (ADS)
Rilee, M. L.; Clark, P. E.; Curtis, S. A.; Truszkowski, W. F.
2002-01-01
The Autonomous Nano-Technology Swarm (ANTS) is an advanced mission architecture based on a social insect analog of many specialized spacecraft working together to achieve mission goals. The principal mission concept driving the ANTS architecture is a Main Belt Asteroid Survey in the 2020s that will involve a thousand or more nano-technology enabled, artificially intelligent, autonomous pico-spacecraft (< 1 kg). The objective of this survey is to construct a compendium of composition, shape, and other physical parameter observations of a significant fraction of asteroid belt objects. Such an atlas will be of primary scientific importance for the understanding of Solar System origins and evolution and will lay the foundation for future exploration and capitalization of space. As the capabilities enabling ANTS are developed over the next two decades, these capabilities will need to be proven. Natural milestones for this process include the deployment of progenitors to ANTS on human expeditions to space and remote missions with interfaces for human interaction and control. These progenitors can show up in a variety of forms ranging from spacecraft subsystems and advanced handheld sensors, through complete prototypical ANTS spacecraft. A critical capability to be demonstrated is reliable, long-term autonomous operations across the ANTS architecture. High level, mission-oriented behaviors are to be managed by a control / communications layer of the swarm, whereas common low level functions required of all spacecraft, e.g. attitude control and guidance and navigation, are handled autonomically on each spacecraft. At the higher levels of mission planning and social interaction deliberative techniques are to be used. For the asteroid survey, ANTS acts as a large community of cooperative agents while for precursor missions there arises the intriguing possibility of Progenitor ANTS and humans acting together as agents. 
For optimal efficiency and responsiveness for individual spacecraft at the lowest levels of control we have been studying control methods based on nonlinear dynamical systems. We describe the critically important autonomous control architecture of the ANTS mission concept and a sequence of partial implementations that feature increasingly autonomous behaviors. The scientific and engineering roles that these Progenitor ANTS could play in human missions or remote missions with near real time human interactions, particularly to the Moon and Mars, will be discussed.
NASA Astrophysics Data System (ADS)
Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.
1997-12-01
This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the Intelligent Machine Architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.
Trust Management in Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.; Haack, Jereme N.; Fink, Glenn A.
2009-07-07
Reputation-based trust management techniques can address issues such as insider threat as well as quality of service issues that may be malicious in nature. However, trust management techniques must be adapted to the unique needs of the architectures and problem domains to which they are applied. Certain characteristics of swarms such as their lightweight ephemeral nature and indirect communication make this adaptation especially challenging. In this paper we look at the trust issues and opportunities in mobile agent swarm-based autonomic systems and find that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarms. We also analyze the applicability of trust management research as it has been applied to architectures with similar characteristics. Finally, we specify required characteristics for trust management mechanisms to be used to monitor the trustworthiness of the entities in a swarm-based autonomic computing system.
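Monitoring the trustworthiness of a small set of autonomic managers, rather than every swarm sensor, can be sketched with a beta-reputation score (an assumed metric chosen for illustration; the paper does not prescribe this formula):

```python
class TrustManager:
    """Reputation scores for autonomic managers, not individual swarm sensors.
    Beta-reputation update: trust = (good + 1) / (good + bad + 2)."""
    def __init__(self):
        self.history = {}  # manager -> [good, bad] observation counts

    def report(self, manager, ok):
        good, bad = self.history.setdefault(manager, [0, 0])
        self.history[manager] = [good + ok, bad + (not ok)]

    def trust(self, manager):
        good, bad = self.history.get(manager, [0, 0])
        return (good + 1) / (good + bad + 2)  # unknown managers start at 0.5

tm = TrustManager()
for outcome in [True, True, True, False]:
    tm.report("manager-1", outcome)
```

The prior of 0.5 for unseen managers reflects the scalability argument: only the handful of managers accumulate history, while ephemeral swarm members need no per-agent state.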
NASA Astrophysics Data System (ADS)
Lewe, Jung-Ho
The National Transportation System (NTS) is undoubtedly a complex system-of-systems---a collection of diverse 'things' that evolve over time, organized at multiple levels, to achieve a range of possibly conflicting objectives, and never quite behaving as planned. The purpose of this research is to develop a virtual transportation architecture for the ultimate goal of formulating an integrated decision-making framework. The foundational endeavor begins with creating an abstraction of the NTS with the belief that a holistic frame of reference is required to properly study such a multi-disciplinary, trans-domain system. The culmination of the effort produces the Transportation Architecture Field (TAF) as a mental model of the NTS, in which the relationships between four basic entity groups are identified and articulated. This entity-centric abstraction framework underpins the construction of a virtual NTS couched in the form of an agent-based model. The transportation consumers and the service providers are identified as adaptive agents that apply a set of preprogrammed behavioral rules to achieve their respective goals. The transportation infrastructure and multitude of exogenous entities (disruptors and drivers) in the whole system can also be represented without resorting to an extremely complicated structure. The outcome is a flexible, scalable, computational model that allows for examination of numerous scenarios which involve the cascade of interrelated effects of aviation technology, infrastructure, and socioeconomic changes throughout the entire system.
On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.
Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea
2016-09-01
A smart home is a home environment enriched with sensing, actuation, communication, and computation capabilities, which permits adapting it to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case for activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that allow designers to easily design and implement smart home applications. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.
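Activity recognition of the kind CASE supports can be sketched, at its simplest, as matching a window of sensor events against activity signatures (the sensor names, labels, and overlap rule are invented for illustration):

```python
# Illustrative rule-based recognizer: labels and sensor names are assumptions.
RULES = {
    "cooking": {"kitchen_motion", "stove_on"},
    "sleeping": {"bedroom_motion", "lights_off"},
    "watching_tv": {"livingroom_motion", "tv_on"},
}

def recognize(window):
    """Return the activity whose sensor signature best overlaps the event window."""
    events = set(window)
    best, score = "unknown", 0.0
    for activity, signature in sorted(RULES.items()):
        overlap = len(signature & events) / len(signature)
        if overlap > score:
            best, score = activity, overlap
    return best

window = ["kitchen_motion", "stove_on", "fridge_open"]
```

In the CASE architecture this logic would live in an analytics agent, with the raw events collected by sensor-layer agents and the heavier models offloaded to the cloud layer.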
NASA Astrophysics Data System (ADS)
Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo
2002-04-01
Robot learning in unstructured environments has been proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilize perceptions to guide their learning to those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. For this reason, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified by using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find application in ocean exploration, detection, or sea rescue activity, as well as military maritime activity.
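The role of the GA in escaping local optima can be sketched on a toy objective with two separated maxima (the fitness function and GA parameters are invented for illustration; the paper applies the search to neural-network weights guided by the critic):

```python
import random

def fitness(w):
    """Toy objective with two separated maxima (value 0) at w = 2 and w = -1."""
    return -((w - 2.0) ** 2) * ((w + 1.0) ** 2)

def ga_search(generations=60, pop_size=20, seed=0):
    """Global GA search of the kind used to seed the actor network; sketch only."""
    rng = random.Random(seed)
    pop = [rng.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep top half
        children = [(rng.choice(parents) + rng.choice(parents)) / 2
                    + rng.gauss(0, 0.1)          # averaging crossover + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ga_search()
```

A gradient-following actor-critic started in the wrong basin would climb to whichever maximum is nearest; the population search samples both basins, which is the motivation for the GAFRL hybrid.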
Agent Architecture for Aviation Data Integration System
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Wang, Yao; Windrem, May; Patel, Hemil; Wei, Mei
2004-01-01
This paper describes the proposed agent-based architecture of the Aviation Data Integration System (ADIS). ADIS is a software system that provides integrated heterogeneous data to support aviation problem-solving activities. Examples of aviation problem-solving activities include engineering troubleshooting, incident and accident investigation, routine flight operations monitoring, safety assessment, maintenance procedure debugging, and training assessment. A wide variety of information is typically referenced when engaging in these activities. Some of this information includes flight recorder data, Automatic Terminal Information Service (ATIS) reports, Jeppesen charts, weather data, air traffic control information, safety reports, and runway visual range data. Such wide-ranging information cannot be found in any single unified information source. Therefore, this information must be actively collected, assembled, and presented in a manner that supports the user's problem-solving activities. This information integration task is non-trivial and presents a variety of technical challenges. ADIS has been developed to perform this task, and it permits the integration of weather, RVR, radar data, and Jeppesen charts with flight data. ADIS has been implemented and used by several airlines' FOQA teams. The initial feedback from airlines is that such a system is very useful in FOQA analysis. Based on the feedback from the initial deployment, we are developing a new version of the system to make further progress toward the goals of our project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, M.A.; Craig, J.I.
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both systems engineering and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that agents be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems
2001-01-01
... real-time operating system selection is also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the ... experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded ... ObjectAgent software to run on the OSE Real-Time Operating System. In addition, she is responsible for the integration of ObjectAgent ...
A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity
NASA Technical Reports Server (NTRS)
Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and ground-based actions achieve these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. The other is a trajectory constraint minimization function, by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives such as separation assurance and flow management. The underlying hypothesis is that preserving the trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self-separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions.
It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and minimize its constraints. In this framework flexibility is defined in terms of robustness and adaptability to disturbances and the impact of constraints is illustrated through analysis of a trajectory solution space with limited degrees of freedom and in simple constraint situations involving meeting multiple times of arrival and resolving a conflict.
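The idea of trajectory flexibility as a feasible arrival-time interval, with constraint minimization as intersecting that interval with an ATM slot, can be sketched as follows. The numbers and the interval-width metric are illustrative assumptions, not the paper's formal framework:

```python
def arrival_window(distance_nm, v_min_kt, v_max_kt):
    """Feasible time-of-arrival interval (hours) over a fixed-distance segment."""
    return distance_nm / v_max_kt, distance_nm / v_min_kt

def flexibility(window):
    """Width of the feasible interval: a crude robustness/adaptability proxy."""
    earliest, latest = window
    return latest - earliest

def constrain(window, rta_earliest, rta_latest):
    """Intersect the aircraft's window with a required-time-of-arrival slot."""
    earliest, latest = window
    lo, hi = max(earliest, rta_earliest), min(latest, rta_latest)
    return (lo, hi) if lo <= hi else None  # None: constraint is infeasible

w = arrival_window(120.0, 240.0, 480.0)  # 120 nm flown at 240-480 kt
print(flexibility(w))                    # slack before any ATM constraint
print(constrain(w, 0.3, 0.6))            # ATM slot trims, not eliminates, the window
```

A just-enough constraint in this picture is one that trims the interval as little as needed: the narrower the post-constraint window, the less flexibility remains to absorb disturbances.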
An, Gary
2008-05-27
One of the greatest challenges facing biomedical research is the integration and sharing of vast amounts of information, not only for individual researchers, but also for the community at large. Agent-Based Modeling (ABM) can provide a means of addressing this challenge via a unifying translational architecture for dynamic knowledge representation. This paper presents a series of linked ABMs representing multiple levels of biological organization. They are intended to translate the knowledge derived from in vitro models of acute inflammation to clinically relevant phenomena such as multiple organ failure. ABM development followed a sequence starting with relatively direct translation of in vitro-derived rules into a cell-as-agent level ABM, proceeding to the concatenation of ABMs into multi-tissue models, and eventually resulting in topologically linked aggregate multi-tissue ABMs modeling organ-organ crosstalk. As an underlying design principle, organs were considered to be functionally composed of an epithelial surface, which determined organ integrity, and an endothelial/blood interface, representing the reaction surface for the initiation and propagation of inflammation. The development of the epithelial ABM, derived from an in vitro model of gut epithelial permeability, is described. Next, the epithelial ABM was concatenated with the endothelial/inflammatory cell ABM to produce an organ model of the gut. This model was validated against in vivo models of the inflammatory response of the gut to ischemia. Finally, the gut ABM was linked to a similarly constructed pulmonary ABM to simulate the gut-pulmonary axis in the pathogenesis of multiple organ failure. The behavior of this model was validated against in vivo and clinical observations on the crosstalk between these two organ systems. A series of ABMs are presented, extending from the level of intracellular mechanism to clinically observed behavior in the intensive care setting.
The ABMs all utilize cell-level agents that encapsulate specific mechanistic knowledge extracted from in vitro experiments. The execution of the ABMs results in a dynamic representation of the multi-scale conceptual models derived from those experiments. These models represent a qualitative means of integrating basic scientific information on acute inflammation in a multi-scale, modular architecture as a means of conceptual model verification that can potentially be used to concatenate, communicate and advance community-wide knowledge.
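A cell-as-agent model of the kind described, an epithelial cell whose tight junctions degrade under an inflammatory mediator and slowly recover, with permeability emerging as the tissue-level readout, might look like this minimal sketch. The rates and rules are invented for illustration; they are not the in vitro-derived rules of the paper:

```python
class EpithelialCell:
    """Cell-as-agent: one mechanistic rule per time step (rates are invented)."""
    def __init__(self):
        self.tight_junction = 1.0  # 1.0 = intact barrier, 0.0 = fully open

    def step(self, cytokine):
        # Inflammatory mediator degrades tight junctions; cells slowly recover.
        self.tight_junction -= 0.1 * cytokine
        self.tight_junction = min(1.0, max(0.0, self.tight_junction + 0.02))

def permeability(cells):
    """Aggregate tissue-level readout emerging from the cell agents."""
    return 1.0 - sum(c.tight_junction for c in cells) / len(cells)

cells = [EpithelialCell() for _ in range(100)]
for t in range(50):
    cytokine = 1.0 if t < 20 else 0.0  # transient inflammatory insult
    for c in cells:
        c.step(cytokine)

print(round(permeability(cells), 3))  # residual barrier defect after the insult
```

Concatenation, in this picture, means letting another ABM (e.g. inflammatory cells at the endothelial interface) set the `cytokine` signal instead of a scripted insult.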
Model-based Executive Control through Reactive Planning for Autonomous Rovers
NASA Technical Reports Server (NTRS)
Finzi, Alberto; Ingrand, Felix; Muscettola, Nicola
2004-01-01
This paper reports on the design and implementation of a real-time executive for a mobile rover that uses a model-based, declarative approach. The control system is based on the Intelligent Distributed Execution Architecture (IDEA), an approach to planning and execution that provides a unified representational and computational framework for an autonomous agent. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting agents, each with the same fundamental structure. We show that planning and real-time response are compatible if the executive minimizes the size of the planning problem. We detail the implementation of this approach on an exploration rover (Gromit an RWI ATRV Junior at NASA Ames) presenting different IDEA controllers of the same domain and comparing them with more classical approaches. We demonstrate that the approach is scalable to complex coordination of functional modules needed for autonomous navigation and exploration.
Vehicle Maneuver Detection with Accelerometer-Based Classification.
Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F
2016-09-29
In the mobile computing era, smartphones have become instrumental tools for developing innovative mobile context-aware systems. In that sense, their use in the vehicular domain eases the development of novel and personal transportation solutions. In this context, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices by carrying out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of an ad hoc mobile application developed for this purpose.
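A computationally light detection step of this general kind can be illustrated with windowed statistics and threshold rules. The thresholds and labels below are assumptions; the paper evaluates trained classifiers rather than fixed rules:

```python
import statistics

def classify_window(longitudinal_g):
    """Label a window of longitudinal acceleration samples (in g).

    Threshold values are illustrative, not the paper's trained models.
    """
    mean = statistics.mean(longitudinal_g)
    if mean > 0.15:
        return "acceleration"
    if mean < -0.15:
        return "braking"
    if statistics.pstdev(longitudinal_g) < 0.02:
        return "steady"
    return "unknown"

print(classify_window([0.20, 0.25, 0.22, 0.18]))    # acceleration
print(classify_window([-0.3, -0.25, -0.28, -0.2]))  # braking
print(classify_window([0.01, 0.0, -0.01, 0.0]))     # steady
```

An incremental architecture would run cheap checks like these first and invoke heavier classifier agents only on ambiguous windows.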
Wains: a pattern-seeking artificial life species.
de Buitléir, Amy; Russell, Michael; Daly, Mark
2012-01-01
We describe the initial phase of a research project to develop an artificial life framework designed to extract knowledge from large data sets with minimal preparation or ramp-up time. In this phase, we evolved an artificial life population with a new brain architecture. The agents have sufficient intelligence to discover patterns in data and to make survival decisions based on those patterns. The species uses diploid reproduction, Hebbian learning, and Kohonen self-organizing maps, in combination with novel techniques such as using pattern-rich data as the environment and framing the data analysis as a survival problem for artificial life. The first generation of agents mastered the pattern discovery task well enough to thrive. Evolution further adapted the agents to their environment by making them a little more pessimistic, and also by making their brains more efficient.
ERIC Educational Resources Information Center
Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali
2016-01-01
In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…
Fabbri, M; Celotti, G C; Ravaglioli, A
1995-02-01
At the request of medical teams from the maxillofacial sector, a highly porous ceramic support based on hydroxyapatite, of around 70-80% porosity, was produced with a pore size distribution similar to bone texture (< 10 microns, approximately 3 vol%; 10-150 microns, approximately 110 vol%; > 150 microns, approximately 86 vol%). The ceramic substrates were conceived not only as fillers for bone cavities, but also for use as drug dispensers and as supports to host cells producing particular therapeutic agents. A method is suggested to obtain a substrate of high porosity, exploiting the impregnation of a spongy substrate with hydroxyapatite ceramic particles. X-ray and scanning electron microscopy analyses were carried out to evaluate the nature of the new ceramic support in comparison with the most common commercial product; pore size distribution and porosity were controlled to tune the hydroxyapatite ceramic architecture for the different possible uses.
Using Cognitive Agents to Train Negotiation Skills
Stevens, Christopher A.; Daamen, Jeroen; Gaudrain, Emma; Renkema, Tom; Top, Jakob Dirk; Cnossen, Fokie; Taatgen, Niels A.
2018-01-01
Training negotiation is difficult because it is a complex, dynamic activity that involves multiple parties. It is often not clear how to create situations in which students can practice negotiation or how to measure students' progress. Some have begun to address these issues by creating artificial software agents with which students can train. These agents have the advantage that they can be “reset,” and played against multiple times. This allows students to learn from their mistakes and try different strategies. However, these agents are often based on normative theories of how negotiators should conduct themselves, not necessarily how people actually behave in negotiations. Here, we take a step toward addressing this gap by developing an agent grounded in a cognitive architecture, ACT-R. This agent contains a model of theory-of-mind, the ability of humans to reason about the mental states of others. It uses this model to try to infer the strategy of the opponent and respond accordingly. In a series of experiments, we show that this agent replicates some aspects of human performance, is plausible to human negotiators, and can lead to learning gains in a small-scale negotiation task. PMID:29535654
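The theory-of-mind step, inferring the opponent's strategy from observed moves and responding accordingly, can be approximated outside a cognitive architecture with a simple Bayesian belief update. This is a swapped-in stand-in for the paper's ACT-R mechanism, and the strategy names and likelihoods are invented:

```python
# Hypothetical opponent strategies and the per-round behaviour they predict.
STRATEGIES = {
    "hardliner":  {"concede": 0.1, "hold": 0.9},
    "cooperator": {"concede": 0.7, "hold": 0.3},
}

def update_belief(belief, observation):
    """One Bayesian update: P(s | obs) is proportional to P(obs | s) * P(s)."""
    posterior = {s: belief[s] * STRATEGIES[s][observation] for s in belief}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

belief = {"hardliner": 0.5, "cooperator": 0.5}
for obs in ["hold", "hold", "concede"]:
    belief = update_belief(belief, obs)

print(max(belief, key=belief.get))  # hardliner
```

A training agent would then pick its counter-move conditional on the inferred strategy, which is the role the ACT-R theory-of-mind model plays in the paper.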
Next Generation Remote Agent Planner
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Muscettola, Nicola; Morris, Paul H.; Rajan, Kanna
1999-01-01
In May 1999, as part of a unique technology validation experiment onboard the Deep Space One spacecraft, the Remote Agent became the first complete autonomous spacecraft control architecture to run as flight software onboard an active spacecraft. As one of the three components of the architecture, the Remote Agent Planner had the task of laying out the course of action to be taken, which included activities such as turning, thrusting, data gathering, and communicating. Building on the successful approach developed for the Remote Agent Planner, the Next Generation Remote Agent Planner is a completely redesigned and reimplemented version of the planner. The new system provides all the key capabilities of the original planner, while adding functionality, improving performance and providing a modular and extendible implementation. The goal of this ongoing project is to develop a system that provides both a basis for future applications and a framework for further research in the area of autonomous planning for spacecraft. In this article, we present an introductory overview of the Next Generation Remote Agent Planner. We present a new and simplified definition of the planning problem, describe the basics of the planning process, lay out the new system design and examine the functionality of the core reasoning module.
A SOA-based approach to geographical data sharing
NASA Astrophysics Data System (ADS)
Li, Zonghua; Peng, Mingjun; Fan, Wei
2009-10-01
In the last few years, large volumes of spatial data have become available in different government departments in China, but these data are mainly used within those departments. With the e-government project initiated, spatial data sharing has become more and more necessary. Currently, the Web is used not only for document searching but also for the provision and use of services, known as Web services, which are published in a directory and may be automatically discovered by software agents. Particularly in the spatial domain, the possibility of accessing these large spatial datasets via Web services has motivated research into the new field of Spatial Data Infrastructure (SDI) implemented using service-oriented architecture. In this paper a Service-Oriented Architecture (SOA) based Geographical Information System (GIS) is proposed, and a prototype system is deployed based on Open Geospatial Consortium (OGC) standards in Wuhan, China, so that all authorized departments can access the spatial data within the government intranet, and these spatial data can easily be integrated into various kinds of applications.
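Access through OGC services of the kind such a prototype exposes can be illustrated by constructing a standard WMS 1.3.0 GetMap request; the endpoint URL and layer name below are hypothetical:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, size=(512, 512), crs="EPSG:4326"):
    """Build an OGC WMS 1.3.0 GetMap request URL.

    The parameter set (SERVICE, VERSION, REQUEST, LAYERS, STYLES, CRS, BBOX,
    WIDTH, HEIGHT, FORMAT) follows the WMS 1.3.0 specification.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("http://gis.example.gov/wms", "wuhan:landuse",
                     (30.3, 114.0, 30.8, 114.6))
print(url)
```

Because every authorized department issues the same standard request, the map rendering stays decoupled from any one department's application stack, which is the interoperability point of the SOA/OGC approach.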
NASA Astrophysics Data System (ADS)
Zhang, Min; He, Weiyi
2018-06-01
Guided by principal-agent theory and modular theory, collaborative innovation among green technology-based companies, design contractors, and project builders based on a united agency will provide direction for the future development of the green construction supply chain. After analyzing the existing independent agencies, this paper proposes an industry-university-research bilateral collaborative innovation network architecture, with modularization of the innovative function of engineering design in the context of non-standard transformation interfaces; it analyzes the innovation responsibility center and offers countermeasures and suggestions to improve the performance of the bilateral cooperative innovation network.
Smart caching based on mobile agent of power WebGIS platform.
Wang, Xiaohui; Wu, Kehe; Chen, Fei
2013-01-01
Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then study its caching technology in detail, which comprises a dynamic display cache model, a caching structure based on mobile agents, and a cache data model. We designed experiments with different data capacities to contrast the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, in the same hardware environment, the response time of WebGIS both with and without the caching model increased as data capacity grew, and the larger the data, the greater the performance improvement of WebGIS with the proposed caching model.
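The caching idea can be illustrated with a minimal tile cache; here plain LRU eviction stands in for the paper's mobile-agent-managed caching structure, and the tile keys are illustrative:

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for rendered map tiles, keyed by (zoom, x, y)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key not in self._tiles:
            return None
        self._tiles.move_to_end(key)         # mark as recently used
        return self._tiles[key]

    def put(self, key, tile):
        self._tiles[key] = tile
        self._tiles.move_to_end(key)
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict least recently used

cache = TileCache(capacity=2)
cache.put((8, 105, 42), b"tile-a")
cache.put((8, 105, 43), b"tile-b")
cache.get((8, 105, 42))                      # touch tile-a
cache.put((8, 106, 42), b"tile-c")           # evicts tile-b, not tile-a
print(cache.get((8, 105, 43)))               # None: evicted
```

In the paper's design, mobile agents would additionally decide *which* tiles to prefetch and where to hold them; the cache above only shows the retention side of that decision.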
A review of agent-based modeling approach in the supply chain collaboration context
NASA Astrophysics Data System (ADS)
Arvitrida, N. I.
2018-04-01
Collaboration is considered the key aspect of supply chain management (SCM) success. This issue has been addressed by many studies in recent years, but little research employs the agent-based modeling (ABM) approach to study business partnerships in SCM. This paper reviews the use of ABM in modeling collaboration in supply chains and describes the scope of ABM application in the existing literature. The review reveals that ABM can be an effective tool for addressing various aspects of supply chain relationships, but its applications in SCM studies are still limited. Moreover, where ABM is applied in the SCM context, most studies focus on software architecture rather than on analyzing the supply chain issues. This paper also provides SCM researchers with insights into the opportunities for using ABM to study complexity in supply chain collaboration.
The Mobile Agents Integrated Field Test: Mars Desert Research Station April 2003
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ron
2003-01-01
The Mobile Agents model-based, distributed architecture, which integrates diverse components in a system for lunar and planetary surface operations, was extensively tested in a two-week field "technology retreat" at the Mars Society's Desert Research Station (MDRS) during April 2003. More than twenty scientists and engineers from three NASA centers and two universities refined and tested the system through a series of incremental scenarios. Agent software, implemented in runtime Brahms, processed GPS, health data, and voice commands, monitoring, controlling, and logging science data throughout simulated EVAs with two geologists. Predefined EVA plans, modified on the fly by voice command, enabled the Mobile Agents system to provide navigation and timing advice. Communications were maintained over five wireless nodes distributed over hills and into canyons for 5 km; data, including photographs and status, were transmitted automatically to the desktop at mission control in Houston. This paper describes the system configurations, communication protocols, scenarios, and test results.
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle
Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro
2018-01-01
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations, which allows a new embedded agent model to be deployed in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state, and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock has been developed. PMID:29301310
An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids
NASA Technical Reports Server (NTRS)
Nugent, Richard O.; Tucker, Richard W.
1988-01-01
MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to the testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at issues arising from these integration efforts, such as handling potential mismatches between each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
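The name-free service addressing described above can be sketched as a small broker. The system name, service name, and handler below are invented for illustration:

```python
class TestbedNetwork:
    """Name-free service brokering: systems declare services; requesters ask the
    network for a *service*, never for a named server."""
    def __init__(self):
        self._providers = {}

    def declare(self, system_name, service, handler):
        """A connecting system announces a service it can provide."""
        self._providers.setdefault(service, []).append((system_name, handler))

    def request(self, service, *args):
        """Route a request to some provider of the service (policy: first one)."""
        if service not in self._providers:
            raise LookupError(f"no system provides {service!r}")
        _name, handler = self._providers[service][0]
        return handler(*args)

net = TestbedNetwork()
net.declare("ThreatAssessor", "assess_threat",
            lambda track: "hostile" if track["speed"] > 600 else "unknown")
# The requester never mentions "ThreatAssessor" -- only the service name.
print(net.request("assess_threat", {"speed": 900}))  # hostile
```

Because providers are looked up by service, a decision aid can be replaced or duplicated without touching any requester's software, which is the testbed's stated integration goal.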
Development of an evolutionary simulator and an overall control system for intelligent wheelchair
NASA Astrophysics Data System (ADS)
Imai, Makoto; Kawato, Koji; Hamagami, Tomoki; Hirata, Hironori
The goal of this research is to develop an intelligent wheelchair (IWC) system that aids safe indoor mobility for elderly and disabled people, with a new conceptual architecture realizing autonomy, cooperativeness, and collaborative behaviour. To develop the IWC system in a real environment, we need design tools and a flexible architecture. In particular, this paper describes two key techniques: an evolutionary simulation and an overall control mechanism. The evolutionary simulation technique corrects the error between the virtual environment in the simulator and the real one during the learning of an IWC agent, and coevolves with the agent. The overall control mechanism is implemented with a subsumption architecture, as employed in autonomous robot controllers. Through both simulations and experiments with these techniques, we confirm that our IWC system efficiently acquires autonomy, cooperativeness, and collaborative behaviour.
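A subsumption-style controller of the kind mentioned can be sketched as a priority-ordered stack of behaviour layers, where the first layer that fires suppresses those below it. The layer names and thresholds are illustrative, not the IWC's actual behaviour set:

```python
# Highest-priority layer: safety always subsumes everything else.
def avoid_collision(percept):
    if percept.get("obstacle_cm", 1000) < 30:
        return "stop"
    return None                      # layer does not fire

def follow_wall(percept):
    if percept.get("wall_side"):
        return f"track_{percept['wall_side']}_wall"
    return None

def wander(percept):
    return "forward"                 # lowest layer always produces a behaviour

LAYERS = [avoid_collision, follow_wall, wander]   # highest priority first

def control(percept):
    """Return the action of the highest-priority layer that fires."""
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action

print(control({"obstacle_cm": 20, "wall_side": "left"}))  # stop
print(control({"wall_side": "left"}))                     # track_left_wall
print(control({}))                                        # forward
```

The appeal for an IWC is that safety-critical reflexes stay reactive and local while cooperative, higher-level behaviours can be added as further layers without rewriting the ones below.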
Building intelligence in third-generation training and battle simulations
NASA Astrophysics Data System (ADS)
Jacobi, Dennis; Anderson, Don; von Borries, Vance; Elmaghraby, Adel; Kantardzic, Mehmed; Ragade, Rammohan
2003-09-01
Current war games and simulations are primarily attrition based, and are centered on the concept of force on force. They constitute what can be defined as "second generation" war games. So-called "first generation" war games were focused on strategy with the primary concept of mind on mind. We envision "third generation" war games and battle simulations as concentrating on effects with the primary concept being system on system. Thus the third generation systems will incorporate each successive generation and take into account strategy, attrition and effects. This paper will describe the principal advantages and features that need to be implemented to create a true "third generation" battle simulation and the architectural issues faced when designing and building such a system. Areas of primary concern are doctrine, command and control, allied and coalition warfare, and cascading effects. Effectively addressing the interactive effects of these issues is of critical importance. In order to provide an adaptable and modular system that will accept future modifications and additions with relative ease, we are researching the use of a distributed Multi-Agent System (MAS) that incorporates various artificial intelligence methods. The agent architecture can mirror the military command structure from both vertical and horizontal perspectives while providing the ability to make modifications to doctrine, command structures, and inter-command communications, as well as model the results of various effects upon one another and upon the components of the simulation. This is commonly referred to as "cascading effects," in which A affects B, B affects C and so on. Agents can be used to simulate units or parts of units that interact to form the whole. Even individuals can eventually be simulated to take into account the effect of key individuals such as commanders, heroes, and aces.
Each agent will have a learning component built in to provide "individual intelligence" based on experience.
Martinez-Espronceda, Miguel; Martinez, Ignacio; Serrano, Luis; Led, Santiago; Trigo, Jesús Daniel; Marzo, Asier; Escayola, Javier; Garcia, José
2011-05-01
Traditionally, e-Health solutions were located at the point of care (PoC), while the new ubiquitous user-centered paradigm draws on standards-based personal health devices (PHDs). Such devices place strict constraints on computation and battery efficiency, which encouraged the International Organization for Standardization/IEEE 11073 (X73) standard for medical devices to evolve from X73PoC to X73PHD. In this context, low-voltage low-power (LV-LP) technologies meet the restrictions of X73PHD-compliant devices. Since X73PHD does not address the software architecture, the accomplishment of an efficient design falls directly on the software developer. Therefore, the computational and battery performance of such LV-LP-constrained devices can be further improved through an efficient X73PHD implementation design. In this context, this paper proposes a new methodology to implement X73PHD on microcontroller-based platforms with LV-LP constraints. The implementation methodology has been developed through a patterns-based approach and applied to a number of X73PHD-compliant agents (including weighing scale, blood pressure monitor, and thermometer specializations) and microprocessor architectures (8, 16, and 32 bits) as a proof of concept. As a reference, the results obtained for the weighing scale guarantee all features of X73PHD running on a microcontroller architecture based on the ARM7TDMI, requiring only 168 B of RAM and 2546 B of flash memory.
NASA Astrophysics Data System (ADS)
Thomas, Romain; Donikian, Stéphane
Many articles dealing with agent navigation in an urban environment involve the use of various heuristics. Among them, one is prevalent: the search for the shortest path between two points. This strategy impairs the realism of the resulting behaviour. Indeed, psychological studies state that such navigation behaviour is conditioned by the knowledge the subject has of its environment. Furthermore, the path a city dweller follows may be influenced by many factors, such as his daily habits or the path's simplicity in terms of a minimum of direction changes. It appeared interesting to us to investigate how to mimic human navigation behaviour with an autonomous agent. The solution we propose relies on an architecture based on a generic model of an informed environment and a spatial cognitive map model merged with a human-like memory model, representing the temporal knowledge of the environment the agent gained along its navigation experiences.
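The trade-off between metric shortness and path simplicity mentioned above can be illustrated with a small search. This is our own sketch, not the authors' model: a Dijkstra variant over (node, heading) states where each direction change adds a penalty, so a higher penalty favors straighter, "simpler" routes.

```python
import heapq
from itertools import count

# Illustrative sketch (not the paper's architecture): route choice that
# trades metric length against simplicity by penalizing direction changes.

def simplest_path(edges, start, goal, turn_penalty=0.0):
    """Dijkstra over (node, heading) states; an edge is (u, v, length, heading)."""
    graph = {}
    for u, v, length, heading in edges:
        graph.setdefault(u, []).append((v, length, heading))
    tie = count()                     # tie-breaker so the heap never compares paths
    pq = [(0.0, next(tie), start, None, (start,))]
    best = {}
    while pq:
        cost, _, node, heading, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if best.get((node, heading), float("inf")) <= cost:
            continue
        best[(node, heading)] = cost
        for nxt, length, h in graph.get(node, []):
            extra = turn_penalty if heading not in (None, h) else 0.0
            heapq.heappush(pq, (cost + length + extra, next(tie), nxt, h, path + (nxt,)))
    return float("inf"), ()

# The dog-leg A-B-D is metrically shorter; the straight A-C-D has no turn.
edges = [("A", "B", 1.0, "N"), ("B", "D", 1.0, "E"),
         ("A", "C", 1.2, "E"), ("C", "D", 1.2, "E")]
_, metric = simplest_path(edges, "A", "D", turn_penalty=0.0)
_, simple = simplest_path(edges, "A", "D", turn_penalty=1.0)
```

With no penalty the agent takes the shorter dog-leg; with a turn penalty it prefers the slightly longer straight route, echoing the "minimum of direction changes" factor.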
2015-12-24
network, allowing each to communicate with all nodes on the network. Additionally, the transmission power will be turned down to the lowest value. This...reserved for these unmanned agents are generally too dull, dirty, dangerous, or difficult for onboard human pilots to complete. Additionally, the use...architectures do have a much higher level of complexity than single vehicle architectures. Additionally, the weight, size, and power limitations of the
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
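The channel-reduction idea in this abstract can be sketched with a toy model. The details here are our own, not the patent's: instead of every client opening a socket to every device, each client holds one channel to an agent that multiplexes traffic to the devices it manages.

```python
# Hedged sketch of a tiered communications layer: clients talk to an
# agent, which forwards requests to hardware devices, so the channel
# count grows additively (clients + devices) instead of multiplicatively.

class Agent:
    def __init__(self, devices):
        self.devices = devices                   # one channel per device
    def request(self, device_id, command):
        return self.devices[device_id](command)  # forward and return the reply

def open_channels(n_clients, n_devices, tiered):
    """Count open channels: full client-device mesh vs. tiered via an agent."""
    return n_clients + n_devices if tiered else n_clients * n_devices

# Invented device names; real devices would sit behind sockets.
devices = {"magnet": lambda cmd: f"magnet:{cmd}",
           "gantry": lambda cmd: f"gantry:{cmd}"}
agent = Agent(devices)
reply = agent.request("gantry", "status")
```

For 10 clients and 10 devices, the tiered layout needs 20 channels instead of 100, which is the kind of reduction in open sockets the abstract describes.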
Proton beam therapy control system
Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana
2010-09-21
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-06-25
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-12-03
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Counter-terrorism threat prediction architecture
NASA Astrophysics Data System (ADS)
Lehman, Lynn A.; Krause, Lee S.
2004-09-01
This paper will evaluate the feasibility of constructing a system to support intelligence analysts engaged in counter-terrorism. It will discuss the use of emerging techniques to evaluate a large-scale threat data repository (or Infosphere) and compare analyst-developed models to identify and discover potential threat-related activity, with an uncertainty metric used to evaluate the threat. This system will also employ psychological (or intent) modeling to incorporate combatant (i.e., terrorist) beliefs and intent. The paper will explore the feasibility of constructing a hetero-hierarchical (a hierarchy of more than one kind or type, characterized by loose connection/feedback among elements of the hierarchy) agent-based framework or "family of agents" to support "evidence retrieval," defined as combing, or searching, the threat data repository and returning information with an uncertainty metric. The counter-terrorism threat prediction architecture will be guided by a series of models constructed to represent threat operational objectives, potential targets, or terrorist objectives. The approach would compare model representations against information retrieved by the agent family to isolate or identify patterns that match within reasonable measures of proximity. The central areas of discussion will be the construction of an agent framework to search the available threat-related information repository, the evaluation of results against models that represent the cultural foundations, mindset, sociology, and emotional drive of typical threat combatants (i.e., the mind and objectives of a terrorist), and the development of evaluation techniques to compare result sets with the models representing threat behavior and threat targets.
The applicability of concepts surrounding Modeling Field Theory (MFT) will be discussed as the basis of this research into the development of proximity measures between the models and result sets and to provide feedback in support of model adaptation (learning). The increasingly complex demands facing analysts evaluating activity threatening to the security of the United States make the family-of-agents approach to data collection (fusion) a promising area. This paper will discuss a system to support the collection and evaluation of potential threat activity as well as an approach for presentation of the information.
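A proximity measure between a threat model and a retrieved result set could take many forms; the following is a deliberately simple illustration of the idea, not the MFT-based measure the paper proposes. Models and evidence are encoded as invented feature-count vectors, and cosine similarity doubles as a crude match/uncertainty score.

```python
import math

# Toy proximity measure between a threat model and retrieved evidence,
# each a feature-count dict; 1.0 means an exact directional match.
# Feature names below are invented for illustration.

def proximity(model, evidence):
    keys = set(model) | set(evidence)
    dot = sum(model.get(k, 0) * evidence.get(k, 0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in model.values())) *
            math.sqrt(sum(v * v for v in evidence.values())))
    return dot / norm if norm else 0.0

model = {"travel": 2, "funding": 1, "target_recon": 3}
evidence = {"travel": 1, "target_recon": 2}
score = proximity(model, evidence)
```

A real system would replace this with a learned, adaptive measure and a proper uncertainty model, but the shape of the computation, comparing a model representation against retrieved patterns within a measure of proximity, is the same.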
Social Simulation for AmI Systems Engineering
NASA Astrophysics Data System (ADS)
Garcia-Valverde, Teresa; Serrano, Emilio; Botia, Juan A.
This paper proposes the use of multi-agent based simulation (MABS) to allow testing, validating and verifying Ambient Intelligence (AmI) environments in a flexible and robust way. The development of AmI is very complex because this technology must often adapt to contextual information as well as unpredictable and changeable behaviours. The concrete simulation is called Ubik and is integrated into the AmISim architecture, which is also presented in this paper. This architecture deals with AmI applications in order to discover defects, estimate the quality of applications, help to make decisions about the design, etc. The paper shows that Ubik and AmISim provide a simulation framework that can test scenarios that would be impossible in real environments or even with previous AmI simulation approaches.
The agent-based spatial information semantic grid
NASA Astrophysics Data System (ADS)
Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren
2006-10-01
Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent systems are composed of User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent, and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer, and the application layer. The fabric layer, which is composed of the Data Access Agent, Resource Agent, and Geo-Agent, encapsulates the data of spatial information systems so as to exhibit a conceptual interface for the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agent, Monitor Agent, and Data Analysis Agent, uses a hybrid method to manage all resources registered in a General Ontology Agent that is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: resource dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and resource discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens the burden on users because they need not search information site by site manually. The application layer, which is composed of the User Agent, Geo-Agent, and Task Execution Agent, can supply a corresponding interface to a domain user.
The functions that ASISG should provide are: 1) Integration of different spatial information systems on the semantic level: the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) When the resource management system searches data on different spatial information systems, it transfers the meaning of different Local Ontology Agents rather than accessing data directly, so search and query operate on the semantic level. 3) The data access procedure is transparent to users; they can access information from a remote site as if from a local disk, because the General Ontology Agent can automatically link data through the Data Agents that tie ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing, and managing massive spatial data from TB to PB; efficiently analyzing and processing spatial data to produce models, information, and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: distributed heterogeneous spatial information resources are shared, integrated, and inter-operated on the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS, and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: ASISG can be used not only to construct new advanced spatial application systems but also to integrate legacy GIS systems, so as to preserve extensibility and inheritance and protect users' investments. 8) The capability of collaboration.
Large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting integration of heterogeneous systems: large-scale spatial information systems are always synthesized applications, so ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic changes: business requirements, application patterns, management strategies, and IT products change endlessly for any department, so ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, the semantic grid of spatial information systems can improve the integration and interoperability of the spatial information grid.
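The hybrid push/pull resource management described above can be sketched minimally. This is our reading of the abstract, not the paper's implementation: local ontology agents *push* (disseminate) resource descriptions up to the General Ontology Agent, and queries *pull* (discover) matching sites back down. Site and concept names are invented.

```python
# Minimal sketch of the ASISG hybrid resource-management idea:
# dissemination pushes concept -> site registrations upward,
# discovery pulls the registered sites for a concept back down.

class GeneralOntologyAgent:
    def __init__(self):
        self.registry = {}                       # concept -> set of sites

    def disseminate(self, site, concepts):       # push, from a Local Ontology Agent
        for concept in concepts:
            self.registry.setdefault(concept, set()).add(site)

    def discover(self, concept):                 # pull, toward Local Ontology Agents
        return sorted(self.registry.get(concept, set()))

general = GeneralOntologyAgent()
general.disseminate("gis-site-a", ["river", "road"])
general.disseminate("gis-site-b", ["road", "landuse"])
sites = general.discover("road")
```

A query for the concept "road" resolves to both registered sites without the user searching site by site, which is the burden-lightening role the virtual organization plays in the abstract.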
Distributed, cooperating knowledge-based systems
NASA Technical Reports Server (NTRS)
Truszkowski, Walt
1991-01-01
Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.
NASA Astrophysics Data System (ADS)
Black, Randy; Bai, Haowei; Michalicek, Andrew; Shelton, Blaine; Villela, Mark
2008-01-01
Currently, autonomy in space applications is limited by a variety of technology gaps. Innovative application of wireless technology and avionics architectural principles drawn from the Orion crew exploration vehicle provide solutions for several of these gaps. The Vision for Space Exploration envisions extensive use of autonomous systems. Economic realities preclude continuing the level of operator support currently required of autonomous systems in space. In order to decrease the number of operators, more autonomy must be afforded to automated systems. However, certification authorities have been notoriously reluctant to certify autonomous software in the presence of humans or when costly missions may be jeopardized. The Orion avionics architecture, drawn from advanced commercial aircraft avionics, is based upon several architectural principles including partitioning in software. Robust software partitioning provides "brick wall" separation between software applications executing on a single processor, along with controlled data movement between applications. Taking advantage of these attributes, non-deterministic applications can be placed in one partition and a "Safety" application created in a separate partition. This "Safety" partition can track the position of astronauts or critical equipment and prevent any unsafe command from executing. Only the Safety partition need be certified to a human rated level. As a proof-of-concept demonstration, Honeywell has teamed with the Ultra WideBand (UWB) Working Group at NASA Johnson Space Center to provide tracking of humans, autonomous systems, and critical equipment. Using UWB the NASA team can determine positioning to within less than one inch resolution, allowing a Safety partition to halt operation of autonomous systems in the event that an unplanned collision is imminent. Another challenge facing autonomous systems is the coordination of multiple autonomous agents. 
Current approaches address the issue as one of networking and coordination of multiple independent units, each with its own mission. As a proof of concept, Honeywell is developing and testing various algorithms that lead to a deterministic, fault-tolerant, reliable wireless backplane. Just as advanced avionics systems control several subsystems, actuators, sensors, displays, etc., a single "master" autonomous agent (or base station computer) could control multiple autonomous systems. The problem is simplified to controlling a flexible body consisting of several sensors and actuators, rather than coordinating multiple independent units. By filling technology gaps associated with space-based autonomous systems, wireless technology and Orion architectural principles provide the means for decreasing operational costs and simplifying problems associated with the collaboration of multiple autonomous systems.
Building distributed rule-based systems using the AI Bus
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain C.
1990-01-01
The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running in heterogeneous distributed environments and utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix and wrapped an embedded CLIPS with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables, whose solution required constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS, by separating out its pattern matcher, rule and fact representations, and other components as true object-oriented modules.
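The blackboard-style communication between knowledge sources described above can be illustrated with a small sketch. The class names and rules here are ours, and CLIPS is replaced by plain Python callables standing in for knowledge sources; this is not the AI Bus API.

```python
# Sketch of blackboard-mediated knowledge sources: each source watches
# for a triggering fact on the shared blackboard and posts a conclusion,
# and sources fire repeatedly until no new facts appear (quiescence).

class Blackboard:
    def __init__(self):
        self.facts = set()

class KnowledgeSource:
    def __init__(self, trigger, conclusion):
        self.trigger, self.conclusion = trigger, conclusion

    def run(self, bb):
        if self.trigger in bb.facts and self.conclusion not in bb.facts:
            bb.facts.add(self.conclusion)
            return True
        return False

def run_agent(bb, sources):
    """Fire knowledge sources until quiescence, like a forward chainer."""
    while any(ks.run(bb) for ks in sources):
        pass
    return bb.facts

# Invented ground-operations rules for illustration.
bb = Blackboard()
bb.facts.add("telemetry-dropout")
sources = [KnowledgeSource("telemetry-dropout", "check-antenna"),
           KnowledgeSource("check-antenna", "schedule-maintenance")]
facts = run_agent(bb, sources)
```

In the real architecture each knowledge source would be a CLIPS knowledge base with its own inference engine, and blackboards plus acquaintances would span process boundaries; the chaining pattern, however, is the same.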
Schrodt, Fabian; Kneissler, Jan; Ehrenfeld, Stephan; Butz, Martin V
2017-04-01
In line with Allen Newell's challenge to develop complete cognitive architectures, and motivated by a recent proposal for a unifying subsymbolic computational theory of cognition, we introduce the cognitive control architecture SEMLINCS. SEMLINCS models the development of an embodied cognitive agent that learns discrete production rule-like structures from its own, autonomously gathered, continuous sensorimotor experiences. Moreover, the agent uses the developing knowledge to plan and control environmental interactions in a versatile, goal-directed, and self-motivated manner. Thus, in contrast to several well-known symbolic cognitive architectures, SEMLINCS is not provided with production rules and the involved symbols, but learns them. In this paper, the actual implementation of SEMLINCS demonstrates learning and self-motivated, autonomous behavioral control of the game character Mario in a clone of the computer game Super Mario Bros. Our evaluations highlight the successful development of behavioral versatility as well as the learning of suitable production rules and the involved symbols from sensorimotor experiences. Moreover, knowledge- and motivation-dependent individualizations of the agents' behavioral tendencies are shown. Finally, interaction sequences can be planned on the sensorimotor-grounded production rule level. Current limitations directly point toward the need for several further enhancements, which may be integrated into SEMLINCS in the near future. Overall, SEMLINCS may be viewed as an architecture that allows the functional and computational modeling of embodied cognitive development, whereby the current main focus lies on the development of production rules from sensorimotor experiences. Copyright © 2017 Cognitive Science Society, Inc.
2014-06-01
information superiority in network-centric warfare. A brief discussion of the implementation of battlespace awareness is given...developing the model used for this study. Lanchester Equations, System Dynamics models, Discrete Event Simulation, and Agent-based models (ABMs) were...popularity in the military modeling community in recent years due to their ability to effectively capture complex interactions in warfare scenarios with many
Student Modeling in an Intelligent Tutoring System
1996-12-17
Multi-Agent Architecture." Advances in Artificial Intelligence: Proceedings of the 12th Brazilian Symposium on Artificial Intelligence, edited by...STUDENT MODELING IN AN INTELLIGENT TUTORING SYSTEM, Thesis, Jeremy E. Thompson, Captain, USAF, AFIT/GCS/ENG/96D-27...Air Force Base, Ohio
Future applications of artificial intelligence to Mission Control Centers
NASA Technical Reports Server (NTRS)
Friedland, Peter
1991-01-01
Future applications of artificial intelligence to Mission Control Centers are presented in the form of the viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; inhouse research program; constraint-based scheduling; learning and performance improvement for scheduling; GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability, the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and the environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of the shared goal between the user and the agent. Since the agent and brain share the same goal, a symbiotic relationship between them will evolve; therefore this decoding paradigm is called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as changes in the pattern of neural activity.
The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
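The Actor-Critic decoding loop above can be caricatured in a few lines. This is a toy illustration, not the dissertation's algorithm: the Actor maps a (here symbolic) neural state to a robot action, and an external evaluative signal, standing in for the Critic's NAcc-derived feedback, adjusts its policy values. States, actions, and the learning rule are invented for illustration.

```python
# Toy Actor whose policy values are shaped by an external evaluative
# feedback signal (a stand-in for the Critic's NAcc-derived estimate).

class Actor:
    def __init__(self, states, actions, lr=0.5):
        self.q = {(s, a): 0.0 for s in states for a in actions}
        self.actions, self.lr = actions, lr

    def act(self, state):
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, feedback):    # feedback in [-1, 1]
        key = (state, action)
        self.q[key] += self.lr * (feedback - self.q[key])

actor = Actor(states=["left-cue"], actions=["reach-left", "reach-right"])
for _ in range(5):                    # the critic rewards the correct reach
    for a in actor.actions:
        actor.learn("left-cue", a, 1.0 if a == "reach-left" else -1.0)
```

After a few rounds of feedback the Actor's value for the rewarded reach dominates, so the decoder selects the correct action; in the real system both the state (MI activity) and the feedback (NAcc activity) are continuous neural signals.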
Integrating manufacturing softwares for intelligent planning execution: a CIIMPLEX perspective
NASA Astrophysics Data System (ADS)
Chu, Bei Tseng B.; Tolone, William J.; Wilhelm, Robert G.; Hegedus, M.; Fesko, J.; Finin, T.; Peng, Yun; Jones, Chris H.; Long, Junshen; Matthews, Mike; Mayfield, J.; Shimp, J.; Su, S.
1997-01-01
Recent developments have made it possible to interoperate complex business applications at much lower costs. Application interoperation, along with business process re-engineering, can result in significant savings by eliminating work created by disconnected business processes due to isolated business applications. However, we believe much greater productivity benefits can be achieved by facilitating timely decision-making that utilizes information from multiple enterprise perspectives. The CIIMPLEX enterprise integration architecture is designed to enable such productivity gains by helping people carry out integrated enterprise scenarios. An enterprise scenario is typically triggered by some external event. The goal of an enterprise scenario is to make the right decisions considering the full context of the problem. Enterprise scenarios are difficult for people to carry out because of the interdependencies among various actions; one can easily be overwhelmed by the large amount of information. We propose the use of software agents to help gather relevant information and present it in the appropriate context of an enterprise scenario. The CIIMPLEX enterprise integration architecture is based on the FAIME methodology for application interoperation and plug-and-play. It also explores the use of software agents in application plug-and-play.
Tello-Leal, Edgar; Chiotti, Omar; Villarreal, Pablo David
2012-12-01
The paper presents a methodology that follows a top-down approach based on a Model-Driven Architecture for integrating and coordinating healthcare services through cross-organizational processes, enabling organizations to provide high-quality healthcare services and continuous process improvements. The methodology provides a modeling language that enables organizations to conceptualize an integration agreement and to identify and design cross-organizational process models. These models are used for the automatic generation of: the private view of the processes each organization should perform to fulfill its role in cross-organizational processes, and Colored Petri Net specifications to implement these processes. A multi-agent system platform provides agents able to interpret Colored Petri Nets to enable the communication between the Healthcare Information Systems for executing the cross-organizational processes. Clinical documents are defined using the HL7 Clinical Document Architecture. This methodology guarantees that important requirements for healthcare service integration and coordination are fulfilled: interoperability between heterogeneous Healthcare Information Systems; the ability to cope with changes in cross-organizational processes; alignment between the integrated healthcare service solution defined at the organizational level and the solution defined at the technological level; and the distributed execution of cross-organizational processes while preserving the organizations' autonomy.
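The idea of an agent interpreting a Petri-net process specification can be sketched very simply. This is a heavily simplified illustration, far short of full Colored Petri Net semantics: place and transition names are invented, and token "colors" are just Python values.

```python
# Minimal Petri-net interpreter: a transition fires when every input
# place holds a token; it consumes one token per input place and emits
# tokens computed from the consumed ones into the output places.

class PetriNet:
    def __init__(self, marking):
        self.marking = {p: list(tokens) for p, tokens in marking.items()}

    def fire(self, inputs, outputs):
        if not all(self.marking.get(p) for p in inputs):
            return False                       # transition not enabled
        consumed = [self.marking[p].pop() for p in inputs]
        for place, produce in outputs.items():
            self.marking.setdefault(place, []).append(produce(consumed))
        return True

# Invented referral step: a clinical document moves from one
# organization's "sent" place to the partner's "received" place.
net = PetriNet({"sent": ["CDA-doc-1"], "received": []})
ok = net.fire(["sent"], {"received": lambda tokens: tokens[0]})
```

An interpreting agent would repeatedly look for enabled transitions like this one, using the firings to drive message exchanges between the Healthcare Information Systems.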
Architecture for Building Conversational Agents that Support Collaborative Learning
ERIC Educational Resources Information Center
Kumar, R.; Rose, C. P.
2011-01-01
Tutorial Dialog Systems that employ Conversational Agents (CAs) to deliver instructional content to learners in one-on-one tutoring settings have been shown to be effective in multiple learning domains by multiple research groups. Our work focuses on extending this successful learning technology to collaborative learning settings involving two or…
A Multi-Agent System Approach for Distance Learning Architecture
ERIC Educational Resources Information Center
Turgay, Safiye
2005-01-01
The goal of this study is to suggest the agent systems by intelligence and adaptability properties in distance learning environment. The suggested system has flexible, agile, intelligence and cooperation features. System components are teachers, students (learners), and resources. Inter component relations are modeled and reviewed by using the…
Do Intelligent Robots Need Emotion?
Pessoa, Luiz
2017-11-01
What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive-emotional integration should be a key design principle. Copyright © 2017 Elsevier Ltd. All rights reserved.
Agent-Based Intelligent Interface for Wheelchair Movement Control
Barriuso, Alberto L.; De Paz, Juan F.
2018-01-01
People who suffer from any kind of motor difficulty face serious complications in moving autonomously in their daily lives. However, a growing number of research projects proposing different powered wheelchair control systems are emerging. Despite the interest of the research community in the area, there is no platform that allows easy integration of various control methods that make use of heterogeneous sensors and computationally demanding algorithms. In this work, an architecture based on virtual organizations of agents is proposed that makes use of a flexible and scalable communication protocol allowing the deployment of embedded agents in computationally limited devices. In order to validate the proper functioning of the proposed system, it has been integrated into a conventional wheelchair, and a set of alternative control interfaces has been developed and deployed, including a portable electroencephalography system, a voice interface, and a specifically designed smartphone application. A set of tests was conducted to assess both the platform's adequacy and the accuracy and ease of use of the proposed control systems, yielding positive results that can be useful in further wheelchair interface design and implementation. PMID:29751603
Fault-tolerant Control of a Cyber-physical System
NASA Astrophysics Data System (ADS)
Roxana, Rusu-Both; Eva-Henrietta, Dulf
2017-10-01
Cyber-physical systems represent a new emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability, and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy, and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
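The fault-tolerant pattern behind such architectures can be sketched without JADE or fractional-order control. This is our own simplification, not the paper's design: a supervisor agent monitors a residual signal and switches from the nominal control law to a conservative backup when the residual crosses a fault threshold.

```python
# Toy fault-tolerant supervisor: switch controllers when the monitored
# residual indicates a fault. Control laws and threshold are invented.

class SupervisorAgent:
    def __init__(self, nominal, backup, threshold=0.5):
        self.nominal, self.backup, self.threshold = nominal, backup, threshold
        self.active = "nominal"

    def step(self, setpoint, measurement, residual):
        if abs(residual) > self.threshold:
            self.active = "backup"             # reconfigure on detected fault
        controller = self.nominal if self.active == "nominal" else self.backup
        return controller(setpoint - measurement)

p_controller = lambda e: 2.0 * e                    # nominal proportional law
safe_controller = lambda e: max(-1.0, min(1.0, e))  # conservative clamped backup

sup = SupervisorAgent(p_controller, safe_controller)
u1 = sup.step(1.0, 0.8, residual=0.1)          # healthy: nominal control acts
u2 = sup.step(1.0, 0.8, residual=0.9)          # fault detected: backup takes over
```

In the paper this role is distributed across JADE agents linked to the Simulink process model through MACSimJX, and the controllers are robust fractional-order designs rather than these toy laws.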
Smart Caching Based on Mobile Agent of Power WebGIS Platform
Wang, Xiaohui; Wu, Kehe; Chen, Fei
2013-01-01
Power information construction is developing in an intensive, platform-based, distributed direction with the expansion of the power grid and the improvement of information technology. To meet this trend, a power WebGIS was designed and developed. In this paper, we first discuss the architecture and functionality of the power WebGIS, and then study its caching technology in detail, which comprises a dynamic display cache model, a caching structure based on mobile agents, and a cache data model. We designed experiments with different data capacities to contrast the performance of WebGIS with the proposed caching model against traditional WebGIS. The experimental results showed that, in the same hardware environment, the response times of WebGIS both with and without the caching model increased as data capacity grew, but the larger the data, the greater the performance improvement of WebGIS with the proposed caching model. PMID:24288504
SLAE–CPS: Smart Lean Automation Engine Enabled by Cyber-Physical Systems Technologies
Ma, Jing; Wang, Qiang; Zhao, Zhibiao
2017-01-01
In the context of Industry 4.0, the demand for mass production of highly customized products leads to complex products and an increasing demand for production system flexibility. Simply implementing lean-production-based, human-centered production or high automation to improve system flexibility is insufficient. Currently, lean automation (Jidoka) that utilizes cyber-physical systems (CPS) is considered a cost-efficient and effective approach for improving system flexibility under shrinking global economic conditions. Therefore, a smart lean automation engine enabled by CPS technologies (SLAE–CPS), based on an analysis of Jidoka functions and the smart capacity of CPS technologies, is proposed in this study to provide an integrated and standardized approach to designing and implementing a CPS-based smart Jidoka system. Achieving this goal requires a comprehensive architecture and a set of standardized key technologies. Therefore, a distributed architecture that joins service-oriented architecture, agents, function blocks (FBs), cloud, and the Internet of things is proposed to support the flexible configuration, deployment, and performance of SLAE–CPS. Several standardized key techniques are then proposed under this architecture. The first converts heterogeneous physical data into uniform services for subsequent abnormality analysis and detection. The second is a set of Jidoka scene rules, abstracted from an analysis of operator, machine, material, quality, and other factors across different time dimensions; these rules allow executive FBs to perform different Jidoka functions. Finally, supported by the integrated and standardized approach of the proposed engine, a case study is conducted to verify the current research results. The proposed SLAE–CPS provides an important reference for combining the benefits of innovative technology with proper methodology. PMID:28657577
Beyond 10 Years of Evolving the IGSN Architecture: What's Next?
NASA Astrophysics Data System (ADS)
Lehnert, K.; Arko, R. A.
2016-12-01
The IGSN was developed as part of a US NSF-funded project, started in 2004, to establish a registry for sample metadata: the System for Earth Sample Registration (SESAR). The initial version of the system provided a centralized solution for users to submit information about their samples and obtain IGSNs and bar codes. A new distributed architecture for the IGSN was designed at a workshop in 2011 that aimed to advance the global implementation of the IGSN. The workshop led to the founding of an international non-profit organization, the IGSN e.V., which adopted the governance model of the DataCite consortium as a non-profit membership organization and its architecture with a central registry and a network of distributed Allocating Agents that provide registration services to users. Further progress came at a workshop in 2015, where stakeholders from both the geoscience and life science disciplines drafted a standard IGSN metadata schema for describing samples with an essential set of properties about a sample's origin and classification, creating a "birth certificate" for the sample. Consensus was reached that the IGSN should also be used to identify sampling features and collections of samples. The IGSN e.V. global network has steadily grown, with members now on four continents and five Allocating Agents operational in the US, Australia, and Europe. A Central Catalog has been established at the IGSN Management Office that harvests "birth certificate" metadata records from Allocating Agents via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), and publishes them as a Linked Open Data graph using the Resource Description Framework (RDF) and the SPARQL query language for reuse by Semantic Web clients. Next developments will include a web-based validation service that allows journal editors to check the validity of IGSNs and compliance with metadata requirements, and the use of community-recommended vocabularies for specific disciplines.
He, Xiao-Peng; Tian, He
2016-01-13
Ever since the discovery of graphene, increasing efforts have been devoted to the use of this stellar material as well as the development of other graphene-like materials such as thin-layer transition metal dichalcogenides and oxides (TMD/Os) for a variety of applications. Because of their large surface area and unique optical properties, these two-dimensional materials with a size ranging from the micro- to the nanoscale have been employed as the substrate to construct photoluminescence architectures for disease diagnosis as well as theranostics. These architectures are built through the simple self-assembly of labeled biomolecular probes with the substrate material, leading to signal quenching. Upon the specific interaction of the architecture with a target biomarker, the signal can be spontaneously restored in a reversible manner. Meanwhile, by co-loading therapeutic agents and employing the inherent photo-thermal properties of the material substrates, a combined disease imaging and therapy (theranostics) can be achieved. This review highlights the latest advances in the construction and application of graphene and TMD/O based thin-layer material composites for single-target and multiplexed detection of a variety of biomarkers and theranostics. These versatile material architectures, owing to their ease in preparation, low cost and flexibility in functionalization, provide promising tools for both basic biochemical research and clinical applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Integrating planning, execution, and learning
NASA Technical Reports Server (NTRS)
Kuokka, Daniel R.
1989-01-01
To achieve the goal of building an autonomous agent, the usually disjoint capabilities of planning, execution, and learning must be used together. An architecture, called MAX, within which cognitive capabilities can be purposefully and intelligently integrated is described. The architecture supports the codification of capabilities as explicit knowledge that can be reasoned about. In addition, specific problem solving, learning, and integration knowledge is developed.
Strategies for the synthesis of the novel antitumor agent peloruside A
Williams, David R; Nag, Partha P; Zorn, Nicolas
2009-01-01
The microtubule-stabilizing agent (+)-peloruside A has emerged as a potential therapeutic agent for the treatment of cancer. Two total syntheses have been published, and these reports have stimulated additional studies to advance the methodology and strategies for accessing this molecular architecture. This review details the biological data, modeling and conformational analyses, and synthetic studies toward the synthesis of (+)-peloruside A reported prior to December 2007. PMID:18283613
Berkowitz, Murray R
2013-01-01
Current information systems for use in detecting bioterrorist attacks lack a consistent, overarching information architecture. An overview of the use of biological agents as weapons during a bioterrorist attack is presented. Proposed are the design, development, and implementation of a medical informatics system to mine pertinent databases, retrieve relevant data, invoke appropriate biostatistical and epidemiological software packages, and automatically analyze these data. The top-level information architecture is presented. Systems requirements and functional specifications for this level are presented. Finally, future studies are identified.
Unified Simulation and Analysis Framework for Deep Space Navigation Design
NASA Technical Reports Server (NTRS)
Anzalone, Evan; Chuang, Jason; Olsen, Carrie
2013-01-01
As the technology that enables advanced deep-space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. This tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems, to develop system requirements, and to determine the navigation system's effect on the sizing of the integrated vehicle. The development of such a framework builds upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and the possible state measurements and observations that feed the simulation implementation structure. These models also provide a common environment for capturing an increasingly complex operational architecture involving multiple spacecraft, ground stations, and communication networks. To address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions among spacecraft. This paper describes the development of this framework and the modeling processes used to capture a deep space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. This developed package shows the capability of the modeling framework, including its modularity, analysis capabilities, and its unification back to the overall system requirements and definition.
Research on mixed network architecture collaborative application model
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business, and increasing business complexity, collaboration among multiple users and departments is urgently needed; however, conventional GIS software (whether Client/Server or Browser/Server model) does not support this well. Collaborative applications are one good resolution. A collaborative application must resolve four main problems: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, proactive, and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation, and they bring new methods for cooperation and for accessing spatial data. A multi-level cache holds part of the full dataset; it reduces network load and improves access to and handling of spatial data, especially when editing. With agent technology, we make full use of agents' intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
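The multi-level cache lookup path described in this abstract can be sketched as follows. The class and tile-naming scheme are hypothetical, and the agent-managed cache policies of AMCM are not modeled here; this only shows how a small fast cache backed by a larger local cache cuts round trips to the authoritative store.

```python
# Illustrative two-level cache for spatial tiles (names hypothetical).
class MultiLevelCache:
    def __init__(self, backend):
        self.l1 = {}            # fast, small in-memory cache
        self.l2 = {}            # larger local cache (e.g., on disk in practice)
        self.backend = backend  # authoritative spatial data store (network fetch)
        self.backend_hits = 0

    def get(self, tile_id):
        if tile_id in self.l1:
            return self.l1[tile_id]
        if tile_id in self.l2:
            self.l1[tile_id] = self.l2[tile_id]  # promote to L1
            return self.l2[tile_id]
        data = self.backend(tile_id)             # cache miss: fetch remotely
        self.backend_hits += 1
        self.l1[tile_id] = self.l2[tile_id] = data
        return data

cache = MultiLevelCache(backend=lambda t: f"tile-{t}")
cache.get("a"); cache.get("a"); cache.get("b")
print(cache.backend_hits)  # 2 - the repeated request is served from cache
```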
Scalable sensor management for automated fusion and tactical reconnaissance
NASA Astrophysics Data System (ADS)
Walls, Thomas J.; Wilson, Michael L.; Partridge, Darin C.; Haws, Jonathan R.; Jensen, Mark D.; Johnson, Troy R.; Petersen, Brad D.; Sullivan, Stephanie W.
2013-05-01
The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities operate largely independent of one another, providing a selection of operating modes but not an integrated intelligence product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. Management of sensors and user agents takes place over standard network protocols such that any number and combination of sensors and user agents, either on the local network or connected via data link, can register with the SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant sensor system. 
The SMS architecture will be described and results from several flight tests and simulations will be shown.
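The register-and-route behavior described for the Sensor Management System resembles a publish/subscribe pattern: sensors and user agents can join at any time, and data products are logged and routed to subscribers by topic. The sketch below is schematic, not the SMS API; all topic names and classes are invented for illustration.

```python
# Minimal publish/subscribe sketch of topic-based data-product routing.
from collections import defaultdict

class SensorManager:
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> list of subscriber callbacks
        self.log = []                   # every product is also logged

    def subscribe(self, topic, callback):
        # A user agent (local or over a data link) registers at any time.
        self.subs[topic].append(callback)

    def publish(self, topic, product):
        # A sensor reports a data product; it is logged and fanned out.
        self.log.append((topic, product))
        for cb in self.subs[topic]:
            cb(product)

sms = SensorManager()
received = []
sms.subscribe("eo/frame", received.append)  # a user agent joins
sms.publish("eo/frame", "frame-001")        # an EO sensor reports
sms.publish("ir/frame", "frame-ir-7")       # no subscriber yet: logged only
print(received, len(sms.log))  # ['frame-001'] 2
```

Cueing from one sensor to another, as the abstract describes, would amount to a processing agent subscribing to one topic and publishing tasking requests on another.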
Optimal control in microgrid using multi-agent reinforcement learning.
Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin
2012-11-01
This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a microgrid operating in grid-connected mode. First, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable, "Average Electricity Price Trend", which expresses the most probable transitions of the system, is introduced to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables, and a reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the rate of change of a key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method helps handle the "curse of dimensionality" and speeds up learning in an unknown large-scale world. Finally, simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method for optimal control of a microgrid in grid-connected mode. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
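A toy tabular Q-learning update of the kind this abstract builds on, with a discretized price trend as part of the state, can be sketched as below. The states, actions, reward, and learning parameters are invented for illustration and do not reproduce the paper's hierarchical method.

```python
# Toy tabular Q-learning with a discretized "price trend" in the state.
alpha, gamma = 0.1, 0.9
Q = {}  # (state, action) -> value; state = (price_trend, storage_level)

def q(s, a):
    return Q.get((s, a), 0.0)

def update(s, a, reward, s_next, actions):
    # Standard Q-learning backup toward reward + discounted best next value.
    best_next = max(q(s_next, a2) for a2 in actions)
    Q[(s, a)] = q(s, a) + alpha * (reward + gamma * best_next - q(s, a))

actions = ["charge", "discharge", "idle"]
# One illustrative transition: price trending up, battery half full;
# discharging earns revenue (positive reward).
update(("up", "half"), "discharge", reward=1.0,
       s_next=("up", "low"), actions=actions)
print(round(Q[(("up", "half"), "discharge")], 3))  # 0.1
```

In the multi-agent setting each unit's agent would maintain such a table (or the hierarchical decomposition of it) over its own action variables.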
An Automated End-to-End Multi-Agent QoS-Based Architecture for Selection of Geospatial Web Services
NASA Astrophysics Data System (ADS)
Shah, M.; Verma, Y.; Nandakumar, R.
2012-07-01
Over the past decade, Service-Oriented Architecture (SOA) and Web services have gained wide popularity and acceptance from researchers and industries all over the world. SOA makes it easy to build business applications with common services, and it provides benefits such as reduced integration expense, better asset reuse, higher business agility, and reduced business risk. Building a framework for acquiring useful geospatial information for potential users is a crucial problem faced by the GIS domain, and geospatial Web services address it. With the help of web service technology, geospatial web services can provide useful geospatial information to potential users in a better way than a traditional geographic information system (GIS). A geospatial Web service is a modular application designed to enable the discovery, access, and chaining of geospatial information and services across the web; such services are often both computation- and data-intensive, involving diverse sources of data and complex processing functions. With the proliferation of web services published over the internet, multiple web services may provide similar functionality but with different non-functional properties. Thus, Quality of Service (QoS) offers a metric to differentiate the services and their providers. In a quality-driven selection of web services, it is important to consider the non-functional properties of the web service so as to satisfy the constraints or requirements of the end users. The main intent of this paper is to build an automated, end-to-end, multi-agent-based solution that provides the best-fit web service to the service requester based on QoS.
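The core of quality-driven selection is ranking functionally equivalent services by their non-functional properties. A minimal sketch, with attributes, weights, and normalization bounds chosen purely for illustration (the paper's agents would negotiate and gather these values):

```python
# Hedged sketch of QoS-based service ranking (attributes/weights illustrative).
def score(service, weights):
    # Lower-is-better attributes (latency, cost) are inverted before weighting,
    # so every term contributes on a common "higher is better" scale.
    s = 0.0
    s += weights["availability"] * service["availability"]  # already in [0, 1]
    s += weights["latency"] * (1.0 - min(service["latency_ms"], 1000) / 1000)
    s += weights["cost"] * (1.0 - min(service["cost"], 10) / 10)
    return s

candidates = [
    {"name": "wms-a", "availability": 0.99, "latency_ms": 400, "cost": 2},
    {"name": "wms-b", "availability": 0.90, "latency_ms": 100, "cost": 5},
]
weights = {"availability": 0.5, "latency": 0.3, "cost": 0.2}  # user priorities
best = max(candidates, key=lambda c: score(c, weights))
print(best["name"])  # wms-a - high availability dominates under these weights
```

Different end users simply supply different weight vectors, which is how a QoS-aware broker tailors the "best fit" per request.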
The Design of a Polymorphous Cognitive Agent Architecture (PCAA)
2008-05-01
[Only table-of-figures fragments of this report survive: search agents traversing a document/cluster tree while depositing pheromones (SODAS), agents roaming a lattice of linked nodes depositing pheromones (a possible FPGA implementation), and integration allowing ACT-R learning to trickle down to swarming computations via pheromones.]
Recent advances in dendrimer-based nanovectors for tumor-targeted drug and gene delivery
Kesharwani, Prashant; Iyer, Arun K.
2015-01-01
Advances in the application of nanotechnology in medicine have given rise to multifunctional smart nanocarriers that can be engineered with tunable physicochemical characteristics to deliver one or more therapeutic agent(s) safely and selectively to cancer cells, including intracellular organelle-specific targeting. Dendrimers having properties resembling biomolecules, with well-defined 3D nanopolymeric architectures, are emerging as a highly attractive class of drug and gene delivery vector. The presence of numerous peripheral functional groups on hyperbranched dendrimers affords efficient conjugation of targeting ligands and biomarkers that can recognize and bind to receptors overexpressed on cancer cells for tumor-cell-specific delivery. The present review compiles the recent advances in dendrimer-mediated drug and gene delivery to tumors by passive and active targeting principles with illustrative examples. PMID:25555748
Szulc-Dabrowska, Lidia; Gregorczyk, Karolina P; Struzik, Justyna; Boratynska-Jasinska, Anna; Szczepanowska, Joanna; Wyzewski, Zbigniew; Toka, Felix N; Gierynska, Malgorzata; Ostrowska, Agnieszka; Niemialtowski, Marek G
2016-08-01
Ectromelia virus (ECTV, the causative agent of mousepox), which represents the same genus as variola virus (VARV, the agent responsible for smallpox in humans), has served for years as a model virus for studying mechanisms of poxvirus-induced disease. Despite increasing knowledge of the interaction between ECTV and its natural host, the mouse, surprisingly little is still known about the cell biology of ECTV infection. Because pathogen interaction with the cytoskeleton is a growing area of research in the virus-host cell interplay, the aim of the present study was to evaluate the consequences of ECTV infection on the cytoskeleton in a murine fibroblast cell line. The viral effect on the cytoskeleton was reflected by changes in cell migration and rearrangement of the architecture of tubulin, vimentin, and actin filaments. The virus-induced cytoskeletal rearrangements observed in these studies contributed to the efficient cell-to-cell spread of infection, which is an important feature of ECTV virulence. Additionally, during later stages of infection, L929 cells produced two main types of actin-based cellular protrusions: short (actin tails and "dendrites") and long (cytoplasmic corridors). Given the diversity of filopodial extensions induced by the virus, we suggest that ECTV represents a valuable new model for studying processes and pathways that regulate the formation of cytoskeleton-based cellular structures. © 2016 Wiley Periodicals, Inc.
Autonomic and Coevolutionary Sensor Networking
NASA Astrophysics Data System (ADS)
Boonma, Pruet; Suzuki, Junichi
Wireless sensor network (WSN) applications are often required to balance the tradeoffs among conflicting operational objectives (e.g., latency and power consumption) and operate at an optimal tradeoff. This chapter proposes and evaluates an architecture, called BiSNET/e, which allows WSN applications to overcome this issue. BiSNET/e is designed to support three major types of WSN applications: data collection, event detection, and hybrid applications. Each application is implemented as a decentralized group of agents, analogous to a bee colony (the application) consisting of bees (the agents). Agents collect sensor data or detect events (significant changes in sensor readings) on individual nodes, and carry sensor data to base stations. They perform these data collection and event detection functions by sensing their surrounding network conditions and adaptively invoking behaviors such as pheromone emission, reproduction, migration, swarming, and death. Each agent has its own behavior policy, expressed as a set of genes, which defines how it invokes its behaviors. BiSNET/e allows agents to evolve their behavior policies (genes) across generations and autonomously adapt their performance to given objectives. Simulation results demonstrate that, in all three types of applications, agents evolve to find optimal tradeoffs among conflicting objectives and adapt to dynamic network conditions such as traffic fluctuations and node failures/additions. Simulation results also illustrate that, in hybrid applications, data collection agents and event detection agents coevolve to augment each other's adaptability and performance.
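The gene-based policy evolution the abstract describes can be caricatured with a (1+1) hill climber: a parent policy's behavior weights are mutated, and selection keeps the offspring only if it improves a fitness objective. The behaviors, the objective, and the update scheme below are entirely schematic, not BiSNET/e code.

```python
# Schematic evolution of a behavior-policy "gene" vector (illustrative only).
import random

random.seed(1)
BEHAVIORS = ["migrate", "reproduce", "swarm"]

def mutate(genes, sigma=0.1):
    # Offspring perturb each behavior weight; weights stay non-negative.
    return {b: max(0.0, w + random.gauss(0, sigma)) for b, w in genes.items()}

def cost(genes):
    # Stand-in objective: pretend the ideal policy weights migration highest.
    target = {"migrate": 1.0, "reproduce": 0.3, "swarm": 0.5}
    return sum((genes[b] - target[b]) ** 2 for b in BEHAVIORS)

parent = {b: 0.5 for b in BEHAVIORS}  # initial behavior policy
for _ in range(200):                  # generations
    child = mutate(parent)
    if cost(child) < cost(parent):    # selection keeps the fitter policy
        parent = child

print(round(cost(parent), 3))  # never worse than the starting cost of 0.29
```

In the actual architecture the objective is multi-dimensional (latency, power, etc.) and selection happens implicitly through agent reproduction and death across many nodes, rather than in a single loop.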
Cyber Security Research Frameworks For Coevolutionary Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rush, George D.; Tauritz, Daniel Remy
Several architectures have been created for developing and testing systems used in network security, but most are meant to provide a platform for running cyber security experiments rather than automating experiment processes. In the first paper, we propose a framework termed Distributed Cyber Security Automation Framework for Experiments (DCAFE) that enables experiment automation and control in a distributed environment. Predictive analysis of adversaries is another thorny issue in cyber security. Game theory can be used to mathematically analyze adversary models, but its scalability limitations restrict its use. Computational game theory allows us to scale classical game theory to larger, more complex systems. In the second paper, we propose a framework termed Coevolutionary Agent-based Network Defense Lightweight Event System (CANDLES) that can coevolve attacker and defender agent strategies and capabilities and evaluate potential solutions with a custom network defense simulation. The third paper is a continuation of the CANDLES project in which we rewrote key parts of the framework. Attackers and defenders have been redesigned to evolve pure strategies, and a new network security simulation is devised that specifies network architecture and adds a temporal aspect. We also add a hill-climber algorithm to evaluate the search space and justify the use of a coevolutionary algorithm.
Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
Learning other agents' preferences in multiagent negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bui, H.H.; Kieronska, D.; Venkatesh, S.
In multiagent systems, an agent does not usually have complete information about the preferences and decision-making processes of other agents. This might prevent the agents from making coordinated choices, purely due to their ignorance of what others want. This paper describes the integration of a learning module into a communication-intensive negotiating agent architecture. The learning module gives the agents the ability to learn about other agents' preferences via past interactions. Over time, the agents can incrementally update their models of other agents' preferences and use them to make better coordinated decisions. Combining communication and learning, as two complementary knowledge acquisition methods, helps to reduce the amount of communication needed on average, and is justified in situations where communication is computationally costly or simply not desirable (e.g., to preserve individual privacy).
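One simple way to realize the incremental preference modeling this abstract describes is to treat each observed choice as evidence: features of the option the other agent chose gain weight, features of rejected options lose weight, and the model predicts future choices by scoring options against the learned weights. The feature names and update rule below are illustrative, not the paper's method.

```python
# Schematic incremental model of another agent's preferences (illustrative).
def update_model(weights, chosen, rejected, lr=0.5):
    # Reinforce features of the chosen option, discount those of rejected ones.
    for f in chosen:
        weights[f] = weights.get(f, 0.0) + lr
    for option in rejected:
        for f in option:
            weights[f] = weights.get(f, 0.0) - lr / len(rejected)
    return weights

def predict(weights, options):
    # Predict the option the other agent is most likely to pick.
    return max(options, key=lambda o: sum(weights.get(f, 0.0) for f in o))

w = {}
# Observed interaction: the agent chose a cheap, slow option over a fast, costly one.
update_model(w, chosen={"cheap", "slow"}, rejected=[{"fast", "costly"}])
guess = predict(w, [{"costly"}, {"cheap"}])
print(guess)  # {'cheap'} - 'cheap' now carries positive learned weight
```

As more interactions are observed, the weights sharpen, which is exactly the effect that lets a negotiating agent replace some explicit communication with prediction.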
Waste Management Using Request-Based Virtual Organizations
NASA Astrophysics Data System (ADS)
Katriou, Stamatia Ann; Fragidis, Garyfallos; Ignatiadis, Ioannis; Tolias, Evangelos; Koumpis, Adamantios
Waste management is on top of the political agenda globally as a high priority environmental issue, with billions spent on it each year. This paper proposes an approach for the disposal, transportation, recycling and reuse of waste. This approach incorporates the notion of Request Based Virtual Organizations (RBVOs) using a Service Oriented Architecture (SOA) and an ontology that serves the definition of waste management requirements. The populated ontology is utilized by a Multi-Agent System which performs negotiations and forms RBVOs. The proposed approach could be used by governments and companies searching for a means to perform such activities in an effective and efficient manner.
The autonomous sciencecraft constellations
NASA Technical Reports Server (NTRS)
Sherwood, R. L.; Chien, S.; Castano, R.; Rabideau, G.
2003-01-01
The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting. In this paper we discuss how these AI technologies are synergistically integrated in a hybrid multi-layer control architecture to enable a virtual spacecraft science agent. Demonstration of these capabilities in a flight environment will open up tremendous new opportunities in planetary science, space physics, and earth science that would be unreachable without this technology.
Integrating deliberative planning in a robot architecture
NASA Technical Reports Server (NTRS)
Elsaesser, Chris; Slack, Marc G.
1994-01-01
The role of planning and reactive control in an architecture for autonomous agents is discussed. The postulated architecture separates the general robot intelligence problem into three interacting pieces: (1) robot reactive skills, i.e., grasping, object tracking, etc.; (2) a sequencing capability to differentially activate the reactive skills; and (3) a deliberative planning capability to reason in depth about goals, preconditions, resources, and timing constraints. Within the sequencing module, caching techniques are used for handling routine activities. The planning system then builds on these cached solutions to routine tasks to build larger-grained primitives. This eliminates large numbers of essentially linear planning problems. The architecture will be used in the future to incorporate into robots the cognitive capabilities normally associated with intelligent behavior.
Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics
NASA Astrophysics Data System (ADS)
Pierpaoli, Pietro
Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on the development of distributed techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithmic complexity becomes mandatory from the very early stages of the project design. Solutions that lower power consumption and cost while increasing reliability are thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effects of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles, and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for the control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as a source of information in highly populated environments.
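A distance-based aggregation protocol of the kind described above can be illustrated with a standard attraction-repulsion law (the specific control law below is a common textbook form, not necessarily the one from the dissertation): each agent is attracted to neighbors beyond a desired distance `d` and repelled inside it, which yields cohesion while avoiding inter-agent collisions, using only relative positions.

```python
import math

def step(positions, d=1.0, gain=0.1):
    """One synchronous update of all agents' 2D positions.

    Each agent moves along the sum of neighbor directions, weighted by
    (dist - d): attractive beyond the desired distance d, repulsive inside.
    """
    new = []
    for i, (xi, yi) in enumerate(positions):
        vx = vy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist = math.hypot(dx, dy)
            if dist == 0:
                continue
            w = gain * (dist - d) / dist   # sign flips at dist == d
            vx += w * dx
            vy += w * dy
        new.append((xi + vx, yi + vy))
    return new
```

Iterating `step` from a spread-out configuration drives all pairwise distances toward `d`, the low-complexity cohesion behavior the protocol aims for; a leader-follower layer would add a bias term toward the leader's position.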
Crowd Simulation Incorporating Agent Psychological Models, Roles and Communication
2005-01-01
We describe a new architecture to integrate a psychological model into a crowd simulation system in order to obtain believable emergent behaviors. The psychological model builds on a system (PMFserv) that implements human behavior models from a range of ability, stress, emotion, decision-theoretic and motivation sources. Keywords: autonomous agents, human behavior models, culture and emotions.
NINJA: a noninvasive framework for internal computer security hardening
NASA Astrophysics Data System (ADS)
Allen, Thomas G.; Thomson, Steve
2004-07-01
Vulnerabilities are a growing problem in both the commercial and government sectors. The latest vulnerability information compiled by CERT/CC for the year ending Dec. 31, 2002 reported 4,129 vulnerabilities, a 100% increase over 2001 [1] (the 2003 report had not been published at the time of this writing). It doesn't take long to realize that the growth rate of vulnerabilities greatly exceeds the rate at which they can be fixed, and that our nation's networks are growing less secure at an accelerating rate. As organizations become aware of vulnerabilities they may initiate efforts to resolve them, but quickly realize that the size of the remediation project is greater than their current resources can handle. In addition, many IT tools that suggest solutions in reality address only some of the vulnerabilities, leaving the organization unsecured and back to square one in its search for solutions. This paper proposes an auditing framework called NINJA (Network Investigation Notification Joint Architecture) for noninvasive daily scanning/auditing based on common security vulnerabilities that repeatedly occur in a network environment. The framework is used to perform regular audits in order to harden an organization's security infrastructure. It is based on the results obtained by the Network Security Assessment Team (NSAT), which emulates adversarial computer network operations for US Air Force organizations. Auditing is the most time-consuming factor in securing an organization's network infrastructure. The framework discussed in this paper uses existing scripting technologies to maintain a security-hardened system at a defined level of performance as specified by the computer security audit team. Mobile agents, which were under development at the time of this writing, are used at a minimum to improve the noninvasiveness of the scans.
In general, noninvasive scans performed daily within an adequate framework reduce the security workload and improve the timeliness of remediation, as verified with the NINJA framework. A vulnerability assessment/auditing architecture based on mobile agent technology is proposed and examined at the end of the article as an enhancement to the current NINJA architecture.
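The core of a daily, noninvasive audit loop like NINJA's is a diff of today's scan against a hardened baseline. The sketch below uses a hypothetical data model (hosts mapped to sets of observed services); the actual framework builds on existing scripting technologies and is not published as code.

```python
def audit(baseline, scan_results):
    """Compare today's noninvasive scan against the hardened baseline.

    baseline:     {host: set of approved open services}
    scan_results: {host: set of observed open services}
    Returns only the deviations that need remediation, so the daily
    security workload is proportional to what changed, not to the network.
    """
    findings = {}
    for host, observed in scan_results.items():
        approved = baseline.get(host, set())
        unexpected = observed - approved
        if unexpected:
            findings[host] = sorted(unexpected)
    return findings
```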
Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture.
Layher, Georg; Brosch, Tobias; Neumann, Heiko
2017-01-01
Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples include collision avoidance, navigation and route planning, active sensing of the environment, and interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture for analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected unsupervised utilizing a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per-class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014). After the mapping, the trained network achieves real-time capabilities for processing input streams, classifying input images at about 1,000 frames per second while the computational stages consume only about 70 mW of energy (without spike transduction). Particularly regarding mobile robotic systems, a low energy profile might be crucial in a variety of application scenarios. Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. 
The results demonstrate that (I) the presented approach is on par with other key-pose-based methods described in the literature, which select key pose frames by optimizing classification accuracy; (II) compared to training on the full set of frames, representations trained on key pose frames result in a higher confidence in class assignments; and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation.
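The unsupervised key pose selection criterion described above can be sketched as follows. The exact weighting of motion energy extrema against silhouette extendedness in the paper may differ; this simplified version just ranks the frames at local motion-energy extrema by their combined horizontal and vertical extent.

```python
def local_extrema(values):
    """Indices of strict local minima and maxima in a 1-D sequence."""
    idx = []
    for i in range(1, len(values) - 1):
        if (values[i] > values[i - 1] and values[i] > values[i + 1]) or \
           (values[i] < values[i - 1] and values[i] < values[i + 1]):
            idx.append(i)
    return idx


def select_key_poses(motion_energy, extendedness):
    """Rank candidate key pose frames.

    motion_energy[i]: scalar motion energy of frame i
    extendedness[i]:  (horizontal, vertical) extent of the body shape
    Returns frame indices, best candidate first.
    """
    scored = []
    for i in local_extrema(motion_energy):
        h, v = extendedness[i]
        scored.append((h + v, i))      # simplistic combination of extents
    return [i for _, i in sorted(scored, reverse=True)]
```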
Welch, M C; Kwan, P W; Sajeev, A S M
2014-10-01
Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas, including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed through which the effectiveness of control and mitigation strategies, and their associated economic impact on livestock industries, can be studied. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
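The per-agent stochastic lifecycle step that the paper implements as data-parallel CUDA kernels can be shown schematically on the CPU. All stage names, survival rates and dispersal parameters below are made-up placeholders, not the paper's biological inputs; the structure (stochastic mortality, stage maturation, dispersal of gravid females) is what carries over.

```python
import random

STAGES = ["egg", "larva", "pupa", "adult"]
SURVIVAL = {"egg": 0.5, "larva": 0.6, "pupa": 0.7, "adult": 0.9}  # placeholders

def step_agent(agent, rng):
    """Advance one fly agent by one time step; returns None on death.

    In the GPGPU version this function body is what each CUDA thread
    executes for its agent, so millions of agents advance in parallel.
    """
    if rng.random() > SURVIVAL[agent["stage"]]:
        return None                              # stochastic mortality
    i = STAGES.index(agent["stage"])
    if i < len(STAGES) - 1:
        agent["stage"] = STAGES[i + 1]           # mature to next stage
    elif agent.get("gravid"):
        agent["x"] += rng.gauss(0, 1.0)          # dispersal of gravid females
        agent["y"] += rng.gauss(0, 1.0)
    return agent
```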
2012-01-01
…defined, to CoJACK (Ritter, Reifers, Klein, & Schoelles, 2007), based on task appraisal theory (e.g., Cannon, 1932; Lazarus & Folkman, 1984; Selye…). References cited include: Lazarus, R. S., & Folkman, S. (1984). Stress, appraisal and coping. New York: Springer Publishing; Lovett, M. C., Daily, L… The reported results are described as promising. [Figure residue: tanks destroyed per agent type — Java, JACK Default, CoJack, CoJack Caffeine, CoJack Challenged, CoJack Threatened.]
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions such as movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
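The dynamic swapping between the three control modes mentioned above can be sketched as a small dispatcher. The mode names follow the abstract; the safety rule and command interface are invented for illustration and are not HARPIC's actual logic.

```python
class ModeController:
    """Dispatch commands according to the current autonomy mode."""
    MODES = ("manual", "safeguarded", "behavior")

    def __init__(self):
        self.mode = "manual"

    def request_mode(self, mode):
        """Operator (or a supervisor module) swaps the mode at runtime."""
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def command(self, operator_cmd, obstacle_near, behavior_cmd):
        if self.mode == "manual":
            return operator_cmd               # pure teleoperation
        if self.mode == "safeguarded":
            # pass operator commands through but veto unsafe ones
            return "stop" if obstacle_near else operator_cmd
        return behavior_cmd                   # behavior-based autonomy
```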
Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Rapid changes of market demands and pressures of competition require manufacturers to maintain highly flexible manufacturing systems that cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper to generate suitable process plans for the target products in real time and dynamically, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in the previous paper, and suitable process plans are searched for and generated to cope with both dynamic changes of the product specifications and disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols in order to generate suitable process plans in the dynamic manufacturing environment.
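A search over a process plan network of the kind described above can be sketched with a shortest-path algorithm (Dijkstra here; the paper's heuristic search and negotiation protocol are richer). Edges on unavailable machines are skipped, so re-running the search after a resource disturbance yields an adapted plan. The network, process and machine names are hypothetical.

```python
import heapq

def cheapest_plan(network, start, goal, unavailable=frozenset()):
    """Find the lowest-cost process sequence through a process plan network.

    network: {state: [(next_state, process, machine, cost), ...]}
    Edges using machines in `unavailable` are skipped, so plans adapt
    dynamically to disturbances of the manufacturing resources.
    """
    pq = [(0, start, [])]
    seen = set()
    while pq:
        cost, state, plan = heapq.heappop(pq)
        if state == goal:
            return cost, plan
        if state in seen:
            continue
        seen.add(state)
        for nxt, proc, machine, c in network.get(state, []):
            if machine in unavailable:
                continue
            heapq.heappush(pq, (cost + c, nxt, plan + [proc]))
    return None
```

Re-invoking `cheapest_plan` with an updated `unavailable` set is the incremental re-planning step: when a machine goes down mid-production, the remaining states still reach the goal through the alternative processes.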
Behavioral networks as a model for intelligent agents
NASA Technical Reports Server (NTRS)
Sliwa, Nancy E.
1990-01-01
On-going work at NASA Langley Research Center in the development and demonstration of a paradigm called behavioral networks as an architecture for intelligent agents is described. This work focuses on the need to identify a methodology for smoothly integrating the characteristics of low-level robotic behavior, including actuation and sensing, with intelligent activities such as planning, scheduling, and learning. This work assumes that all these needs can be met within a single methodology, and attempts to formalize this methodology in a connectionist architecture called behavioral networks. Behavioral networks are networks of task processes arranged in a task decomposition hierarchy. These processes are connected by both command/feedback data flow, and by the forward and reverse propagation of weights which measure the dynamic utility of actions and beliefs.
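A behavioral-network node as described above can be sketched minimally: task processes in a decomposition hierarchy, linked by command/feedback data flow plus forward and reverse propagation of utility weights. The structure and update rule below are my own illustration of that idea, not the Langley formulation.

```python
class TaskNode:
    """One task process in the decomposition hierarchy."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.weight = 1.0   # dynamic utility of this action/belief

    def command(self, cmd):
        """Forward pass: decompose the command, propagating weights down."""
        for child in self.children:
            child.weight *= self.weight      # forward weight propagation
            child.command(cmd)

    def feedback(self, utility):
        """Reverse pass: fold observed utility back into this node's weight."""
        self.weight = 0.5 * self.weight + 0.5 * utility   # illustrative blend
        return self.weight
```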
Organic solar cells with graded absorber layers processed from nanoparticle dispersions.
Gärtner, Stefan; Reich, Stefan; Bruns, Michael; Czolk, Jens; Colsmann, Alexander
2016-03-28
The fabrication of organic solar cells with advanced multi-layer architectures from solution is often limited by the choice of solvents, since most organic semiconductors dissolve in the same aromatic agents. In this work, we investigate multi-pass deposition of organic semiconductors from eco-friendly ethanol dispersion. Once applied, the nanoparticles are insoluble in the deposition agent, allowing for the application of further nanoparticulate layers and hence for building poly(3-hexylthiophene-2,5-diyl):indene-C60 bisadduct absorber layers with vertically graded polymer and conversely graded fullerene concentration. Upon thermal annealing, we observe some degree of polymer/fullerene interdiffusion by means of X-ray photoelectron spectroscopy and Kelvin probe force microscopy. Replacing the common bulk heterojunction by such a graded photo-active layer yields an enhanced fill factor of the solar cell due to improved charge carrier extraction, and consequently an overall power conversion efficiency beyond 4%. Wet processing of such advanced device architectures paves the way for versatile, eco-friendly and industrially feasible fabrication of organic solar cells.
Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola
2004-01-01
One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying and operating robotic systems that will reach planets, planetary moons, asteroids and comets searching for water, organic building blocks and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center, ranging from basic research on algorithms to deployed mission support systems. We start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We also describe how the issue of plan robustness at execution can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption over all possible executions of a flexible plan.
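The idea of exact resource bounds over all executions of a flexible plan can be illustrated with a far simpler calculation than the paper's propagation algorithms: if plan events carry resource deltas and their ordering is completely unconstrained, the exact reachable envelope is obtained by front-loading all production (upper bound) or all consumption (lower bound). The real algorithms handle temporal constraints between events; this degenerate case only conveys the notion of an envelope.

```python
def resource_envelope(deltas, initial=0):
    """Exact min/max resource level reachable under any event ordering.

    deltas: resource change of each plan event (sign = produce/consume).
    With no ordering constraints, the worst cases are: execute every
    consumer first (lower bound) or every producer first (upper bound).
    """
    upper = initial + sum(d for d in deltas if d > 0)
    lower = initial + sum(d for d in deltas if d < 0)
    return lower, upper
```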
Synthesis of Mn-doped ZnS architectures in ternary solution and their optical properties
NASA Astrophysics Data System (ADS)
Wang, Xinjuan; Zhang, Qinglin; Zou, Bingsuo; Lei, Aihua; Ren, Pinyun
2011-10-01
Mn-doped ZnS sea urchin-like architectures were fabricated by a one-pot solvothermal route in a ternary solution made of ethylenediamine, ethanolamine and distilled water. The as-prepared products were characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM) and photoluminescence (PL) spectra. It was demonstrated that the as-prepared sea urchin-like architectures, with diameters of 0.5-1.5 μm, were composed of nanorods possessing a wurtzite structure. The preferred growth orientation of the nanorods was found to be the [0 0 2] direction. The PL spectra of the Mn-doped ZnS sea urchin-like architectures show a strong orange emission at 587 nm, indicating the successful doping of Mn2+ ions into the ZnS host. Ethanolamine played the role of oriented-assembly agent in the formation of the sea urchin-like architectures. A possible growth mechanism was proposed to explain their formation.
NASA Technical Reports Server (NTRS)
Clancey, William J.; Lowry, Michael R.; Nado, Robert Allen; Sierhuis, Maarten
2011-01-01
We analyzed a series of ten systematically developed surface exploration systems that integrated a variety of hardware and software components. Design, development, and testing data suggest that incremental buildup of an exploration system for long-duration capabilities is facilitated by an open architecture with appropriate-level APIs, specifically designed to facilitate integration of new components. This improves software productivity by reducing changes required for reconfiguring an existing system.
Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei
2016-01-01
We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement of the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and theoretical inspiration for the next generation of neuro-prostheses on the basis of the bi-directional exchange of information within hierarchical neural networks.
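A simple binary coding/decoding bridge of the kind described above can be sketched as two pure functions. The channel layout, bearing thresholds and firing-rate threshold below are invented for illustration; the actual scheme maps camera events to stimulation electrodes and decodes culture activity into wheel commands.

```python
def encode_target(bearing_deg):
    """Camera -> stimulation: choose an electrode group by target bearing."""
    if bearing_deg < -10:
        return "stim_left"
    if bearing_deg > 10:
        return "stim_right"
    return "stim_center"


def decode_firing(left_rate_hz, right_rate_hz, threshold=5.0):
    """Neural activity -> wheel command, by thresholding each recording side."""
    left = left_rate_hz > threshold
    right = right_rate_hz > threshold
    if left and not right:
        return "turn_left"
    if right and not left:
        return "turn_right"
    return "forward" if (left and right) else "stop"
```

The point of such a simple scheme is that the biological controller only ever sees and emits binary channel events, so the artificial side stays agnostic to the culture's internal dynamics.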
Using web technology and Java mobile software agents to manage outside referrals.
Murphy, S. N.; Ng, T.; Sittig, D. F.; Barnett, G. O.
1998-01-01
A prototype, web-based referral application was created with the objective of providing outside primary care providers (PCPs) the means to refer patients to the Massachusetts General Hospital and the Brigham and Women's Hospital. The application was designed to achieve the two primary objectives of providing the consultant with enough data to make decisions even at the initial visit, and providing the PCP with a prompt response from the consultant. The system uses a web browser/server to initiate the referral and Java mobile software agents to support the workflow of the referral. This combination provides a light client implementation that can run on the wide variety of hardware and software platforms found in the office of the PCP. The implementation can guarantee a high degree of security for the PCP's computer. Agents can be adapted to support the wide variety of data types that may be used in referral transactions, including reports with complex presentation needs and scanned (faxed) images. Agents can be delivered to the PCP as running applications that perform ongoing queries and alerts at the office of the PCP. Finally, the agent architecture is designed to scale in a natural and seamless manner for unforeseen future needs. PMID:9929190
NASA Astrophysics Data System (ADS)
Badea, C. T.; Samei, E.; Ghaghada, K.; Saunders, R.; Yuan, H.; Qi, Y.; Hedlund, L. W.; Mukundan, S.
2008-03-01
Imaging tumor angiogenesis in small animals is extremely challenging due to the size of the tumor vessels. Consequently, both dedicated small animal imaging systems and specialized intravascular contrast agents are required. The goal of this study was to investigate the use of a liposomal contrast agent for high-resolution micro-CT imaging of breast tumors in small animals. A liposomal blood pool agent encapsulating iodine at a concentration of 65.5 mg/ml was used with a Duke Center for In Vivo Microscopy (CIVM) prototype micro-computed tomography (micro-CT) system to image the R3230AC mammary carcinoma implanted in rats. The animals were injected with equivalent volume doses (0.02 ml/kg) of contrast agent. Micro-CT with the liposomal blood pool contrast agent ensured a signal difference between blood and muscle higher than 450 HU, allowing visualization of the tumors' 3D vascular architecture in exquisite detail at 100-micron resolution. The micro-CT data correlated well with the histological examination of tumor tissue. We also studied the ability to detect vascular enhancement with limited-angle reconstruction, i.e., tomosynthesis. Tumor volumes and their regional vascular percentages were estimated. This imaging approach could be used to better understand tumor angiogenesis and be the basis for evaluating anti-angiogenic therapies.
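The regional vascular-percentage estimate mentioned above reduces, in its simplest form, to counting enhanced voxels: with the blood-pool agent providing a blood-muscle contrast above 450 HU, voxels over a threshold are classified as vessel. The threshold and ROI model below are illustrative, not the study's actual segmentation pipeline.

```python
def vascular_fraction(hu_values, threshold=450):
    """Fraction of voxels in a tumor ROI classified as enhanced vessels.

    hu_values: iterable of Hounsfield-unit voxel values inside the ROI.
    threshold: HU cutoff separating contrast-enhanced vessel from tissue
               (450 HU chosen here from the reported blood-muscle contrast).
    """
    values = list(hu_values)
    vessel = sum(1 for v in values if v >= threshold)
    return vessel / len(values)
```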
Robust Architectures for Complex Multi-Agent Heterogeneous Systems
2014-07-23
…establish the tradeoff between the control performance and the QoS of the communications network. We also derived the performance bound on the difference… Work accomplished within this time period leveraged prior accomplishments in the area of networked multi-agent systems. Past work (prior to 2011) includes… distributed control of uncertain networked systems [3]. Additionally, a preliminary collision avoidance algorithm has been developed for a team of…
No-hardware-signature cybersecurity-crypto-module: a resilient cyber defense agent
NASA Astrophysics Data System (ADS)
Zaghloul, A. R. M.; Zaghloul, Y. A.
2014-06-01
We present an optical cybersecurity-crypto-module as a resilient cyber defense agent. It has no hardware signature since it is bitstream-reconfigurable: a single hardware architecture functions as any selected device among all possible ones with the same number of inputs. For a two-input digital device, a 4-digit bitstream of 0s and 1s determines which device, of a total of 16 devices, the hardware performs as. Accordingly, the hardware itself is not physically reconfigured, but its performance is. Such a defense agent allows an attack to take place while rendering it harmless. On the other hand, if the system is already infected with malware sending out information, the defense agent allows the information to go out, rendering it meaningless. The hardware architecture is immune to side attacks, since such an attack would reveal information on the attack itself and not on the hardware. This cyber defense agent can be used to secure a point-to-point link, a point-to-multipoint link, a whole network, and/or a single entity in cyberspace, thereby ensuring trust between cyber resources. It can provide secure communication in an insecure network. We provide the hardware design and explain how it works. Scalability of the design is briefly discussed. (Protected by United States Patents No.: US 8,004,734; US 8,325,404; and other National Patents worldwide.)
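The bitstream reconfiguration described above can be modeled exactly as a truth-table lookup: for a two-input device, the 4-digit bitstream enumerates the output for each of the four input pairs, so one fixed architecture behaves as any of the 16 possible two-input gates without physical reconfiguration. The software model below is mine; the paper's module is optical hardware.

```python
def make_device(bitstream):
    """Build a two-input device from a 4-bit configuration bitstream.

    bitstream: 4-character string of '0'/'1' giving the output for inputs
    (0,0), (0,1), (1,0), (1,1) in that order. 2**4 = 16 possible devices.
    """
    assert len(bitstream) == 4 and set(bitstream) <= {"0", "1"}
    table = [int(b) for b in bitstream]
    return lambda a, b: table[(a << 1) | b]

# The same "architecture" (make_device) performs as different gates
# depending only on the loaded bitstream:
AND = make_device("0001")
XOR = make_device("0110")
```

Reloading the bitstream swaps the device's function with no hardware change, which is why no hardware signature is exposed to an attacker probing the module.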
Model learning for robot control: a survey.
Nguyen-Tuong, Duy; Peters, Jan
2011-11-01
Models are among the most essential tools in robotics, for example kinematics and dynamics models of the robot's own body and of controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control at both the kinematic and the dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods, and from this discussion we deduce future directions for real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
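A minimal instance of model learning for control in the sense surveyed above is fitting a forward model from observed transitions and using it for prediction. The scalar linear model x' = a·x + b·u below is only a sketch; real robot models are nonlinear, high-dimensional and often learned online.

```python
def fit_forward_model(states, controls, next_states):
    """Least-squares fit of the scalar forward model x' = a*x + b*u.

    Solves the 2x2 normal equations directly:
        a*sxx + b*sxu = sxy
        a*sxu + b*suu = suy
    """
    sxx = sum(x * x for x in states)
    sxu = sum(x * u for x, u in zip(states, controls))
    suu = sum(u * u for u in controls)
    sxy = sum(x * y for x, y in zip(states, next_states))
    suy = sum(u * y for u, y in zip(controls, next_states))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b
```

Once fitted, the model supports one-step prediction (`a * x + b * u`), which a model-based controller can invert or roll out to choose actions.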
Spherical Nucleic Acids as Intracellular Agents for Nucleic Acid Based Therapeutics
NASA Astrophysics Data System (ADS)
Hao, Liangliang
Recent functional discoveries on the noncoding sequences of human genome and transcriptome could lead to revolutionary treatment modalities because the noncoding RNAs (ncRNAs) can be applied as therapeutic agents to manipulate disease-causing genes. To date few nucleic acid-based therapeutics have been translated into the clinic due to challenges in the delivery of the oligonucleotide agents in an effective, cell specific, and non-toxic fashion. Unmodified oligonucleotide agents are destroyed rapidly in biological fluids by enzymatic degradation and have difficulty crossing the plasma membrane without the aid of transfection reagents, which often cause inflammatory, cytotoxic, or immunogenic side effects. Spherical nucleic acids (SNAs), nanoparticles consisting of densely organized and highly oriented oligonucleotides, pose one possible solution to circumventing these problems in both the antisense and RNA interference (RNAi) pathways. The unique three dimensional architecture of SNAs protects the bioactive oligonucleotides from unspecific degradation during delivery and supports their targeting of class A scavenger receptors and endocytosis via a lipid-raft-dependent, caveolae-mediated pathway. Owing to their unique structure, SNAs are able to cross cell membranes and regulate target genes expression as a single entity, without triggering the cellular innate immune response. Herein, my thesis has focused on understanding the interactions between SNAs and cellular components and developing SNA-based nanostructures to improve therapeutic capabilities. Specifically, I developed a novel SNA-based, nanoscale agent for delivery of therapeutic oligonucleotides to manipulate microRNAs (miRNAs), the endogenous post-transcriptional gene regulators. I investigated the role of SNAs involving miRNAs in anti-cancer or anti-inflammation responses in cells and in in vivo murine disease models via systemic injection. 
Furthermore, I explored different strategies for constructing novel SNA-based nanomaterials with desired properties and for applying targeting moieties to the SNA platform to achieve cell-type-specific gene regulation. Given its flexibility, the SNA platform can potentially be applied to many genetic disorders through tailored target specificities.
NASA Astrophysics Data System (ADS)
Delgado, F. J.; Martinez, R.; Finat, J.; Martinez, J.; Puche, J. C.; Finat, F. J.
2013-07-01
In this work we develop a multiply interconnected system involving objects, agents, and the interactions between them, built on ICT applied to open repositories, user communities, and web services. Our approach is applied to Architectural Cultural Heritage Environments (ACHE). It includes components for digital accessibility (to augmented ACHE repositories), content management (ontologies for the semantic web), semi-automatic recognition (to ease the reuse of materials), and serious video games (for interaction in urban environments). Their combination supports local and remote virtual tourism (including tools for low-level real-time rendering on portable devices), mobile smart interactions (with special regard to monitored environments), and cultural-heritage games (as extended web services). Our main contributions to augmented-reality models on GIS applied to architectural environments concern interactive support performed directly on digital files, which gives access to cultural-heritage content referenced to GIS of urban districts (involving facades and historical or pre-industrial buildings) and/or cultural-heritage repositories in a ludic and transversal way, helping users acquire cognitive, medial, and social abilities in collaborative environments.
Martinez, R; Rozenblit, J; Cook, J F; Chacko, A K; Timboe, H L
1999-05-01
In the Department of Defense (DoD), the US Army Medical Command is now embarking on an extremely exciting new project--creating a virtual radiology environment (VRE) for the management of radiology examinations. The business of radiology in the military is therefore being reengineered on several fronts by the VRE Project. In the VRE Project, a set of intelligent agent algorithms determines where examinations are to be routed for reading, based on a knowledge base of the entire VRE. The set of algorithms, called the Meta-Manager, is hierarchical and uses object-based communications between medical treatment facilities (MTFs) and medical centers that have digital imaging network picture archiving and communications systems (DIN-PACS) networks. Communication is based on the use of common object request broker architecture (CORBA) objects and services to send patient demographics and examination images from DIN-PACS networks in the MTFs to the DIN-PACS networks at the medical centers for diagnosis. The Meta-Manager is also responsible for updating the diagnosis at the originating MTF. CORBA services are used to perform secure message communications between DIN-PACS nodes in the VRE network. The Meta-Manager has a fail-safe architecture that allows the master Meta-Manager function to float to regional Meta-Manager sites in case of server failure. A prototype of the CORBA-based Meta-Manager is being developed by the University of Arizona's Computer Engineering Research Laboratory using the Unified Modeling Language (UML) as a design tool. The prototype will implement the main functions described in the Meta-Manager design specification. The results of this project are expected to reengineer the process of radiology in the military and have extensions to commercial radiology environments.
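The abstract's two central mechanisms, knowledge-base-driven routing of examinations and the fail-safe "floating master" behavior, can be sketched roughly as follows. This is an illustrative sketch only: the site fields, function names, and selection rule (least-loaded qualified site) are invented, not the actual VRE/DIN-PACS interfaces.

```python
# Hypothetical sketch of the Meta-Manager ideas described above.
# Site records and selection criteria are invented for illustration.

def route_exam(exam_modality, sites):
    """Route an exam to the least-loaded live site qualified for its modality."""
    qualified = [s for s in sites if exam_modality in s["modalities"] and s["up"]]
    if not qualified:
        raise LookupError("no qualified reading site available")
    return min(qualified, key=lambda s: s["load"])

def elect_master(managers):
    """Fail-safe behavior: the master role floats to the first live regional manager."""
    for m in managers:
        if m["up"]:
            return m["name"]
    raise RuntimeError("no live Meta-Manager")
```

In a CORBA deployment, the site records here would instead be object references obtained through the naming service, but the routing decision itself stays this simple.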
1993-11-01
Eliezer N. Solomon and Steve Sedrel, Westinghouse Electronic Systems Group, P.O. Box 746, MS 432, Baltimore, Maryland 21203-0746, USA. [OCR-damaged excerpt; the recoverable fragments reference a subset of the Joint Integrated Avionics Working Group (JIAWG) and a NewAgentCollection with four parameters (Acceptor, of type Task...). Published November 1993 by the Advisory Group for Aerospace Research & Development (AGARD), 7 Rue Ancelle, 92200.]
Mellot, Gaëlle; Beaunier, Patricia; Guigner, Jean-Michel; Bouteiller, Laurent; Rieger, Jutta; Stoffelbach, François
2018-06-20
The influence of the macromolecular reversible addition-fragmentation chain transfer (macro-RAFT) agent architecture on the morphology of the self-assemblies obtained by aqueous RAFT dispersion polymerization in polymerization-induced self-assembly (PISA) is studied by comparing amphiphilic AB diblock, (AB)2 triblock, and triarm star-shaped (AB)3 copolymers, constituted of N,N-dimethylacrylamide (DMAc = A) and diacetone acrylamide (DAAm = B). Symmetrical triarm (AB)3 copolymers could be synthesized for the first time in a PISA process. Spheres and higher-order morphologies, such as worms or vesicles, could be obtained for all types of architectures, and the parameters that determine their formation have been studied. In particular, we found that the total DPn of the PDMAc and PDAAm segments (i.e., the same overall molar mass) at the same Mn(PDMAc)/Mn(PDAAm) ratio, rather than the individual length of the arms, determined the morphologies for the linear (AB)2 and star-shaped (AB)3 copolymers obtained using the bi- and trifunctional macro-RAFT agents.
76 FR 67762 - Notice of Intent to Grant Exclusive License
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-02
... Environment For The Brahms Multiagent Language," ARC-16160-1B, entitled "Mobile Agents Architecture," ARC... business at 865 Wisconsin Street, San Francisco, CA 94107. The copyright in the software and documentation...
The Unified Behavior Framework for the Simulation of Autonomous Agents
2015-03-01
Since the 1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. The development of autonomy has... room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive...
ICPL: Intelligent Cooperative Planning and Learning for Multi-agent Systems
2012-02-29
The objective was to develop a new planning approach for teams of multiple UAVs that tightly integrates learning and cooperative control algorithms at... algorithms at multiple levels of the planning architecture. The research results enabled a team of mobile agents to learn to adapt and react to uncertainty in... expressive representation that incorporates feature conjunctions. Our algorithm is simple to implement, fast to execute, and can be combined with any
Energy Optimization Using a Case-Based Reasoning Strategy
González-Briones, Alfonso; Prieto, Javier; De La Prieta, Fernando; Herrera-Viedma, Enrique; Corchado, Juan M
2018-01-01
At present, the domotization of homes and public buildings is becoming increasingly popular. Domotization is most commonly applied to the field of energy management, since it makes it possible to manage the consumption of the devices connected to the electric network, the way in which users interact with these devices, and other external factors that influence consumption. In buildings, Heating, Ventilation and Air Conditioning (HVAC) systems have the highest consumption rates. The systems proposed so far have not succeeded in optimizing the energy consumption associated with an HVAC system because they do not monitor all the variables involved in electricity consumption. For this reason, this article presents an agent approach that benefits from the advantages provided by a multi-agent system (MAS) architecture deployed in a Cloud environment with a wireless sensor network (WSN) in order to achieve energy savings. The agents of the MAS learn social behavior thanks to the collection of data and the use of an artificial neural network (ANN). The proposed system has been assessed in an office building, achieving average energy savings of 41% in the experimental-group offices. PMID:29543729
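The decision loop this abstract describes, a learned consumption model informing an agent's HVAC control decision from sensor readings, can be sketched minimally. Everything here is illustrative: a trivial linear model stands in for the paper's ANN, and the function names, weights, and policy rules are invented assumptions.

```python
# Minimal sketch of an HVAC agent consulting a learned consumption model.
# A real system would use a trained ANN; this linear stand-in is illustrative.

def predict_kwh(occupancy, outdoor_temp, w=(0.5, 0.2), bias=1.0):
    """Stand-in for the learned component: predicted hourly consumption."""
    return bias + w[0] * occupancy + w[1] * abs(outdoor_temp - 21.0)

def hvac_action(occupancy, outdoor_temp, budget_kwh):
    """Agent policy: condition the room only when occupied and within budget."""
    if occupancy == 0:
        return "off"
    return "on" if predict_kwh(occupancy, outdoor_temp) <= budget_kwh else "eco"
```

In the paper's architecture, the occupancy and temperature inputs would arrive from the WSN and the policy would run as one agent of the cloud-deployed MAS.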
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
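The pattern described above, monitoring agents publishing metrics that a higher-level service turns into automated decisions, can be illustrated with a toy scheduler. This is not MonALISA's actual API; the record fields and scheduling rule (least-loaded node with enough free storage) are invented for illustration.

```python
# Illustrative sketch: agents publish node metrics; a higher-level service
# makes an automated decision (job placement) from the aggregated view.

metrics = {}  # node -> latest monitoring record

def publish(node, load, free_gb):
    """Called by a monitoring agent to report its node's state."""
    metrics[node] = {"load": load, "free_gb": free_gb}

def schedule(job_gb):
    """Higher-level service: least-loaded node with enough free storage."""
    candidates = [n for n, m in metrics.items() if m["free_gb"] >= job_gb]
    if not candidates:
        return None
    return min(candidates, key=lambda n: metrics[n]["load"])
```

The dynamic-routing and storage-allocation services mentioned in the abstract follow the same shape: a global metric view plus a selection rule.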
DualTrust: A Distributed Trust Model for Swarm-Based Autonomic Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiden, Wendy M.; Dionysiou, Ioanna; Frincke, Deborah A.
2011-02-01
For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, trust management is important for the acceptance of the mobile agent sensors and to protect the system from malicious behavior by insiders and by entities that have penetrated network defenses. This paper examines the trust relationships, evidence, and decisions in a representative system and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. We then propose the DualTrust conceptual trust model. By addressing the autonomic manager's bi-directional primary relationships in the ACS architecture, DualTrust is able to monitor the trustworthiness of the autonomic managers, protect the sensor swarm in a scalable manner, and provide global trust awareness for the orchestrating autonomic manager.
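The scalability argument, one trust score per autonomic manager instead of one per swarming sensor, can be made concrete with a toy evidence-based update. This is not the paper's actual model; the exponential-style update rule and threshold are invented assumptions.

```python
# Conceptual sketch: maintain a trust score per autonomic manager,
# updated from good/bad interaction evidence. Illustrative only.

def update_trust(score, outcome_good, gain=0.1):
    """Move the score toward 1 on good evidence, toward 0 on bad; stays in [0, 1]."""
    target = 1.0 if outcome_good else 0.0
    return score + gain * (target - score)

def is_trusted(score, threshold=0.5):
    """Decision point: should this manager's sensors be accepted?"""
    return score >= threshold
```

Because the number of managers is small and stable relative to the swarm, this per-manager bookkeeping is what keeps the approach scalable.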
Belief-desire reasoning in the explanation of behavior: do actions speak louder than words?
Wertz, Annie E; German, Tamsin C
2007-10-01
The mechanisms underwriting our commonsense psychology, or 'theory of mind', have been extensively investigated via reasoning tasks that require participants to predict the action of agents based on information about beliefs and desires. However, relatively few studies have investigated the processes contributing to a central component of 'theory of mind' - our ability to explain the action of agents in terms of underlying beliefs and desires. In two studies, we demonstrate a novel phenomenon in adult belief-desire reasoning, capturing the folk notion that 'actions speak louder than words'. When story characters were described as searching in the wrong place for a target object, adult subjects often endorsed mental state explanations referencing a distracter object, but only when that object was approached. We discuss how this phenomenon, alongside other reasoning "errors" (e.g., hindsight bias; the curse of knowledge) can be used to illuminate the architecture of domain specific belief-desire reasoning processes.
Quantum-enhanced deliberation of learning agents using trapped ions
NASA Astrophysics Data System (ADS)
Dunjko, V.; Friis, N.; Briegel, H. J.
2015-02-01
A scheme that successfully employs quantum mechanics in the design of autonomous learning agents has recently been reported in the context of the projective simulation (PS) model for artificial intelligence. In that approach, the key feature of a PS agent, a specific type of memory which is explored via random walks, was shown to be amenable to quantization, allowing for a speed-up. In this work we propose an implementation of such classical and quantum agents in systems of trapped ions. We employ a generic construction by which the classical agents are ‘upgraded’ to their quantum counterparts by a nested process of adding coherent control, and we outline how this construction can be realized in ion traps. Our results provide a flexible modular architecture for the design of PS agents. Furthermore, we present numerical simulations of simple PS agents which analyze the robustness of our proposal under certain noise models.
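The key PS ingredient mentioned above, a memory explored via random walks whose edge weights are strengthened by reward, can be sketched classically. This is a minimal illustration under assumptions: a two-layer clip network mapping percepts directly to actions, with hopping probabilities proportional to edge h-values; the class and method names are invented.

```python
import random

# Minimal classical projective-simulation-style agent (illustrative sketch).
# The quantized version in the paper speeds up the random walk this performs.

class PSAgent:
    def __init__(self, percepts, actions):
        # h-values: unnormalized hopping weights from percept clips to action clips
        self.h = {(p, a): 1.0 for p in percepts for a in actions}
        self.actions = actions

    def act(self, percept, rng=random.random):
        """Random walk step: sample an action with probability proportional to h."""
        weights = [self.h[(percept, a)] for a in self.actions]
        total = sum(weights)
        r, acc = rng() * total, 0.0
        for a, w in zip(self.actions, weights):
            acc += w
            if r <= acc:
                return a
        return self.actions[-1]

    def reward(self, percept, action, amount=1.0):
        """Learning: strengthen the traversed edge."""
        self.h[(percept, action)] += amount
```

A real PS memory also allows intermediate clips and h-value damping; this sketch keeps only the walk-and-reinforce core.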
Prevent and cure disuse bone loss
NASA Technical Reports Server (NTRS)
Jee, Webster S. S.
1994-01-01
Anabolic agents such as parathyroid hormone and prostaglandin E-like substances were studied in dogs and rats to determine their effectiveness in preventing and curing bone loss due to immobilization. It was determined that prostaglandin E2 administration prevented immobilization-induced bone loss while at the same time adding extra bone in a dose-responsive manner. Although bone mass returns after normal ambulation is resumed, poor trabecular architecture remains following recovery from immobilization. Disuse-related bone loss and poor trabecular architecture were cured by post-immobilization prostaglandin E2 treatment.
Continual planning and scheduling for managing patient tests in hospital laboratories.
Marinagi, C C; Spyropoulos, C D; Papatheodorou, C; Kokkotos, S
2000-10-01
Hospital laboratories perform examination tests on patients in order to assist medical diagnosis or monitor the progress of therapy. Planning and scheduling patient requests for examination tests is a complicated problem because it concerns both minimization of patient stay in hospital and maximization of laboratory resource utilization. In the present paper, we propose an integrated patient-wise planning and scheduling system which supports the dynamic and continual nature of the problem. The proposed combination of multiagent and blackboard architectures allows the dynamic creation of agents that share a set of knowledge sources and a knowledge base to service patient test requests.
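The blackboard idea, dynamically created agents posting partial schedules to a shared structure so that bookings respect both patient availability and lab resources, can be sketched as follows. The data layout and booking rule are invented for illustration, not the paper's system.

```python
# Illustrative blackboard sketch: each test-request agent books the earliest
# slot that is free for both the patient and the laboratory resource.

blackboard = {"bookings": []}  # shared state: list of (slot, patient, resource)

def book_test(patient, resource, slots):
    """Agent behavior: scan candidate slots in order, post the first feasible one."""
    taken = {(s, r) for s, _, r in blackboard["bookings"]}
    busy_patient = {s for s, p, _ in blackboard["bookings"] if p == patient}
    for slot in slots:
        if (slot, resource) not in taken and slot not in busy_patient:
            blackboard["bookings"].append((slot, patient, resource))
            return slot
    return None
```

Minimizing patient stay then amounts to each patient's agents preferring the earliest slots, while resource utilization is kept high because no slot is double-booked or left idle unnecessarily.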
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
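The cascading-effects behavior described for CONFIG rests on a standard discrete-event core: timestamped events run in time order, and each event may schedule further events. A minimal sketch of that core (illustrative only; CONFIG's actual machinery is far richer and also supports continuous simulation):

```python
import heapq

# Minimal discrete-event loop: events are (time, name, action) tuples run in
# time order; an action may schedule follow-up events (cascading effects).

def simulate(initial_events, horizon):
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name, action = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))
        # An event handler returns any follow-up events it causes.
        for follow_up in action(t) or []:
            heapq.heappush(queue, follow_up)
    return log
```

A component failure modeled this way naturally produces the downstream alarms and state changes at later timestamps that the abstract calls cascading effects.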
The Design of a Multi-Agent NDE Inspection Qualification System
NASA Astrophysics Data System (ADS)
McLean, N.; McKenna, J. P.; Gachagan, A.; McArthur, S.; Hayward, G.
2007-03-01
A novel Multi-Agent system (MAS) for NDE inspection qualification is being developed to facilitate a scalable environment allowing integration and automation of new and existing inspection qualification tools. This paper discusses the advantages of using a MAS approach to integrate the large number of disparate NDE software tools. The design and implementation of the system architecture is described, including the development of an ontology to describe the NDE domain.
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
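The MCDM-style trade comparison at the heart of this approach can be sketched as a weighted sum over normalized criteria. The criteria names, weights, and scores below are invented for illustration and are not the RAMSoS-RESIL model.

```python
# Hedged sketch of a weighted-sum MCDM trade study: each architecture gets a
# score over normalized criteria (higher is better), and alternatives are ranked.

def weighted_score(scores, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[c] * scores[c] for c in weights)

def rank(alternatives, weights):
    """alternatives: list of (name, {criterion: normalized score}) pairs."""
    return sorted(alternatives, key=lambda a: weighted_score(a[1], weights), reverse=True)
```

Varying the weights is how a design trade study probes the sensitivity of the distributed-vs-monolithic conclusion.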
Conceptual Modeling in the Time of the Revolution: Part II
NASA Astrophysics Data System (ADS)
Mylopoulos, John
Conceptual Modeling was a marginal research topic at the very fringes of Computer Science in the 60s and 70s, when the discipline was dominated by topics focusing on programs, systems and hardware architectures. Over the years, however, the field has moved to centre stage and has come to claim a central role both in Computer Science research and practice in diverse areas, such as Software Engineering, Databases, Information Systems, the Semantic Web, Business Process Management, Service-Oriented Computing, Multi-Agent Systems, Knowledge Management, and more. The transformation was greatly aided by the adoption of standards in modeling languages (e.g., UML), and model-based methodologies (e.g., Model-Driven Architectures) by the Object Management Group (OMG) and other standards organizations. We briefly review the history of the field over the past 40 years, focusing on the evolution of key ideas. We then note some open challenges and report on ongoing research, covering topics such as the representation of variability in conceptual models, capturing model intentions, and models of laws.
Design of Hybrid Mobile Communication Networks for Planetary Exploration
NASA Technical Reports Server (NTRS)
Alena, Richard L.; Ossenfort, John; Lee, Charles; Walker, Edward; Stone, Thom
2004-01-01
The Mobile Exploration System Project (MEX) at NASA Ames Research Center has been conducting studies into hybrid communication networks for future planetary missions. These networks consist of space-based communication assets connected to ground-based Internets and planetary surface-based mobile wireless networks. These hybrid mobile networks have been deployed in rugged field locations in the American desert and the Canadian arctic for support of science and simulation activities on at least six occasions. This work has been conducted over the past five years resulting in evolving architectural complexity, improved component characteristics and better analysis and test methods. A rich set of data and techniques have resulted from the development and field testing of the communication network during field expeditions such as the Haughton Mars Project and NASA Mobile Agents Project.
Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture
NASA Technical Reports Server (NTRS)
Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan
2014-01-01
With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that are able to provide necessary levels of capability, processing power, system availability, and redundancy while maintaining a small footprint in terms of physical space, power utilization, and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning, and analysis operations center. However, the SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (indeed, since its inception this has already happened in several cases). The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization, small-footprint compute environment whose Virtual Machine resources are shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully redundant GMSEC-based message bus architecture, based on the ActiveMQ middleware, to track all health and safety status within the SPOCC ground system.
All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls, logical system-level authentication and authorization management, auditing and reporting, network management, and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment, and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that traditionally filled multiple racks of physical machines. The increased agility in deploying new virtual systems and thin client workstations can provide significant savings in personnel costs for maintaining the ground system. The cost savings in procurement, power, rack footprint, and cooling, as well as the shared multi-mission design, greatly reduce the upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed.
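The limit-checking step described above, telemetry points checked against bounds, with off-nominal values turned into alert messages for the paging path, can be sketched generically. The point names, limits, and record fields are invented; a real deployment would publish GMSEC messages rather than return dictionaries.

```python
# Illustrative limit-checking sketch: compare telemetry points against
# low/high limits and emit alert records for off-nominal values.

LIMITS = {"battery_v": (24.0, 32.0), "temp_c": (-10.0, 45.0)}  # invented limits

def check_point(name, value):
    lo, hi = LIMITS[name]
    if value < lo or value > hi:
        return {"severity": "alert", "point": name, "value": value}
    return None  # nominal

def scan(telemetry):
    """One pass over a telemetry frame; returns the alerts to publish."""
    return [a for a in (check_point(k, v) for k, v in telemetry.items()) if a]
```

In the SPOCC design, the alert records produced by such a scan are what trigger the automatic page to the on-call operator.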
Agent Based Software for the Autonomous Control of Formation Flying Spacecraft
NASA Technical Reports Server (NTRS)
How, Jonathan P.; Campbell, Mark; Dennehy, Neil (Technical Monitor)
2003-01-01
Distributed satellite systems are an enabling technology for many future NASA/DoD earth and space science missions, such as MMS, MAXIM, Leonardo, and LISA [1, 2, 3]. While formation flying offers significant science benefits, reducing the operating costs of these missions will require that the multiple vehicles effectively act as a single spacecraft by performing coordinated observations. Autonomous guidance, navigation, and control, as part of coordinated fleet autonomy, is a key technology that will help accomplish this complex goal. This is no small task, as most current space missions require significant input from the ground for even relatively simple decisions such as thruster burns. Work for the NMP DS1 mission focused on the development of the New Millennium Remote Agent (NMRA) architecture for autonomous spacecraft control systems. NMRA integrates traditional real-time monitoring and control with components for constraint-based planning, robust multi-threaded execution, and model-based diagnosis and reconfiguration. The complexity of using an autonomous approach for space flight software was evident when most of its capabilities were stripped off prior to launch (although more capability was uplinked subsequently, and the resulting demonstration was very successful).
Adams, Annmarie; Theodore, David; Goldenberg, Ellie; McLaren, Coralee; McKeever, Patricia
2010-03-01
The study reported here adopts an interdisciplinary focus to elicit children's views about hospital environments. Based at the Hospital for Sick Children (SickKids), Toronto, the research explores the ways in which designers and patients understand and use the eight-storey lobby, The Atrium, a monumental addition constructed in 1993. It is a public place that never closes; hundreds of children pass through the namesake atrium every day. Combining methodological approaches from architectural history and health sociology, the intentions and uses of central features of the hospital atrium are examined. Data were collected from observations, focused interviews, and textual and visual documents. We locate the contemporary atrium in a historical context of building typologies rarely connected to hospital design, such as shopping malls, hotels and airports. We link the design of these multi-storey, glass-roofed spaces to other urban experiences, especially consumption, as normalizing forces in the everyday lives of Canadian children. Seeking to uncover children's self-identified, self-articulated place within contemporary pediatric hospitals, we assess how the atrium--by providing important, but difficult-to-measure functions such as comfort, socialization, interface, wayfinding, contact with nature and diurnal rhythms, and respite from adjacent medicalized spaces--contributes to the well-being of young patients. We used theoretical underpinnings from architecture and humanistic geography, and participatory methods advocated by child researchers and theorists. Our findings begin to address the significant gap in understanding about the relationship between the perceptions of children and the settings where their healthcare occurs. The study also underlines children's potential to serve as agents of architectural knowledge, reporting on and recording their observations of hospital architecture with remarkable sophistication.
Multifunctional ferritin cage nanostructures for fluorescence and MR imaging of tumor cells
NASA Astrophysics Data System (ADS)
Li, Ke; Zhang, Zhi-Ping; Luo, Ming; Yu, Xiang; Han, Yu; Wei, Hong-Ping; Cui, Zong-Qiang; Zhang, Xian-En
2011-12-01
Bionanoparticles and nanostructures have attracted increasing interest as versatile and promising tools in many applications, including biosensing and bioimaging. In this study, to image and detect tumor cells, ferritin cage-based multifunctional hybrid nanostructures were constructed that: (i) displayed both the green fluorescent protein and an Arg-Gly-Asp peptide on the exterior surface of the ferritin cages; and (ii) incorporated ferrimagnetic iron oxide nanoparticles into the ferritin interior cavity. The overall architecture of the ferritin cages did not change after being integrated with fusion proteins and ferrimagnetic iron oxide nanoparticles. These multifunctional nanostructures were successfully used as a fluorescent imaging probe and an MRI contrast agent for specifically probing and imaging αvβ3 integrin-upregulated tumor cells. The work provides a promising strategy for tumor cell detection by simultaneous fluorescence and MR imaging.
IAServ: an intelligent home care web services platform in a cloud for aging-in-place.
Su, Chuan-Jun; Chiang, Chang-Yu
2013-11-12
As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of healthcare service delivery through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatment to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, should be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet. PMID:24225647
IDEA: Planning at the Core of Autonomous Reactive Agents
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Clancy, Daniel (Technical Monitor)
2002-01-01
Several successful autonomous systems are separated into technologically diverse functional layers operating at different levels of abstraction. This diversity makes them difficult to implement and validate. In this paper, we present IDEA (Intelligent Distributed Execution Architecture), a unified planning and execution framework. In IDEA, a layered system can be implemented as separate agents, one per layer, each representing its interactions with the world in a model. At all levels, the model representation primitives and their semantics are the same. Moreover, each agent relies on a single model, plan database, and plan runner, and on a variety of planners, both reactive and deliberative. The framework allows the specification of agents that operate within a guaranteed reaction time and supports flexible specification of reactive vs. deliberative agent behavior. Within the IDEA framework we are working to fully duplicate the functionality of the DS1 Remote Agent and extend it to domains of higher complexity than autonomous spacecraft control.
INFORM Lab: a testbed for high-level information fusion and resource management
NASA Astrophysics Data System (ADS)
Valin, Pierre; Guitouni, Adel; Bossé, Eloi; Wehn, Hans; Happe, Jens
2011-05-01
DRDC Valcartier and MDA have created an advanced simulation testbed for evaluating the effectiveness of Network Enabled Operations in a Coastal Wide Area Surveillance situation, with algorithms provided by several universities. This INFORM Lab testbed allows experimenting with high-level distributed information fusion, dynamic resource management and configuration management, given multiple constraints on the resources and their communications networks. This paper describes the architecture of INFORM Lab, the essential concepts of goals and situation evidence, a selected set of algorithms for distributed information fusion and dynamic resource management, as well as auto-configurable information fusion architectures. The testbed provides general services which include a multilayer plug-and-play architecture and a general multi-agent framework based on John Boyd's OODA loop. The testbed's performance is demonstrated on two types of scenarios/vignettes: (1) cooperative search-and-rescue efforts, and (2) a noncooperative smuggling scenario involving many target ships and various methods of deceit. For each mission, an appropriate subset of Canadian airborne and naval platforms is dispatched to collect situation evidence, which is fused and then used to modify the platform trajectories for the most efficient collection of further situation evidence. These platforms are fusion nodes which obey a Command and Control node hierarchy.
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
Imparting the unique properties of DNA into complex material architectures and functions.
Xu, Phyllis F; Noh, Hyunwoo; Lee, Ju Hun; Domaille, Dylan W; Nakatsuka, Matthew A; Goodwin, Andrew P; Cha, Jennifer N
2013-07-01
While the remarkable chemical and biological properties of DNA have been known for decades, these properties have only been imparted into materials with unprecedented function much more recently. The inimitable ability of DNA to form programmable, complex assemblies through stable, specific, and reversible molecular recognition has allowed the creation of new materials through DNA's ability to control a material's architecture and properties. In this review we discuss recent progress in how DNA has brought unmatched function to materials, focusing specifically on new advances in delivery agents, devices, and sensors.
Emergent Aerospace Designs Using Negotiating Autonomous Agents
NASA Technical Reports Server (NTRS)
Deshmukh, Abhijit; Middelkoop, Timothy; Krothapalli, Anjaneyulu; Smith, Charles
2000-01-01
This paper presents a distributed design methodology where designs emerge as a result of negotiations between different stakeholders in the process, such as cost, performance, and reliability. The proposed methodology uses autonomous agents to represent design decision makers. Each agent influences specific design parameters in order to maximize its utility. Since the design parameters depend on the aggregate demand of all the agents in the system, design agents need to negotiate with others in the market economy in order to reach an acceptable utility value. This paper addresses several interesting research issues related to distributed design architectures. First, we present a flexible framework which facilitates decomposition of the design problem. Second, we present an overview of a market mechanism for generating acceptable design configurations. Finally, we integrate learning mechanisms into the design process to reduce the computational overhead.
NanoDesign: Concepts and Software for a Nanotechnology Based on Functionalized Fullerenes
NASA Technical Reports Server (NTRS)
Globus, Al; Jaffe, Richard; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Eric Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. While attractive, diamondoid nanotechnology is not physically accessible with straightforward extensions of current laboratory techniques. We propose a nanotechnology based on functionalized fullerenes and investigate carbon-nanotube-based gears with teeth added via a benzyne reaction known to occur with C60. The gears are single-walled carbon nanotubes with appended benzyne groups for teeth. Fullerenes are in widespread laboratory use and can be functionalized in many ways. Companion papers computationally demonstrate the properties of these gears (they appear to work) and the accessibility of the benzyne/nanotube reaction. This paper describes the molecular design techniques and rationale as well as the software that implements these design techniques. The software is a set of persistent C++ objects controlled by Tcl command scripts. The C++/Tcl interface is automatically generated by a software system called tcl_c++, developed by the author and described here. The objects keep track of different portions of the molecular machinery to allow different simulation techniques and boundary conditions to be applied as appropriate. This capability has been required to demonstrate (computationally) our gears' feasibility. A new distributed software architecture featuring a WWW universal client, CORBA distributed objects, and agent software is under consideration. The software architecture is intended to eventually enable a widely dispersed group to develop complex simulated molecular machines.
Designing protein-based biomaterials for medical applications.
Gagner, Jennifer E; Kim, Wookhyun; Chaikof, Elliot L
2014-04-01
Biomaterials produced by nature have been honed through billions of years, evolving exquisitely precise structure-function relationships that scientists strive to emulate. Advances in genetic engineering have facilitated extensive investigations to determine how changes in even a single peptide within a protein sequence can produce biomaterials with unique thermal, mechanical and biological properties. Elastin, a naturally occurring protein polymer, serves as a model protein to determine the relationship between specific structural elements and desirable material characteristics. The modular, repetitive nature of the protein facilitates the formation of well-defined secondary structures with the ability to self-assemble into complex three-dimensional architectures on a variety of length scales. Furthermore, many opportunities exist to incorporate other protein-based motifs and inorganic materials into recombinant protein-based materials, extending the range and usefulness of these materials in potential biomedical applications. Elastin-like polypeptides (ELPs) can be assembled into 3-D architectures with precise control over payload encapsulation, mechanical and thermal properties, as well as unique functionalization opportunities through both genetic and enzymatic means. An overview of current protein-based materials, their properties and uses in biomedicine will be provided, with a focus on the advantages of ELPs. Applications of these biomaterials as imaging and therapeutic delivery agents will be discussed. Finally, broader implications and future directions of these materials as diagnostic and therapeutic systems will be explored. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Huajuan; Zhao, Yanbao, E-mail: zhaoyb902@henu.edu.cn; Sun, Lei
Graphical abstract: A simple method for the synthesis of novel micrometer flower-like Cu/PVP architectures was introduced. Highlights: • Micrometer flower-like copper/polyvinylpyrrolidone architectures were obtained by a simple chemical route. • The amount of N₂H₄·H₂O, the reaction temperature, the molar ratio of CuCl₂ to PVP and the molecular weight of PVP play an important role in controlling the morphology of the Cu/PVP architectures. • A possible mechanism for the formation of the Cu/PVP architectures is discussed. -- Abstract: Micrometer-sized flower-like Cu/polyvinylpyrrolidone (PVP) architectures are synthesized by the reduction of copper(II) salt with hydrazine hydrate in aqueous solution in the presence of a PVP capping agent. The resulting Cu/PVP architectures are investigated by UV-vis spectroscopy, transmission electron microscopy (TEM), X-ray powder diffraction (XRD), and scanning electron microscopy (SEM). The Cu/PVP flowers have uniform morphologies with an average diameter of 10 μm and are made of several intercrossing plates. The formation of the Cu/PVP flowers is a kinetically controlled process, and factors such as the amount of N₂H₄·H₂O, the reaction temperature, the molar ratio of CuCl₂ to PVP and the molecular weight of PVP have a significant effect on the morphology of the Cu/PVP architectures. A possible mechanism for the formation of the micrometer Cu/PVP architectures is discussed.
The Role of Intelligent Agents in Advanced Information Systems
NASA Technical Reports Server (NTRS)
Kerschberg, Larry
1999-01-01
In this presentation we review the current ongoing research within George Mason University's (GMU) Center for Information Systems Integration and Evolution (CISE). We define characteristics of advanced information systems, discuss a family of agents for such systems, and show how GMU's Domain modeling tools and techniques can be used to define a product line Architecture for configuring NASA missions. These concepts can be used to define Advanced Engineering Environments such as those envisioned for NASA's new initiative for intelligent design and synthesis environments.
Corbacho, Fernando; Nishikawa, Kiisa C; Weerasuriya, Ananda; Liaw, Jim-Shih; Arbib, Michael A
2005-12-01
The previous companion paper describes the initial (seed) schema architecture that gives rise to the observed prey-catching behavior. In this second paper in the series we describe the fundamental adaptive processes required during learning after lesioning. Following bilateral transections of the hypoglossal nerve, anurans lunge toward mealworms with no accompanying tongue or jaw movement. Nevertheless, anurans with permanent hypoglossal transections eventually learn to catch their prey by first learning to open their mouth again and then lunging their body further and increasing their head angle. In this paper we present a new learning framework, called schema-based learning (SBL). SBL emphasizes the importance of the existing structure (schemas) that defines a functioning system, for the incremental and autonomous construction of ever more complex structure to achieve ever more complex levels of functioning. We may rephrase this statement into the language of Schema Theory (Arbib 1992, for a comprehensive review) as the learning of new schemas based on the stock of current schemas. SBL emphasizes a fundamental principle of organization called coherence maximization, which deals with the maximization of congruence between the results of an interaction (external or internal) and the expectations generated for that interaction. A central hypothesis consists of the existence of a hierarchy of predictive internal models (predictive schemas) all over the control center (the brain) of the agent. Hence, we will include predictive models in the perceptual, sensorimotor, and motor components of the autonomous agent architecture. We will then show that predictive models are fundamental for structural learning. In particular we will show how a system can learn a new structural component (augment the overall network topology) after being lesioned in order to recover (or even improve) its original functionality.
Learning after lesioning is a special case of structural learning but clearly shows that solutions cannot be known/hardwired a priori since it cannot be known, in advance, which substructure is going to break down.
Lane, D D; Chiu, D Y; Su, F Y; Srinivasan, S; Kern, H B; Press, O W; Stayton, P S; Convertine, A J
2015-02-28
Aqueous reversible addition-fragmentation chain transfer (RAFT) polymerization was employed to prepare a series of linear copolymers of N,N-dimethylacrylamide (DMA) and 2-hydroxyethylacrylamide (HEAm) with narrow Đ values over a molecular weight range spanning three orders of magnitude (10³ to 10⁶ Da). Trithiocarbonate-based RAFT chain transfer agents (CTAs) were grafted onto these scaffolds using carbodiimide chemistry catalyzed with DMAP. The resultant graft chain transfer agent (gCTA) was subsequently employed to synthesize polymeric brushes with a number of important vinyl monomer classes including acrylamido, methacrylamido, and methacrylate. Brush polymerization kinetics were evaluated for the aqueous RAFT polymerization of DMA from a 10-arm gCTA. Polymeric brushes containing hydroxyl functionality were further functionalized in order to prepare 2nd-generation gCTAs, which were subsequently employed to prepare polymers with a brushed-brush architecture with molecular weights in excess of 10⁶ Da. The resultant single-particle nanoparticles (SNPs) were employed as drug delivery vehicles for the anthracycline-based drug doxorubicin via copolymerization of DMA with a protected carbazate monomer (bocSMA). Cell-specific targeting functionality was also introduced via copolymerization with a biotin-functional monomer (bioHEMA). Drug release of the hydrazone-linked doxorubicin was evaluated as a function of pH and serum, and chemotherapeutic activity was evaluated in SKOV3 ovarian cancer cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Ling-Bao; Hou, Shu-Fen; Zhou, Jin
In the present work, we demonstrate an efficient and facile strategy to fabricate three-dimensional (3D) nitrogen-doped graphene aerogels (NGAs) based on melamine, which serves as a reducing and functionalizing agent for graphene oxide (GO) in an aqueous medium with ammonia. Benefiting from well-defined and cross-linked 3D porous network architectures, a supercapacitor based on the NGAs exhibited a high specific capacitance of 170.5 F g⁻¹ at 0.2 A g⁻¹, and this capacitance also showed good electrochemical stability and a high degree of reversibility in a repetitive charge/discharge cycling test. More interestingly, the prepared NGAs further exhibited high adsorption capacities and high recycling performance toward several metal ions such as Pb²⁺, Cu²⁺ and Cd²⁺. Moreover, the hydrophobic carbonized nitrogen-doped graphene aerogels (CNGAs) showed outstanding adsorption and recycling performance for the removal of various oils and organic solvents. - Graphical abstract: Three-dimensional nitrogen-doped graphene aerogels were prepared by using melamine as a reducing and functionalizing agent in an aqueous medium with ammonia; they showed multifunctional applications in supercapacitors and adsorption. - Highlights: • Three-dimensional nitrogen-doped graphene aerogels (NGAs) were prepared. • Melamine was used as a reducing and functionalizing agent. • NGAs exhibited relatively good electrochemical properties in a supercapacitor. • NGAs exhibited high adsorption performance toward several metal ions. • CNGAs showed outstanding adsorption capacities for various oils and solvents.
A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles
Alderisio, Francesco; Lombardi, Maria; Fiore, Gianfranco; di Bernardo, Mario
2017-01-01
Existing experimental works on movement coordination in human ensembles mostly investigate situations where each subject is connected to all the others through direct visual and auditory coupling, so that unavoidable social interaction affects their coordination level. Here, we present a novel computer-based set-up to study movement coordination in human groups so as to minimize the influence of social interaction among participants and implement different visual pairings between them. In so doing, players can only take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players in the group and their own dynamics. In addition, our set-up enables the deployment of virtual computer players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We show how this novel set-up can be employed to study coordination both in dyads and in groups over different structures of interconnections, in the presence as well as in the absence of virtual agents acting as followers or leaders. Finally, in order to illustrate the capabilities of the architecture, we describe some preliminary results. The platform is available to any researcher who wishes to unfold the mechanisms underlying group synchronization in human ensembles and shed light on its socio-psychological aspects. PMID:28649217
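Group synchronization of the kind studied on this platform is commonly quantified with a phase-based order parameter computed from the players' movements. The sketch below uses the Kuramoto order parameter as a generic illustration; the abstract does not state which metric the authors actually implemented, so treat this as an assumption:

```python
import cmath

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]: r = 1 means perfect synchrony,
    r near 0 means the phases are spread uniformly around the circle."""
    n = len(phases)
    z = sum(cmath.exp(1j * p) for p in phases) / n
    return abs(z)

# Perfectly aligned phases vs. evenly spread phases.
aligned = [0.3] * 5
spread = [2 * cmath.pi * k / 5 for k in range(5)]
print(round(order_parameter(aligned), 3))  # 1.0
print(round(order_parameter(spread), 3))   # 0.0
```

Applied per time step to phases extracted from each player's trajectory (e.g., via the Hilbert transform), this yields a time-varying coordination level for the group.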
NASA Astrophysics Data System (ADS)
Mamy Rakotoarisoa, Mahefa; Fleurant, Cyril; Taibi, Nuscia; Razakamanana, Théodore
2016-04-01
Hydrological risks, especially floods, are recurrent on the Fiherenana watershed in southwest Madagascar. The city of Toliara, which is located at the outlet of the river basin, is subjected each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with an analysis of the hazard, collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using simple statistical models (decomposition of time series). Two approaches are then conducted to assess the vulnerability of the city of Toliara and the surrounding villages. The first is a static approach based on field surveys and the use of GIS; the second is a multi-agent-based simulation model. The first step is the mapping of a vulnerability index, which is a combination of several static criteria. This is a microscale indicator (the scale used is the individual house). For each house there are several criteria of vulnerability: the potential water depth, the flow rate, and the architectural typology of the building. For the second part, agent-based simulations are used in order to evaluate the degree of vulnerability of homes to flooding. Agents are individual entities to which we can assign behaviours in order to simulate a given phenomenon. The aim is not to assign a criterion to the house as a physical building, such as its architectural typology or its strength; the model seeks to estimate the chances of the occupants of the house escaping a catastrophic flood. For this purpose, we compare various settings and scenarios. Some scenarios take into account the effect of certain decisions made by the responsible entities (for example, informing and raising awareness among the villagers).
The simulation consists of two essential parts taking place simultaneously in time: the simulation of the rise and flow of water using classical hydrological functions (transfer function and production function) within a multi-agent system, and the simulation of the behaviour of the people facing the arrival of the hazard.
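The coupled water-rise/behaviour simulation described above can be caricatured in a few lines: a terrain of cell heights, a linearly rising water level, and agents that greedily move toward higher ground. Every name and parameter below is invented for illustration; the actual model's hydrological functions and agent behaviours are far richer:

```python
def simulate_evacuation(heights, positions, flood_rate=1.0, steps=10):
    """Toy agent-based flood model on a 1-D terrain: the water level rises
    linearly each step; each agent greedily moves to the highest cell among
    its current cell and its immediate neighbours, and is caught if its
    cell's height falls below the water level. Returns the survivor count."""
    water = 0.0
    agents = [{"pos": p, "alive": True} for p in positions]
    for _ in range(steps):
        water += flood_rate
        for a in agents:
            if not a["alive"]:
                continue
            # Candidate moves: stay, step left, step right (clamped to terrain).
            moves = [a["pos"]] + [p for p in (a["pos"] - 1, a["pos"] + 1)
                                  if 0 <= p < len(heights)]
            a["pos"] = max(moves, key=lambda p: heights[p])
            if heights[a["pos"]] < water:  # caught by the flood
                a["alive"] = False
    return sum(a["alive"] for a in agents)

# Terrain rising toward a hill at the right end; both agents reach it in time.
terrain = [0, 1, 2, 3, 8, 12]
print(simulate_evacuation(terrain, positions=[2, 4]))  # 2
```

Scenario comparisons of the kind mentioned above would vary parameters such as the flood rate or the agents' decision rule (e.g., informed agents reacting earlier) and compare survivor counts.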
NASA Astrophysics Data System (ADS)
Mériaux, Sébastien; Conti, Allegra; Larrat, Benoît
2018-05-01
The characterization of extracellular space (ECS) architecture represents valuable information for the understanding of transport mechanisms occurring in brain parenchyma. ECS tortuosity reflects the hindrance imposed by cell membranes on molecular diffusion. Numerous strategies have been proposed to measure diffusion through the ECS and to estimate its tortuosity. The first method involves the perfusion, for several hours, of a radiotracer whose effective diffusion coefficient D* is determined after post-mortem processing. The most well-established techniques are real-time iontophoresis, which measures the concentration of a specific ion at a known distance from its release point, and integrative optical imaging, which relies on acquiring microscopy images of macromolecules labelled with a fluorophore. After presenting these methods, we focus on a recent Magnetic Resonance Imaging (MRI)-based technique that consists of acquiring concentration maps of a contrast agent diffusing within the ECS. Thanks to MRI properties, molecular diffusion and tortuosity can be estimated in 3D for deep brain regions. To further discuss the reliability of this technique, we point out the influence of the delivery method on the estimation of D*. We compare the value of D* for a contrast agent injected intracerebrally with its value when the agent is delivered to the brain after an ultrasound-induced blood-brain barrier (BBB) permeabilization. Several studies have already shown that tortuosity may be modified in pathological conditions. Therefore, we believe that MRI-based techniques could be useful in a clinical context for characterizing the diffusion properties of pathological ECS and thus predicting drug biodistribution into the targeted area.
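The tortuosity referred to above is conventionally defined from the free and effective diffusion coefficients as λ = √(D/D*). A minimal sketch; the numeric values below are purely illustrative, not taken from the study:

```python
import math

def tortuosity(d_free, d_eff):
    """Tortuosity lambda = sqrt(D_free / D_effective); lambda >= 1 whenever
    diffusion in the extracellular space is hindered by cell membranes."""
    if d_free <= 0 or d_eff <= 0:
        raise ValueError("diffusion coefficients must be positive")
    return math.sqrt(d_free / d_eff)

# Illustrative values: free diffusion vs. hindered diffusion in the ECS.
print(round(tortuosity(1.0, 0.4), 2))  # 1.58
```

Any method that yields D* (radiotracer, iontophoresis, optical imaging, or MRI concentration maps) plugs into the same formula once the free-medium coefficient D is known.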
Jelić, Andrea; Tieri, Gaetano; De Matteis, Federico; Babiloni, Fabio; Vecchiato, Giovanni
2016-01-01
Over the last few years, the efforts to reveal through neuroscientific lens the relations between the mind, body, and built environment have set a promising direction of using neuroscience for architecture. However, little has been achieved thus far in developing a systematic account that could be employed for interpreting current results and providing a consistent framework for subsequent scientific experimentation. In this context, the enactive perspective is proposed as a guide to studying architectural experience for two key reasons. Firstly, the enactive approach is specifically selected for its capacity to account for the profound connectedness of the organism and the world in an active and dynamic relationship, which is primarily shaped by the features of the body. Thus, particular emphasis is placed on the issues of embodiment and motivational factors as underlying constituents of the body-architecture interactions. Moreover, enactive understanding of the relational coupling between body schema and affordances of architectural spaces singles out the two-way bodily communication between architecture and its inhabitants, which can be also explored in immersive virtual reality settings. Secondly, enactivism has a strong foothold in phenomenological thinking that corresponds to the existing phenomenological discourse in architectural theory and qualitative design approaches. In this way, the enactive approach acknowledges the available common ground between neuroscience and architecture and thus allows a more accurate definition of investigative goals. Accordingly, the outlined model of architectural subject in enactive terms—that is, a model of a human being as embodied, enactive, and situated agent, is proposed as a basis of neuroscientific and phenomenological interpretation of architectural experience. PMID:27065937
77 FR 38071 - Statement of Organization, Functions and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-26
... countermeasures against chemical, biological, radiological and nuclear agents of terrorism, epidemics, and... engineering services and ensuring compliance with historic preservation and other laws and regulations related...) provides architectural and engineering services to other Agencies such as the Administration for Children...
Dual-energy micro-CT imaging for differentiation of iodine- and gold-based nanoparticles
NASA Astrophysics Data System (ADS)
Badea, C. T.; Johnston, S. M.; Qi, Y.; Ghaghada, K.; Johnson, G. A.
2011-03-01
Spectral CT imaging is expected to play a major role in the diagnostic arena as it provides material decomposition on an elemental basis. One fascinating possibility is the ability to discriminate multiple contrast agents targeting different biological sites. We investigate the feasibility of dual-energy micro-CT for discrimination of iodine (I) and gold (Au) contrast agents when simultaneously present in the body. Simulations and experiments were performed to measure the CT enhancement for I and Au over a range of voltages from 40 to 150 kVp using a dual-source micro-CT system. The selected voltages for dual-energy micro-CT imaging of Au and I were 40 kVp and 80 kVp. On a mass-concentration basis, the relative average enhancement of Au to I was 2.75 at 40 kVp and 1.58 at 80 kVp. We have demonstrated the method in a preclinical model of colon cancer to differentiate vascular architecture and extravasation. The concentration maps of Au and I allow a quantitative measure of the bio-distribution of both agents. In conclusion, dual-energy micro-CT can be used to discriminate probes containing I and Au, with immediate impact on pre-clinical research.
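Per-voxel discrimination of the two agents can be posed as a 2x2 basis-material decomposition using the enhancement ratios quoted above (Au/I of 2.75 at 40 kVp and 1.58 at 80 kVp). In the sketch below the absolute iodine enhancement values per mg/ml are invented; only the Au/I ratios come from the text:

```python
# Hedged sketch of dual-energy material decomposition: solve, per voxel,
#   HU_40 = a11 * c_I + a12 * c_Au
#   HU_80 = a21 * c_I + a22 * c_Au
# for the iodine and gold concentrations. The iodine enhancement values
# (e_i40, e_i80) are assumptions; the 2.75 and 1.58 ratios are from the text.

def decompose(hu40, hu80, e_i40=25.0, e_i80=20.0):
    """Return (c_iodine, c_gold) in mg/ml via 2x2 matrix inversion."""
    a11, a12 = e_i40, 2.75 * e_i40  # enhancement per mg/ml at 40 kVp
    a21, a22 = e_i80, 1.58 * e_i80  # enhancement per mg/ml at 80 kVp
    det = a11 * a22 - a12 * a21
    c_i = (a22 * hu40 - a12 * hu80) / det
    c_au = (a11 * hu80 - a21 * hu40) / det
    return c_i, c_au

# Forward-project a known mixture at both energies, then recover it.
c_i_true, c_au_true = 4.0, 2.0
hu40 = 25.0 * c_i_true + 2.75 * 25.0 * c_au_true
hu80 = 20.0 * c_i_true + 1.58 * 20.0 * c_au_true
ci, cau = decompose(hu40, hu80)
print(round(ci, 6), round(cau, 6))  # 4.0 2.0
```

Applying this voxel-wise to the 40 kVp and 80 kVp reconstructions yields the kind of I and Au concentration maps the abstract describes.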
Design for interaction between humans and intelligent systems during real-time fault management
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Thronesbery, Carroll G.
1992-01-01
Initial results are reported to provide guidance and assistance for designers of intelligent systems and their human interfaces. The objective is to achieve more effective human-computer interaction (HCI) for real time fault management support systems. Studies of the development of intelligent fault management systems within NASA have resulted in a new perspective of the user. If the user is viewed as one of the subsystems in a heterogeneous, distributed system, system design becomes the design of a flexible architecture for accomplishing system tasks with both human and computer agents. HCI requirements and design should be distinguished from user interface (displays and controls) requirements and design. Effective HCI design for multi-agent systems requires explicit identification of activities and information that support coordination and communication between agents. The effects of HCI design on overall system design are characterized, and approaches to addressing HCI requirements in system design are identified. The results include definition of (1) guidance based on information level requirements analysis of HCI, (2) high level requirements for a design methodology that integrates the HCI perspective into system design, and (3) requirements for embedding HCI design tools into intelligent system development environments.
Carlson, Alicia L.; Gillenwater, Ann M.; Williams, Michelle D.; El-Naggar, Adel K.; Richards-Kortum, R. R.
2009-01-01
Using current clinical diagnostic techniques, it is difficult to visualize tumor morphology and architecture at the cellular level, which is necessary for diagnostic localization of pathologic lesions. Optical imaging techniques have the potential to address this clinical need by providing real-time, sub-cellular resolution images. This paper describes the use of dual mode confocal microscopy and optical molecular-specific contrast agents to image tissue architecture, cellular morphology, and sub-cellular molecular features of normal and neoplastic oral tissues. Fresh tissue slices were prepared from 33 biopsies of clinically normal and abnormal oral mucosa obtained from 14 patients. Reflectance confocal images were acquired after the application of 6% acetic acid, and fluorescence confocal images were acquired after the application of a fluorescence contrast agent targeting the epidermal growth factor receptor (EGFR). The dual imaging modes provided images similar to light microscopy of hematoxylin and eosin and immunohistochemistry staining, but from thick fresh tissue slices. Reflectance images provided information on the architecture of the tissue and the cellular morphology. The nuclear-to-cytoplasmic (N/C) ratio from the reflectance images was at least 7.5 times greater for the carcinoma than the corresponding normal samples, except for one case of highly keratinized carcinoma. Separation of carcinoma from normal and mild dysplasia was achieved using this ratio (p<0.01). Fluorescence images of EGFR expression yielded a mean fluorescence labeling intensity (FLI) that was at least 2.7 times higher for severe dysplasia and carcinoma samples than for the corresponding normal sample, and could be used to distinguish carcinoma from normal and mild dysplasia (p<0.01). Analyzed together, the N/C ratio and the mean FLI may improve the ability to distinguish carcinoma from normal squamous epithelium. PMID:17877424
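The two quantitative separations reported above can be combined into a toy two-feature rule. A sketch only: the thresholds 7.5 and 2.7 are the minimum fold-changes reported for carcinoma relative to matched normal tissue, and treating them as decision cutoffs is our assumption, not the paper's classifier.

```python
def classify_lesion(nc_ratio_rel, fli_rel):
    """Toy rule based on the reported separations.
    nc_ratio_rel: N/C ratio relative to matched normal tissue (reflectance).
    fli_rel: mean fluorescence labeling intensity relative to normal (EGFR).
    """
    if nc_ratio_rel >= 7.5 or fli_rel >= 2.7:
        return "suspicious (carcinoma / severe dysplasia)"
    return "normal / mild dysplasia"
```

As the abstract notes (the highly keratinized carcinoma case), neither feature alone separates all samples, which is why analyzing them together may improve discrimination.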
Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform
Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.
2016-01-01
Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is, however, a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on the time lag in the activation of nearby retinal neurons. Mimicking ganglion cells, our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Here we describe the architectural aspects, discuss the system's latency, scalability, and robustness properties, and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how the precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015
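The core principle, motion read out from the activation lag of nearby retinal neurons, can be sketched in a few lines. This is a toy software abstraction of the Barlow-Levick scheme, not the analog multi-chip circuit; units and the two-pixel setup are illustrative.

```python
def motion_from_spikes(t_left_ms, t_right_ms, pitch_um):
    """Direction and speed from the spike times of two neighboring
    contrast-sensitive pixels separated by pitch_um micrometers.
    Returns None when no lag is resolvable (no motion estimate)."""
    dt = t_right_ms - t_left_ms
    if dt == 0:
        return None
    # Left pixel firing first (dt > 0) means the edge moved left-to-right.
    direction = "rightward" if dt > 0 else "leftward"
    speed_um_per_ms = pitch_um / abs(dt)
    return direction, speed_um_per_ms
```

An edge crossing two pixels 100 um apart with a 5 ms lag is thus estimated at 20 um/ms, with the sign of the lag giving direction.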
NASA Technical Reports Server (NTRS)
Srivastava, Sadanand; deLamadrid, James
1998-01-01
The User System Interface Agent (USIA) is a special type of software agent which acts as the "middle man" between a human user and an information processing environment. USIA consists of a group of cooperating agents which are responsible for assisting users in obtaining information processing services intuitively and efficiently. Some of the main features of USIA include: (1) multiple interaction modes and (2) user-specific and stereotype modeling and adaptation. This prototype system provides us with a development platform towards the realization of an operational information ecology. In the first phase of this project we focused on the design and implementation of a prototype of the User-System Interface Agent (USIA). The second phase of USIA allows user interaction via a restricted query language as well as through a taxonomy of windows. In the third phase, the USIA system architecture was revised.
A Demand-Driven Approach for a Multi-Agent System in Supply Chain Management
NASA Astrophysics Data System (ADS)
Kovalchuk, Yevgeniya; Fasli, Maria
This paper presents the architecture of a multi-agent decision support system for Supply Chain Management (SCM) which has been designed to compete in the TAC SCM game. The behaviour of the system is demand-driven and the agents plan, predict, and react dynamically to changes in the market. The main strength of the system lies in the ability of the Demand agent to predict customer winning bid prices - the highest prices the agent can offer customers and still obtain their orders. This paper investigates the effect of the ability to predict customer order prices on the overall performance of the system. Four strategies are proposed and compared for predicting such prices. The experimental results reveal which strategies are better and show that there is a correlation between the accuracy of the models' predictions and the overall system performance: the more accurate the prediction of customer order prices, the higher the profit.
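The Demand agent's key task, predicting the next customer winning bid price, can be illustrated with a generic smoothing strategy. This is purely illustrative: the abstract says four strategies were compared but does not specify them, so the exponential-smoothing predictor and its `alpha` below are assumptions.

```python
def predict_winning_bid(history, alpha=0.3):
    """Exponentially smoothed estimate of the next customer winning bid
    price from a list of past winning prices (oldest first)."""
    estimate = history[0]
    for price in history[1:]:
        estimate = alpha * price + (1 - alpha) * estimate
    return estimate
```

Comparing each strategy's prediction error against realized prices, and correlating that error with end-of-game profit, is the kind of experiment the paper reports.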
Theoretical backgrounds of non-tempered materials production based on new raw materials
NASA Astrophysics Data System (ADS)
Lesovik, V. S.; Volodchenko, A. A.; Glagolev, E. S.; Chernysheva, N. V.; Lashina, I. V.; Feduk, R. S.
2018-03-01
One of the trends in construction materials science is the development and implementation of highly effective finish materials which improve the architectural exterior of cities. Silicate materials widely used in construction today have rather low decorative properties. Different coloring agents are used in order to produce competitive materials, but due to the peculiarities of the production process, very strict specifications are applied to them. The use of industrial wastes or a variety of rock materials as coloring agents is of great interest nowadays. The article shows that clay rock can be used as a raw material in the production of finish materials of non-autoclaved solidification. Due to its material composition, this raw material actively interacts with the cementing component during steam treatment at 90–95 °C, forming cementing bonds that create a firm coagulation-crystallization structure and provide the high physico-mechanical properties of silicate products. It is determined that energy-saving, colored finish materials with a compression strength of up to 16 MPa can be produced from clay rocks.
Agreement Technologies for Energy Optimization at Home.
González-Briones, Alfonso; Chamoso, Pablo; De La Prieta, Fernando; Demazeau, Yves; Corchado, Juan M
2018-05-19
Nowadays, it is becoming increasingly common to deploy sensors in public buildings or homes with the aim of obtaining data from the environment and taking decisions that help to save energy. Many of the current state-of-the-art systems make decisions considering solely the environmental factors that cause the consumption of energy. These systems are successful at optimizing energy consumption; however, they do not adapt to the preferences of users and their comfort. Any system that is to be used by end-users should consider factors that affect their wellbeing. Thus, this article proposes an energy-saving system, which apart from considering the environmental conditions also adapts to the preferences of inhabitants. The architecture is based on a Multi-Agent System (MAS), its agents use Agreement Technologies (AT) to perform a negotiation process between the comfort preferences of the users and the degree of optimization that the system can achieve according to these preferences. A case study was conducted in an office building, showing that the proposed system achieved average energy savings of 17.15%.
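The trade-off being negotiated, user comfort versus the degree of optimization, can be illustrated with a one-shot compromise. The paper's Agreement Technologies negotiation is an iterative multi-agent process; this weighted average and its parameter only sketch the quantity the agents bargain over.

```python
def agree_setpoint(comfort_temp, eco_temp, comfort_weight=0.5):
    """One-shot compromise between a comfort agent's preferred
    temperature (degrees C) and the optimizer's energy-saving setpoint.
    comfort_weight = 1.0 fully honors the inhabitant's preference."""
    return comfort_weight * comfort_temp + (1 - comfort_weight) * eco_temp
```

A higher `comfort_weight` trades away some of the reported 17.15% average savings in exchange for wellbeing, which is exactly the adaptation the system negotiates.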
A Review of Norms and Normative Multiagent Systems
Mahmoud, Moamin A.; Ahmad, Mohd Sharifuddin; Mustapha, Aida
2014-01-01
Norms and normative multiagent systems have become subjects of interest for many researchers. Such interest is driven by the need for agents to exploit norms to enhance their performance in a community. The term norm is used to characterize the behaviours of community members. The concept of normative multiagent systems is used to facilitate collaboration and coordination among social groups of agents. Much research has been conducted on norms, investigating the fundamental concepts, definitions, classification, and types of norms and normative multiagent systems, including normative architectures and normative processes. However, very few studies comprehensively analyze the literature to advance the current state of norms and normative multiagent systems. Consequently, this paper presents the current state of research on norms and normative multiagent systems and proposes a norm life cycle model based on the review of the literature. Subsequently, this paper highlights significant areas for future work. PMID:25110739
NASA Astrophysics Data System (ADS)
Li, Qing; Wang, Ze-yuan; Cao, Zhi-chao; Du, Rui-yang; Luo, Hao
2015-08-01
With the process of globalisation and the development of management models and information technology, enterprise cooperation and collaboration has developed from intra-enterprise integration, outsourcing and inter-enterprise integration, and supply chain management, to virtual enterprises and enterprise networks. Some intermediary enterprises have begun to serve different supply chains, and therefore combine related supply chains into a complex enterprise network. The main challenges for an enterprise network's integration and collaboration are business process and data fragmentation beyond organisational boundaries. This paper reviews the requirements of enterprise network integration and collaboration, as well as the development of new information technologies. Based on service-oriented architecture (SOA), collaboration modelling and collaboration agents are introduced to solve problems of collaborative management for service convergence under conditions of process and data fragmentation. A model-driven methodology is developed to design and deploy the integrating framework. An industrial experiment is designed and implemented to illustrate the usage of the technologies developed in this paper.
Gupta, Anuradha; Meena, Jairam; Sharma, Deepak; Gupta, Pushpa; Gupta, Umesh Dutta; Kumar, Sadan; Sharma, Sharad; Panda, Amulya K; Misra, Amit
2016-09-06
Nitazoxanide (NTZ) has moderate mycobactericidal activity and is also an inducer of autophagy in mammalian cells. High-payload (40-50% w/w) inhalable particles containing NTZ alone or in combination with the anti-tuberculosis (TB) agents isoniazid (INH) and rifabutin (RFB) were prepared with a high incorporation efficiency of 92%. In vitro drug release was corrected for drug degradation during the course of the study and revealed first-order controlled release. Particles were efficiently taken up in vitro by macrophages and maintained intracellular drug concentrations one order of magnitude higher than NTZ in solution for 6 h. Dose-dependent killing of Mtb and restoration of lung and spleen architecture were observed in experimentally infected mice treated with inhalations containing NTZ. Adjunct NTZ with INH and RFB cleared culturable bacteria from the lung and spleen and markedly healed tissue architecture. NTZ can be used in combination with INH-RFB to kill the pathogen and heal the host.
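The first-order controlled release reported above follows the standard kinetic form f(t) = 1 - exp(-kt). A minimal sketch; the abstract reports the model but no fitted rate constant, so k here is hypothetical.

```python
import math

def fraction_released(t_hours, k_per_hour):
    """Cumulative fraction of drug released at time t under first-order
    kinetics, f(t) = 1 - exp(-k * t). k is a hypothetical rate constant."""
    return 1.0 - math.exp(-k_per_hour * t_hours)
```

Fitting k to release data corrected for degradation, as the study does, is what distinguishes true first-order release from apparent loss of drug.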
Agent-based re-engineering of ErbB signaling: a modeling pipeline for integrative systems biology.
Das, Arya A; Ajayakumar Darsana, T; Jacob, Elizabeth
2017-03-01
Experiments in systems biology are generally supported by a computational model which quantitatively estimates the parameters of the system by finding the best fit to the experiment. Mathematical models have proved to be successful in reverse engineering the system, and the data generated are interpreted to understand the dynamics of the underlying phenomena. The question we have sought to answer is: is it possible to use an agent-based approach to re-engineer a biological process, making use of the available knowledge from experimental and modelling efforts? Can the bottom-up approach benefit from the top-down exercise so as to create an integrated modelling formalism for systems biology? We propose a modelling pipeline that learns from the data given by reverse engineering, and uses it for re-engineering the system to carry out in-silico experiments. A mathematical model that quantitatively predicts co-expression of EGFR-HER2 receptors in activation and trafficking has been taken for this study. The pipeline architecture takes cues from the population model, which gives the rates of biochemical reactions, to formulate knowledge-based rules for the particle model. Agent-based simulations using these rules support the existing facts on EGFR-HER2 dynamics. We conclude that re-engineering models built using the results of reverse engineering opens up the possibility of harnessing the wealth of data which now lies scattered in the literature. Virtual experiments could then become more realistic when empowered with the findings of empirical cell biology and modelling studies. Implemented on the Agent Modelling Framework developed in-house; C++ code templates are available in the Supplementary material. Supplementary data are available at Bioinformatics online.
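One generic way to turn a population-level reaction rate into a knowledge-based rule for a particle model, which is the kind of mapping the pipeline performs, is to convert the rate constant into a per-agent probability per timestep. This is a standard technique, not necessarily the paper's exact mapping.

```python
import math

def per_agent_probability(k, dt):
    """Map a first-order rate constant k (1/s) from the ODE model into
    the probability that one agent undergoes the reaction during a
    simulation timestep dt (s): p = 1 - exp(-k * dt)."""
    return 1.0 - math.exp(-k * dt)
```

For small k*dt this reduces to p ~ k*dt, so agent-based trajectories averaged over many runs recover the population model's kinetics.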
Evolution of cooperative behavior in simulation agents
NASA Astrophysics Data System (ADS)
Stroud, Phillip D.
1998-03-01
A simulated automobile factory paint shop is used as a testbed for exploring the emulation of human decision-making behavior. A discrete-event simulation of the paint shop as a collection of interacting Java actors is described. An evolutionary cognitive architecture is under development for building software actors to emulate humans in simulations of human-dominated complex systems. In this paper, the cognitive architecture is extended by implementing a persistent population of trial behaviors with an incremental fitness valuation update strategy, and by allowing a group of cognitive actors to share information. A proof-of-principle demonstration is presented.
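An incremental fitness-valuation update for a persistent trial behavior can be sketched as an exponential moving average of observed payoffs. The abstract names the strategy; this particular recurrence and the learning rate are our assumptions.

```python
def update_valuation(current, payoff, rate=0.2):
    """Incrementally update one trial behavior's fitness valuation
    toward the payoff just observed (exponential moving average)."""
    return current + rate * (payoff - current)
```

Because the population of trial behaviors persists, each behavior's valuation accumulates evidence across episodes instead of being re-evaluated from scratch each generation.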
Envisioning Cognitive Robots for Future Space Exploration
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Stoica, Adrian
2010-01-01
Cognitive robots in the context of space exploration are envisioned with advanced capabilities of model building, continuous planning/re-planning, and self-diagnosis, as well as the ability to exhibit a level of 'understanding' of new situations. An overview of some JPL components (e.g. CASPER, CAMPOUT) and a description of the architecture CARACaS (Control Architecture for Robotic Agent Command and Sensing), which combines these in the context of a cognitive robotic system operating in various scenarios, are presented. Finally, two examples of typical scenarios, a multi-robot construction mission and a human-robot mission involving direct collaboration with humans, are given.
A distributed reasoning engine ecosystem for semantic context-management in smart environments.
Almeida, Aitor; López-de-Ipiña, Diego
2012-01-01
To be able to react adequately, a smart environment must be aware of the context and its changes. Modeling the context allows applications to better understand it and to adapt to its changes, and an appropriate formal representation method is needed to do this. Ontologies have proven to be one of the best tools for this, and semantic inference provides a powerful framework to reason over the context data. But there are problems with this approach: inference over semantic context information can be cumbersome when working with a large amount of data. This situation has become common in modern smart environments, where a large number of sensors and devices are available. In order to tackle this problem we have developed a mechanism to split the context reasoning problem into smaller parts and so reduce the inference time. In this paper we describe a distributed peer-to-peer agent architecture of context consumers and context providers. We explain how this inference sharing process works, partitioning the context information according to the agents' interests, location, and a certainty factor. We also discuss the system architecture, analyzing the negotiation process between the agents. Finally we compare the distributed reasoning with the centralized one, analyzing in which situations each approach is more suitable.
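The partitioning step, splitting context facts among provider agents by interest, location, and a certainty cutoff, can be sketched as follows. The field names and the certainty-factor rule are illustrative; the paper's ontology-based partitioning is richer.

```python
def partition_facts(facts, agents):
    """Assign each context fact to the provider agents whose declared
    interests and location match it and whose certainty requirement it
    meets, so each agent reasons over a smaller subset of the context."""
    assignment = {agent["name"]: [] for agent in agents}
    for fact in facts:
        for agent in agents:
            if (fact["type"] in agent["interests"]
                    and fact["location"] == agent["location"]
                    and fact["certainty"] >= agent["min_certainty"]):
                assignment[agent["name"]].append(fact)
    return assignment
```

Each agent then runs semantic inference only over its own partition, which is the source of the speedup compared with centralized reasoning.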
Grounding the Meanings in Sensorimotor Behavior using Reinforcement Learning
Farkaš, Igor; Malík, Tomáš; Rebrová, Kristína
2012-01-01
The recent surge of interest in cognitive developmental robotics is fueled by the ambition to propose ecologically plausible mechanisms of how, among other things, a learning agent/robot could ground linguistic meanings in its sensorimotor behavior. Along this stream, we propose a model that allows the simulated iCub robot to learn the meanings of actions (point, touch, and push) oriented toward objects in the robot's peripersonal space. In our experiments, the iCub learns to execute motor actions and comment on them. Architecturally, the model is composed of three neural-network-based modules that are trained in different ways. The first module, a two-layer perceptron, is trained by back-propagation to attend to the target position in the visual scene, given the low-level visual information and the feature-based target information. The second module, having the form of an actor-critic architecture, is the most distinguishing part of our model, and is trained by a continuous version of reinforcement learning to execute actions as sequences, based on a linguistic command. The third module, an echo-state network, is trained to provide the linguistic description of the executed actions. The trained model generalizes well in the case of novel action-target combinations with randomized initial arm positions. It can also promptly adapt its behavior if the action/target suddenly changes during motor execution. PMID:22393319
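The actor-critic principle at the heart of the second module can be shown in its simplest tabular form. The paper's module is a continuous, neural-network variant; this discrete sketch illustrates only the shared learning rule, with assumed step sizes.

```python
def actor_critic_step(v, prefs, s, a, reward, s_next,
                      alpha=0.1, beta=0.1, gamma=0.95):
    """One tabular actor-critic update: the TD error drives both the
    critic's state value v[s] and the actor's action preference."""
    delta = reward + gamma * v[s_next] - v[s]  # TD error
    v[s] += alpha * delta                      # critic update
    prefs[(s, a)] += beta * delta              # actor update
    return delta
```

A positive TD error simultaneously raises the value estimate of the state and the preference for the action just taken, which is how the module learns to execute action sequences from reward alone.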
An expert system for planning and scheduling in a telerobotic environment
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.
1991-01-01
A knowledge-based approach to assigning tasks to multiple agents working cooperatively on jobs that require a telerobot in the loop was developed. The generality of the approach allows the concept to be applied in a non-teleoperational domain. The planning architecture, known as the task-oriented planner (TOP), uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open-ended architecture developed with a NEXPERT expert system shell, and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and unstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique does not optimize a given scheduling criterion as in classical job-shop and flow-shop problems; for a teleoperation job schedule, criteria are situation dependent. Criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.
NASA Astrophysics Data System (ADS)
Breger, Joyce C.; Buckhout-White, Susan; Walper, Scott A.; Oh, Eunkeu; Susumu, Kimihiro; Ancona, Mario G.; Medintz, Igor L.
2017-06-01
Nanoparticle (NP) display potentially offers a new way to both stabilize and, in many cases, enhance enzyme activity over that seen for the native protein in solution. However, the large, globular and sometimes multimeric nature of many enzymes limits their ability to attach directly to the surface of NPs, especially when the latter are colloidally stabilized with bulky PEGylated ligands. Engineering extended protein linkers into the enzymes to achieve direct attachment through the PEG surface often detrimentally alters the enzyme's catalytic ability. Here, we demonstrate an alternate, hybrid biomaterials-based approach to achieving directed enzyme assembly on PEGylated NPs. We self-assemble a unique architecture consisting of a central semiconductor quantum dot (QD) scaffold displaying controlled ratios of extended peptide-DNA linkers which penetrate through the PEG surface to directly couple enzymes to the QD surface. As a test case, we utilize phosphotriesterase (PTE), an enzyme of bio-defense interest due to its ability to hydrolyze organophosphate nerve agents. Moreover, this unique approach still allows PTE to maintain enhanced activity while also suggesting the ability of DNA to enhance enzyme activity in and of itself.
NASA Astrophysics Data System (ADS)
Fatimah, S.; Wiharto, W.
2017-02-01
Acid Orange 7 (AO7) is a synthetic dye used in the dyeing process in the textile industry. This dye can produce wastewater that is hazardous if not treated well. Ozonation, a waste processing technique that uses ozone as an oxidizing agent, is one way to solve this problem. The variables used in this research were the ozone concentration, the initial concentration of AO7, temperature, and pH. The experimental results show that the optimum decolourization is 80% when the ozone concentration is 560 mg/L, the initial AO7 concentration is 14 mg/L, the temperature is 39 °C, and the pH is 7.6. The decolourization efficiency from the experiments was successfully modelled by a neural network trained with the quasi-Newton one-step secant algorithm, using 31 data points. A comparison between the predictions of the designed ANN model and the experiments was conducted, yielding a MAPE of 0.7763%. The ANN model gives an optimum decolourization of 80.64% when the ozone concentration is 550 mg/L, the initial AO7 concentration is 11 mg/L, the temperature is 41 °C, and the pH is 7.9.
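The figure of merit quoted for the ANN fit, MAPE, is a standard formula worth making explicit:

```python
def mape(actual, predicted):
    """Mean absolute percentage error between measured decolourization
    values and ANN predictions (the abstract reports MAPE = 0.7763%)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

A MAPE under 1% over the 31 data points indicates the network reproduces the experimental decolourization curve almost exactly.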
Hori, Hitoshi; Uto, Yoshihiro; Nakata, Eiji
2010-09-01
We describe herein for the first time our medicinal electronomics bricolage design of hypoxia-targeting antineoplastic drugs and boron tracedrugs as newly emerging drug classes. A new area of antineoplastic drugs and treatments has recently focused on neoplastic cells of the tumor environment/microenvironment involving accessory cells. This tumor hypoxic environment is now considered as a major factor that influences not only the response to antineoplastic therapies but also the potential for malignant progression and metastasis. We review our medicinal electronomics bricolage design of hypoxia-targeting drugs, antiangiogenic hypoxic cell radiosensitizers, sugar-hybrid hypoxic cell radiosensitizers, and hypoxia-targeting 10B delivery agents, in which we design drug candidates based on their electronic structures obtained by molecular orbital calculations, not based solely on pharmacophore development. These drugs include an antiangiogenic hypoxic cell radiosensitizer TX-2036, a sugar-hybrid hypoxic cell radiosensitizer TX-2244, new hypoxia-targeting indoleamine 2,3-dioxygenase (IDO) inhibitors, and a hypoxia-targeting BNCT agent, BSH (sodium borocaptate-10B)-hypoxic cytotoxin tirapazamine (TPZ) hybrid drug TX-2100. We then discuss the concept of boron tracedrugs as a new drug class having broad potential in many areas.
A game theoretic framework for incentive-based models of intrinsic motivation in artificial systems
Merrick, Kathryn E.; Shafi, Kamran
2013-01-01
An emerging body of research is focusing on understanding and building artificial systems that can achieve open-ended development influenced by intrinsic motivations. In particular, research in robotics and machine learning is yielding systems and algorithms with increasing capacity for self-directed learning and autonomy. Traditional software architectures and algorithms are being augmented with intrinsic motivations to drive cumulative acquisition of knowledge and skills. Intrinsic motivations have recently been considered in reinforcement learning, active learning and supervised learning settings among others. This paper considers game theory as a novel setting for intrinsic motivation. A game theoretic framework for intrinsic motivation is formulated by introducing the concept of optimally motivating incentive as a lens through which players perceive a game. Transformations of four well-known mixed-motive games are presented to demonstrate the perceived games when players' optimally motivating incentive falls in three cases corresponding to strong power, affiliation and achievement motivation. We use agent-based simulations to demonstrate that players with different optimally motivating incentive act differently as a result of their altered perception of the game. We discuss the implications of these results both for modeling human behavior and for designing artificial agents or robots. PMID:24198797
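The "lens" metaphor, players perceiving a game through their optimally motivating incentive, can be illustrated with one plausible payoff transformation. The abstract does not give the actual functional form, so the distance-based utility below is entirely an assumption.

```python
def perceived_payoff(raw_incentive, omi):
    """Hypothetical lens: perceived utility peaks when the raw incentive
    equals the player's optimally motivating incentive (omi) and decays
    with distance from it."""
    return -abs(raw_incentive - omi)
```

Under such a transform, players with low, moderate, and high `omi` (suggestive of achievement, affiliation, and power motivation respectively) rank the same raw payoffs differently, and so act differently in the same game.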
Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques
2013-03-01
Technical report, Polytechnic Institute of New York University; dates covered October 2010 to October 2012. Design methodologies and circuit-level schemes for a memristor-based reconfigurable architecture had not been fully explored; this project investigated memristor-based computing architectures, design methodologies, and circuit techniques.
Mechanical behaviour of degradable phosphate glass fibres and composites-a review.
Colquhoun, R; Tanner, K E
2015-12-23
Biodegradable materials are potentially an advantageous alternative to the traditional metallic fracture fixation devices used in the reconstruction of bone tissue defects. The mismatch between the elastic moduli of bone and a metal implant deprives the regenerating bone of mechanical stimulus, causing stress shielding in the surrounding bone tissue. However, although degradable polymers may alleviate such issues, these inert materials possess insufficient mechanical properties to be considered a suitable alternative to current metallic devices at sites of substantial mechanical loading. Phosphate-based glasses are an advantageous group of materials for tissue-regenerative applications due to their ability to degrade completely in vivo at highly controllable rates determined by the specific glass composition. Furthermore, the release of the glass's constituent ions can evoke a therapeutic stimulus in vivo (i.e. osteoinduction) whilst also generating a bioactive response. Processing these materials into fibres allows them to act as reinforcing agents in degradable polymers, simultaneously increasing the mechanical properties and enhancing the in vivo response. However, despite the various review articles on the compositional influences of different phosphate glass systems, there has been limited work summarising the mechanical properties of different phosphate-based glass fibres and their incorporation as reinforcing agents in degradable composite materials. As a result, this review examines the compositional influences behind the development of different phosphate-based glass fibre compositions intended as composite reinforcing agents, along with an analysis of different potential composite configurations, including variations in fibre content, matrix material and fibre architecture, as well as other novel composite designs.
Memristor-Based Synapse Design and Training Scheme for Neuromorphic Computing Architecture
2012-06-01
system level built upon the conventional Von Neumann computer architecture [2][3]. Developing the neuromorphic architecture at chip level by… creation of memristor-based neuromorphic computing architecture. Rather than the existing crossbar-based neuron network designs, we focus on memristor… (Contract FA8750-11-2-0046.)
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system (HMSR) resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services, acquiring accurate and reliable HMSR information dynamically through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score, with its associated mathematical formulas, was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access useful, accurate, reliable and good-quality HMSR information remotely for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
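The abstract above mentions a new semantic-based PageRank score but does not reproduce its formulas. As a minimal sketch of the general idea, the following weights the classic PageRank propagation by a per-page semantic relevance score; the function name, the weighting scheme, and the scores themselves are illustrative assumptions, not the paper's actual definition.

```python
def semantic_pagerank(links, sem_score, damping=0.85, iters=50):
    """Rank pages with a PageRank variant whose link weights are scaled
    by a semantic relevance score (a hypothetical stand-in for the
    paper's formula, which the abstract does not give)."""
    pages = sorted(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            inflow = 0.0
            for q in pages:
                if p in links[q]:
                    # Distribute q's rank over its out-links in proportion
                    # to each target's semantic score.
                    total = sum(sem_score[t] for t in links[q])
                    inflow += rank[q] * sem_score[p] / total
            new[p] = (1 - damping) / len(pages) + damping * inflow
        rank = new
    return rank

# Tiny example graph: with uniform semantic scores this reduces to
# ordinary PageRank.
links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
scores = semantic_pagerank(links, {"a": 1.0, "b": 1.0, "c": 1.0})
```

With non-uniform `sem_score` values, semantically relevant pages attract a larger share of each in-link's rank, which is the qualitative effect the paper's ranking aims for.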
A framework for building real-time expert systems
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1991-01-01
The Space Station Freedom is an example of complex systems that require both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada should be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX) which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.
Nanofluidic structures for coupled sensing and remediation of toxins
NASA Astrophysics Data System (ADS)
Shaw, K.; Contento, N. M.; Xu, Wei; Bohn, P. W.
2014-05-01
One foundational motivation for chemical sensing is that knowledge of the presence and level of a chemical agent informs decisions about treatment of the agent, for example by sequestration, separation or chemical conversion to a less harmful substance. Commonly the sensing and treatment steps are separate. However, the disjoint detection/treatment approach is neither optimal nor required. Thus, we are investigating how nanostructured architectures can be constructed so that molecular transport (analyte/reagent delivery), chemical sensing (optical or electrochemical) and subsequent treatment can all be coupled in the same physical space during the same translocation event. Chemical sensors that are uniquely well poised for integration into 3-D micro-/nanofluidic architectures include those based on plasmonics and impedance. Following detection, treatment can be substantially enhanced if mass transport limitations can be overcome. In this context, in situ generation of reactive species within confined geometries, such as nanopores or nanochannels, is of significant interest because of its potential utility in overcoming mass transport limitations in chemical reactivity. Solvent electrolysis in electrochemically coupled nanochannels supporting electrokinetic flow can produce arbitrarily tunable quantities of reagents, such as O2 or H2, in situ in close proximity to the site of a hydrogenation catalyst, for example. Semi-quantitative estimates of the local H2 concentration are obtained by comparing the spatiotemporal fluorescence behavior and current measurements with finite element simulations accounting for electrolysis and subsequent convection and diffusion within the confined geometry. H2 saturation can easily be achieved at modest overpotentials.
Situation Awareness of Onboard System Autonomy
NASA Technical Reports Server (NTRS)
Schreckenghost, Debra; Thronesbery, Carroll; Hudson, Mary Beth
2005-01-01
We have developed intelligent agent software for onboard system autonomy. Our approach is to provide control agents that automate crew and vehicle systems, and operations assistants that aid humans in working with these autonomous systems. We use the 3-Tier control architecture to develop the control agent software that automates system reconfiguration and routine fault management. We use the Distributed Collaboration and Interaction (DCI) System to develop the operations assistants that provide human services, including situation summarization, event notification, activity management, and support for manual commanding of autonomous systems. In this paper we describe how the operations assistants aid situation awareness of the autonomous control agents. We also describe our evaluation of the DCI System to support control engineers during a ground test at Johnson Space Center (JSC) of the Post Processing System (PPS) for regenerative water recovery.
Label-free tissue scanner for colorectal cancer screening
NASA Astrophysics Data System (ADS)
Kandel, Mikhail E.; Sridharan, Shamira; Liang, Jon; Luo, Zelun; Han, Kevin; Macias, Virgilia; Shah, Anish; Patel, Roshan; Tangella, Krishnarao; Kajdacsy-Balla, Andre; Guzman, Grace; Popescu, Gabriel
2017-06-01
The current practice of surgical pathology relies on external contrast agents to reveal tissue architecture, which is then qualitatively examined by a trained pathologist. The diagnosis is based on the comparison with standardized empirical, qualitative assessments of limited objectivity. We propose an approach to pathology based on interferometric imaging of "unstained" biopsies, which provides unique capabilities for quantitative diagnosis and automation. We developed a label-free tissue scanner based on "quantitative phase imaging," which maps out optical path length at each point in the field of view and, thus, yields images that are sensitive to the "nanoscale" tissue architecture. Unlike analysis of stained tissue, which is qualitative in nature and affected by color balance, staining strength and imaging conditions, optical path length measurements are intrinsically quantitative, i.e., images can be compared across different instruments and clinical sites. These critical features allow us to automate the diagnosis process. We paired our interferometric optical system with highly parallelized, dedicated software algorithms for data acquisition, allowing us to image at a throughput comparable to that of commercial tissue scanners while maintaining the nanoscale sensitivity to morphology. Based on the measured phase information, we implemented software tools for autofocusing during imaging, as well as image archiving and data access. To illustrate the potential of our technology for large volume pathology screening, we established an "intrinsic marker" for colorectal disease that detects tissue with dysplasia or colorectal cancer and flags specific areas for further examination, potentially improving the efficiency of existing pathology workflows.
Multi-agent systems: effective approach for cancer care information management.
Mohammadzadeh, Niloofar; Safdari, Reza; Rahimi, Azin
2013-01-01
Physicians need access to accurate, comprehensive, and timely cancer data in order to study the causes of cancer, detect cancer earlier, prevent it or determine the effectiveness of treatment, and identify the reasons when treatment is ineffective. The cancer care environment has become more complex because of the need for coordination and communication among health care professionals with different skills in a variety of roles, and because of the existence of large amounts of data in various formats. The goals of health care systems in such a complex environment are correct health data management, meeting users' information needs so as to enhance the integrity and quality of health care, timely access to accurate information, and the reduction of medical errors. Agent-based systems can perform these roles efficiently. Because of the potential capability of agent systems to solve complex and dynamic health problems, steps must be taken to make use of this technology if health care systems are to gain the full advantage of e-health. Multi-agent systems play an effective role in health service quality improvement, especially in telemedicine, emergency situations, and the management of chronic diseases such as cancer. In the design and implementation of agent-based systems, planning items such as information confidentiality and privacy, architecture, communication standards, ethical and legal aspects, and the identification of opportunities and barriers should be considered. It should be noted that approaching agent systems from a purely technical view is associated with many problems, such as a lack of user acceptance. The aim of this commentary is to survey the applications, opportunities and barriers of this new artificial intelligence tool for cancer care information, as an approach to improving cancer care management.
Framework of distributed coupled atmosphere-ocean-wave modeling system
NASA Astrophysics Data System (ADS)
Wen, Yuanqiao; Huang, Liwen; Deng, Jian; Zhang, Jinfeng; Wang, Sisi; Wang, Lijun
2006-05-01
In order to research the interactions between the atmosphere and ocean as well as their important role in the intensive weather systems of coastal areas, and to improve the forecasting ability of the hazardous weather processes of coastal areas, a coupled atmosphere-ocean-wave modeling system has been developed. The agent-based environment framework for linking models allows flexible and dynamic information exchange between models. For the purpose of flexibility, portability and scalability, the framework of the whole system takes a multi-layer architecture that includes a user interface layer, computational layer and service-enabling layer. The numerical experiment presented in this paper demonstrates the performance of the distributed coupled modeling system.
Metrics of a Paradigm for Intelligent Control
NASA Technical Reports Server (NTRS)
Hexmoor, Henry
1999-01-01
We present metrics for quantifying organizational structures of complex control systems intended for controlling long-lived robotic or other autonomous applications commonly found in space applications. Such advanced control systems are often called integration platforms or agent architectures. Reported metrics span concerns about time, resources, software engineering, and complexities in the world.
NASA Astrophysics Data System (ADS)
Jain, Manish; Wicks, Gary; Marshall, Andrew; Craig, Adam; Golding, Terry; Hossain, Khalid; McEwan, Ken; Howle, Chris
2014-05-01
Laser-based stand-off sensing of threat agents (e.g. explosives, toxic industrial chemicals or chemical warfare agents), by detection of distinct infrared spectral absorption signature of these materials, has made significant advances recently. This is due in part to the availability of infrared and terahertz laser sources with significantly improved power and tunability. However, there is a pressing need for a versatile, high performance infrared sensor that can complement and enhance the recent advances achieved in laser technology. This work presents new, high performance infrared detectors based on III-V barrier diodes. Unipolar barrier diodes, such as the nBn, have been very successful in the MWIR using InAs(Sb)-based materials, and in the MWIR and LWIR using type-II InAsSb/InAs superlattice-based materials. This work addresses the extension of the barrier diode architecture into the SWIR region, using GaSb-based and InAs-based materials. The program has resulted in detectors with unmatched performance in the 2-3 μm spectral range. Temperature dependent characterization has shown dark currents to be diffusion limited and equal to, or within a factor of 5, of the Rule 07 expression for Auger-limited HgCdTe detectors. Furthermore, D* values are superior to those of existing detectors in the 2-3 μm band. Of particular significance to spectroscopic sensing systems is the ability to have near-background limited performance at operation temperatures compatible with robust and reliable solid state thermoelectric coolers.
Multiagent pursuit-evasion games: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Kim, Hyounjin
Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. 
We then describe our implementation on a fleet of UGVs and UAVs, detailing components such as high-level pursuit policy computation, inter-agent communication, navigation, sensing, and regulation. We present both simulation and experimental results on real pursuit-evasion games between our fleet of UAVs and UGVs and evaluate the pursuit policies, relating expected capture times to the speed and intelligence of the evaders and the sensing capabilities of the pursuers. The architecture and algorithms described in this dissertation are general enough to be applied to many real-world applications.
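The two computationally feasible pursuit policies are only named in the abstract. Under the usual grid-world reading of such probabilistic pursuit-evasion games, a greedy pursuer moves to the adjacent cell with the highest estimated evader probability, while a global-max pursuer steps toward the cell whose probability is highest over the whole map. The sketch below encodes that reading; the map representation and function names are assumptions, not the dissertation's implementation.

```python
def greedy_move(pos, prob, n):
    """Greedy policy: move to the neighbouring cell (or stay put) with
    the highest estimated evader probability on an n-by-n grid."""
    x, y = pos
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if 0 <= x + dx < n and 0 <= y + dy < n]
    return max(neighbours, key=lambda c: prob.get(c, 0.0))

def global_max_move(pos, prob, n):
    """Global-max policy: take one step toward the cell where the
    evader probability is highest over the entire map."""
    tx, ty = max(prob, key=prob.get)
    step = lambda a, b: a + (b > a) - (b < a)  # move one cell toward b
    return (step(pos[0], tx), step(pos[1], ty))
```

The greedy policy reacts only to local probability mass, while global-max commits to the most likely evader location; the abstract's comparison of expected capture times is essentially a comparison of these two behaviours under noisy maps.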
Towards Timed Automata and Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Hutzler, G.; Klaudel, H.; Wang, D. Y.
2004-01-01
The design of reactive systems must satisfy both logical correctness (the system does what it is supposed to do) and timeliness (the system meets a set of temporal constraints) criteria. In this paper, we propose a global approach for the design of adaptive reactive systems, i.e., systems that dynamically adapt their architecture depending on the context. We use the timed automata formalism for the design of the agents' behavior. This allows the properties of the system (regarding logical correctness and timeliness) to be evaluated beforehand, thanks to model-checking and simulation techniques. The model is enhanced with tools that we developed for the automatic generation of code, allowing a running multi-agent prototype that satisfies the properties of the model to be produced very quickly.
Software Agents Applications Using Real-Time CORBA
NASA Astrophysics Data System (ADS)
Fowell, S.; Ward, R.; Nielsen, M.
This paper describes current projects being performed by SciSys in the area of the use of software agents, built using CORBA middleware, to improve operations within autonomous satellite/ground systems. These concepts have been developed and demonstrated in a series of experiments variously funded by ESA's Technology Flight Opportunity Initiative (TFO) and Leading Edge Technology for SMEs (LET-SME), and the British National Space Centre's (BNSC) National Technology Programme. Some of this earlier work has already been reported in [1]. This paper will address the trends, issues and solutions associated with this software agent architecture concept, together with its implementation using CORBA within an on-board environment, that is to say taking account of its real-time and resource-constrained nature.
A modeling process to understand complex system architectures
NASA Astrophysics Data System (ADS)
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first of the two main hypotheses, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations.
Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two architectures is better more than 98% of the time. The second objective led to Hypothesis B, the second of the main hypotheses. This hypothesis stated that by studying the functional relations, the most critical entities composing the architecture could be identified. The critical entities are those for which a slight variation in behavior causes a great variation in the behavior of the overall architecture. These are the entities that must be modeled more carefully and where modeling effort should be expended. This hypothesis was tested by simplifying agent-based models to the non-trivial minimum, and executing a large number of different simulations in order to obtain statistically significant results. The tests were conducted by evolving the complex model without any error induced, and then evolving the model once again for each ranking, assigning error to any of the nodes with a probability inversely proportional to the ranking. The results from this hypothesis test indicate that, depending on the structural characteristics of the functional relations, it is useful to use one of the two intelligent rankings tested, or it is best to expend effort equally amongst all the entities. Random ranking always performed worse than uniform ranking, indicating that if modeling effort is to be prioritized amongst the entities composing the large-scale system architecture, it should be prioritized intelligently. The benefit threshold between intelligent prioritization and no prioritization lies on the large-scale system's chaotic boundary. If the large-scale system behaves chaotically, small variations in any of the entities tend to have a great impact on the behavior of the entire system.
Therefore, even low ranking entities can still affect the behavior of the model greatly, and error should not be concentrated in any one entity. It was discovered that the threshold can be identified from studying the structure of the networks, in particular the cyclicity, the Off-diagonal Complexity, and the Digraph Algebraic Connectivity. (Abstract shortened by UMI.)
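Hypothesis A above rests on estimating the probability that each function-based relationship in the capability cycle occurs, then comparing architectures over randomly generated functional networks. A toy Monte Carlo version of that comparison can be sketched as follows; the independent per-link probabilities and the all-links-must-hold success criterion are simplifying assumptions for illustration, not the DiMA process itself.

```python
import random

def cycle_success_rate(link_probs, trials=5000, seed=1):
    """Estimate the probability that a randomly realized functional
    network completes the full capability cycle, assuming the i-th
    functional relationship appears independently with probability
    link_probs[i] (an illustrative simplification)."""
    rng = random.Random(seed)
    hits = sum(all(rng.random() < p for p in link_probs)
               for _ in range(trials))
    return hits / trials

# Compare two candidate architectures by their estimated ability to
# satisfy the capability (link probabilities are hypothetical).
arch_a = [0.9, 0.8, 0.9]
arch_b = [0.6, 0.8, 0.9]
better = "A" if cycle_success_rate(arch_a) > cycle_success_rate(arch_b) else "B"
```

The point of the sketch is the shape of the comparison: no constructive simulation is run, only sampling over function-based relationships, which is what makes the approach cheap relative to campaign-level simulation.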
A coupled duration-focused architecture for real-time music-to-score alignment.
Cont, Arshia
2010-06-01
The capacity for real-time synchronization and coordination while performing a music score is a common ability among trained musicians, and one that presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, music's temporal dynamics and complexity pose challenging problems for common approximations regarding the time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system is capable of following the musician in real time within the score and decoding the tempo (or pace) of the performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a hidden hybrid Markov/semi-Markov framework, where the prediction feedback of one agent affects the behavior of the other. We perform evaluations for both real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide, and readers are encouraged to access the actual system and experiment with it.
Architecture for spacecraft operations planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1991-01-01
A system which generates plans for the dynamic environment of space operations is discussed. This system synthesizes plans by combining known operations under a set of physical, functional, and temporal constraints drawn from various plan entities, which are modeled independently but combine in a flexible manner to suit dynamic planning needs. This independence allows the generation of a single plan source which can be compiled and applied to a variety of agents. The architecture blends elements of temporal logic, nonlinear planning, and object-oriented constraint modeling to achieve its flexibility. The system was applied to the domain of Intravehicular Activity (IVA) maintenance and repair aboard the Space Station Freedom testbed.
Multi-agent robotic systems and applications for satellite missions
NASA Astrophysics Data System (ADS)
Nunes, Miguel A.
A revolution in the space sector is happening. It is expected that in the next decade there will be more satellites launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets such as the autonomy and management of large groups of satellites, in particular with small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy and in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as multi-agent robotic system. Agents are defined as software programs that are social, reactive and proactive to autonomously maximize the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach for different problems of autonomy such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellations designs for Earth observations and fast revisit times on large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that the newly proposed system for this particular case achieves an equivalent control performance when compared to the monolithic implementation. 
In terms of computational efficiency, it is found that the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03, compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1% reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation and control algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions are implemented using existing algorithms, but one important contribution of this section is the introduction of a new six-degrees-of-freedom guidance method as part of the guidance, navigation and control architecture. This new method is an explicit solution to the guidance problem and is particularly useful for real-time guidance in attitude and position, as opposed to typical guidance methods, which are based on numerical solutions and therefore are computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, which is approximately 30% of a full orbital period in low Earth orbit. The final section investigates the problem of optimizing satellite constellations for fast revisit time, and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented. The first is the optimization of a constellation for rapid coverage of the oceans of the globe in 24 hours or less. Results show that for an 80 km sensor swath width, 50 satellites are required to cover the oceans with a 24-hour revisit time. The second constellation configuration study focuses on the optimization for rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less.
The results show that for a fixed swath width of 160 km and for a 3 hour revisit time 52 satellites are required.
The WorkQueue project - a task queue for the CMS workload management system
NASA Astrophysics Data System (ADS)
Ryu, S.; Wakefield, S.
2012-12-01
We present the development and first experience of a new component (termed WorkQueue) in the CMS workload management system. This component provides a link between a global request system (Request Manager) and agents (WMAgents) which process requests at compute and storage resources (known as sites). These requests typically consist of the creation or processing of a data sample (possibly terabytes in size). Unlike the standard concept of a task queue, the WorkQueue does not contain fully resolved work units (known typically as jobs in HEP), since that would require the WorkQueue to run computationally heavy algorithms that are better suited to the WMAgents. Instead, the request specifies an algorithm that the WorkQueue uses to split the request into reasonable-size chunks (known as elements). An advantage of this lazy evaluation of an element is that expanding datasets can be accommodated by having job details resolved as late as possible. The WorkQueue architecture consists of a global WorkQueue which obtains requests from the request system, expands them, and forms an element ordering based on the request priority. Each WMAgent contains a local WorkQueue which buffers work close to the agent; this overcomes temporary unavailability of the global WorkQueue and reduces the latency before an agent can begin processing. Elements are pulled from the global WorkQueue to the local WorkQueue and into the WMAgent based on the estimated amount of work within the element and the resources available to the agent. WorkQueue is based on CouchDB, a document-oriented NoSQL database, and uses CouchDB's features (map/reduce views and bi-directional replication between distributed instances) to provide a scalable distributed system for managing large queues of work. The project described here represents an improvement over the old approach to workload management in CMS, which involved individual operators feeding requests into agents.
This new approach allows for a system where individual WMAgents are transient and can be added or removed from the system as needed.
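The core ideas in the abstract above, lazy splitting of requests into coarse elements, priority ordering in a global queue, and agents pulling only as much work as their free resources allow, can be sketched in a few lines. The class and method names, and the dict-based element format, are illustrative assumptions rather than the actual CMS component's API.

```python
import heapq

class ToyWorkQueue:
    """Priority queue of coarse 'elements'; job-level detail is resolved
    later, inside the agent (the lazy-evaluation idea)."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def add_request(self, request, splitter, priority=0):
        # The splitter cuts a request into reasonable-size chunks;
        # it does not resolve individual jobs.
        for element in splitter(request):
            heapq.heappush(self._heap, (-priority, self._seq, element))
            self._seq += 1

    def pull(self, free_slots):
        """Hand out the highest-priority elements that fit the agent's
        free resources (stops at the first element that does not fit)."""
        out = []
        while self._heap and self._heap[0][2]["size"] <= free_slots:
            _, _, element = heapq.heappop(self._heap)
            out.append(element)
            free_slots -= element["size"]
        return out

def split_by_events(request, chunk=400):
    """Illustrative splitter: cut a request into chunks of up to
    `chunk` events each."""
    total = request["events"]
    return [{"size": min(chunk, total - i)} for i in range(0, total, chunk)]
```

A local queue in each agent would simply be a second `ToyWorkQueue` fed by `pull` on the global one, which is what buffers work against global-queue unavailability.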
Candida Biofilms: Development, Architecture, and Resistance
CHANDRA, JYOTSNA; MUKHERJEE, PRANAB K.
2015-01-01
Intravascular device–related infections are often associated with biofilms (microbial communities encased within a polysaccharide-rich extracellular matrix) formed by pathogens on the surfaces of these devices. Candida species are the most common fungi isolated from catheter-, denture-, and voice prosthesis–associated infections and also are commonly isolated from contact lens–related infections (e.g., fungal keratitis). These biofilms exhibit decreased susceptibility to most antimicrobial agents, which contributes to the persistence of infection. Recent technological advances have facilitated the development of novel approaches to investigate the formation of biofilms and identify specific markers for biofilms. These studies have provided extensive knowledge of the effect of different variables, including growth time, nutrients, and physiological conditions, on biofilm formation, morphology, and architecture. In this article, we will focus on fungal biofilms (mainly Candida biofilms) and provide an update on the development, architecture, and resistance mechanisms of biofilms. PMID:26350306
NASA Technical Reports Server (NTRS)
Albus, James S.
1996-01-01
The Real-time Control System (RCS) developed at NIST and elsewhere over the past two decades defines a reference model architecture for design and analysis of complex intelligent control systems. The RCS architecture consists of a hierarchically layered set of functional processing modules connected by a network of communication pathways. The primary distinguishing feature of the layers is the bandwidth of the control loops. The characteristic bandwidth of each level is determined by the spatial and temporal integration window of filters, the temporal frequency of signals and events, the spatial frequency of patterns, and the planning horizon and granularity of the planners that operate at each level. At each level, tasks are decomposed into sequential subtasks, to be performed by cooperating sets of subordinate agents. At each level, signals from sensors are filtered and correlated with spatial and temporal features that are relevant to the control function being implemented at that level.
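The layering described above can be sketched in miniature. The level names, the factor-of-ten spacing of loop periods, and the two-way task branching below are illustrative assumptions of ours, not values from the RCS specification:

```python
LEVELS = ["servo", "primitive", "elemental-move", "task", "mission"]

def cycle_time(level, base=0.005, scale=10.0):
    """Characteristic control-loop period: roughly an order of magnitude
    slower per level up the hierarchy (illustrative numbers)."""
    return base * scale ** level

def decompose(task, level, agents=2):
    """Decompose a task into sequential subtasks for subordinate agents,
    one refinement per level, down to the servo level."""
    if level == 0:
        return task
    return [decompose(f"{task}/{LEVELS[level - 1]}{i}", level - 1, agents)
            for i in range(agents)]
```

Here `decompose("mission", 2)` yields a two-level nest of subtask names, mirroring how each layer hands sequential subtasks to cooperating subordinates while closing its own control loop at its characteristic bandwidth.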
Experience Using Formal Methods for Specifying a Multi-Agent System
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Rash, James; Hinchey, Michael; Szczur, Martha R. (Technical Monitor)
2000-01-01
The process and results of using formal methods to specify the Lights Out Ground Operations System (LOGOS) are presented in this paper. LOGOS is a prototype multi-agent system developed to show the feasibility of providing autonomy to satellite ground operations functions at NASA Goddard Space Flight Center (GSFC). After the initial implementation of LOGOS, the development team decided to use formal methods to check for race conditions, deadlocks, and omissions. The specification exercise revealed several omissions as well as race conditions. After completing the specification, the team concluded that certain tools would have made the specification process easier. This paper gives a sample specification of two of the agents in the LOGOS system and examples of the omissions and race conditions found. It concludes by describing an architecture of tools that would better support the future specification of agents and other concurrent systems.
C3PO - A Dynamic Data Placement Agent for ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Beermann, T.; Lassnig, M.; Barisits, M.; Serfon, C.; Garonne, V.; ATLAS Collaboration
2017-10-01
This paper introduces a new dynamic data placement agent for the ATLAS distributed data management system. This agent is designed to pre-place potentially popular data to make it more widely available. It therefore incorporates information from a variety of sources. These include input dataset and site workload information from the ATLAS workload management system, network metrics from sources such as FTS and PerfSonar, historical popularity data collected through a tracer mechanism, and more. With these data it decides if, when, and where to place new replicas, which the WMS can then use to distribute the workload more evenly over available computing resources and ultimately reduce job waiting times. This paper gives an overview of the architecture and the final implementation of this new agent. The paper also includes an evaluation of the placement algorithm, comparing transfer times and new-replica usage.
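A toy version of such a placement decision might fold the popularity estimate together with network and workload metrics into a single score. The field names, weights, and popularity threshold below are invented for illustration and do not reproduce the C3PO algorithm:

```python
def placement_score(site, popularity, weights=(0.5, 0.3, 0.2)):
    """Score a candidate site for a new replica. All metrics are assumed
    pre-normalized to [0, 1]; the weights are arbitrary."""
    w_pop, w_net, w_load = weights
    return (w_pop * popularity
            + w_net * site["network_quality"]       # e.g. from FTS / PerfSonar
            + w_load * (1.0 - site["queue_load"]))  # prefer idle sites

def choose_sites(sites, popularity, n_replicas=2, threshold=0.5):
    """Pre-place replicas only for sufficiently popular data,
    on the best-scoring sites."""
    if popularity < threshold:
        return []
    ranked = sorted(sites, key=lambda s: placement_score(s, popularity),
                    reverse=True)
    return [s["name"] for s in ranked[:n_replicas]]
```

With three candidate sites, a popular dataset lands on the two sites with the best combined network quality and idleness, while an unpopular one triggers no placement at all.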
A computational neural model of goal-directed utterance selection.
Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji
2010-06-01
It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
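The core selection mechanism, an internal transition model plus a value function over predicted outcomes, can be sketched as a one-step lookahead. The states, actions, values, and costs below are made-up illustrative numbers, not parameters from the paper:

```python
# States: 0 = goal unmet, 1 = listener has acted, 2 = agent did it itself.
ACTIONS = ["say_request", "do_it_yourself", "do_nothing"]

MODEL = {  # internal model: (state, action) -> predicted next state
    (0, "say_request"): 1,
    (0, "do_it_yourself"): 2,
    (0, "do_nothing"): 0,
}
VALUE = {0: 0.0, 1: 1.0, 2: 0.6}  # learned desirability of each state
COST = {"say_request": 0.1, "do_it_yourself": 0.3, "do_nothing": 0.0}

def select_action(state):
    """Choose the action whose predicted outcome has the highest value
    net of action cost: one-step lookahead through the internal model."""
    return max(ACTIONS, key=lambda a: VALUE[MODEL[(state, a)]] - COST[a])
```

With these numbers the agent speaks rather than acting itself or staying silent, because the predicted post-utterance state is worth more than its utterance cost; changing the values or costs shifts the choice toward non-verbal action or doing nothing.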
Smart Systems for Logistics Command and Control (SSLC2)
2004-06-01
AFRL risk abatement: awareness of key development projects (AF Portal, GCSS-AF, TBMCS-UL, Enterprise Data Warehouse, Logistics Enterprise Architecture) and early identification of Transition Agents. Collaboration partners: AF-ILMM, AMC/A-4, AFC2ISRC, AFMC LSO.
Reverse-Scaffolding Algebra: Empirical Evaluation of Design Architecture
ERIC Educational Resources Information Center
Chase, Kiera; Abrahamson, Dor
2015-01-01
Scaffolding is the asymmetrical social co-enactment of natural or cultural practice, wherein a more able agent implements or performs for a novice elements of a challenging activity. What the novice may not learn, however, is how the expert's co-enactments support the activity. Granted, in many cultural practices novices need not understand…
An Approach for Autonomy: A Collaborative Communication Framework for Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Dufrene, Warren Russell, Jr.
2005-01-01
Research done during the last three years has studied the emergent properties of Complex Adaptive Systems (CAS). The deployment of Artificial Intelligence (AI) techniques applied to remote Unmanned Aerial Vehicles has led the author to investigate applications of CAS within the field of Autonomous Multi-Agent Systems. The core objective of current research efforts is the simplicity of Intelligent Agents (IA) and the modeling of these agents within complex systems. This research effort looks at the communication, interaction, and adaptability of multiple agents as applied to complex systems control. The embodiment concept applied to robotics has application possibilities within multi-agent frameworks. A new framework for agent awareness within a virtual 3D world concept is possible, in which the vehicle is composed of collaborative agents. This approach has many possibilities for applications to complex systems. This paper describes the development of an approach to apply this virtual framework to the NASA Goddard Space Flight Center (GSFC) tetrahedron structure developed under the Autonomous Nano Technology Swarm (ANTS) program and the Super Miniaturized Addressable Reconfigurable Technology (SMART) architecture program. These projects represent an innovative set of novel concepts deploying adaptable, self-organizing structures composed of many tetrahedrons. This technology is pushing current applied agent concepts to new levels of requirements and adaptability.
2011-11-01
2015-05-01
Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web-Based and Mobile Devices. Walt Scacchi and Thomas... Emerging challenges in achieving Better Buying Power (BBP) via open architecture (OA) software systems for Web-based and mobile devices.
Panmictic and Clonal Evolution on a Single Patchy Resource Produces Polymorphic Foraging Guilds
Getz, Wayne M.; Salter, Richard; Lyons, Andrew J.; Sippl-Swezey, Nicolas
2015-01-01
We develop a stochastic, agent-based model to study how genetic traits and experiential changes in the state of agents and available resources influence individuals’ foraging and movement behaviors. These behaviors are manifest as decisions on when to stay and exploit a current resource patch or move to a particular neighboring patch, based on information about the resource qualities of the patches and the anticipated level of intraspecific competition within patches. We use a genetic algorithm approach and an individual’s biomass as a fitness surrogate to explore the foraging strategy diversity of evolving guilds under clonal versus hermaphroditic sexual reproduction. We first present the resource exploitation processes, movement on cellular arrays, and genetic algorithm components of the model. We then discuss their implementation on the Nova software platform. This platform seamlessly combines the dynamical systems modeling of consumer-resource interactions with agent-based modeling of individuals moving over a landscape, using an architecture that makes transparent the following four hierarchical simulation levels: 1.) within-patch consumer-resource dynamics, 2.) within-generation movement and competition mitigation processes, 3.) across-generation evolutionary processes, and 4.) multiple runs to generate the statistics needed for comparative analyses. The focus of our analysis is on the question of how the biomass production efficiency and the diversity of guilds of foraging strategy types, exploiting resources over a patchy landscape, evolve under clonal versus random hermaphroditic sexual reproduction. Our results indicate greater biomass production efficiency under clonal reproduction only at higher population densities, and demonstrate that polymorphisms evolve and are maintained under random mating systems.
The latter result questions the notion that some type of associative mating structure is needed to maintain genetic polymorphisms among individuals exploiting a common patchy resource on an otherwise spatially homogeneous landscape. PMID:26274613
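Two of the model's ingredients, the stay-or-move patch decision and the clonal versus random-mating update of a heritable trait, can be sketched in miniature. A single scalar threshold trait and roulette-wheel selection are simplifications of ours, not the Nova implementation:

```python
import random

def stay_or_move(patch_quality, competitors, threshold):
    """Stay if expected per-capita intake beats the agent's heritable threshold."""
    return "stay" if patch_quality / (1 + competitors) >= threshold else "move"

def next_generation(traits, fitness, sexual=False, rng=random):
    """Fitness-proportional (roulette-wheel) selection on one heritable trait.
    Clonal: offspring copy one parent's trait. Random mating: offspring average
    the traits of two selected parents (a crude stand-in for hermaphroditic
    sexual reproduction)."""
    def parent():
        return rng.choices(traits, weights=fitness)[0]
    if sexual:
        return [(parent() + parent()) / 2 for _ in traits]
    return [parent() for _ in traits]
```

Note how sexual reproduction blends trait values toward the middle of the parental range, while clonal reproduction can only reshuffle existing values, which is one intuition behind the differing polymorphism outcomes reported above.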
Nissan, Noam; Furman-Haran, Edna; Feinberg-Shapiro, Myra; Grobgeld, Dov; Eyal, Erez; Zehavi, Tania; Degani, Hadassa
2014-12-15
Breast cancer is the most common cause of cancer among women worldwide. Early detection of breast cancer has a critical role in improving the quality of life and survival of breast cancer patients. In this paper a new approach for the detection of breast cancer is described, based on tracking the mammary architectural elements using diffusion tensor imaging (DTI). The paper focuses on the scanning protocols and image processing algorithms and software that were designed to fit the diffusion properties of the mammary fibroglandular tissue and its changes during malignant transformation. The final output yields pixel by pixel vector maps that track the architecture of the entire mammary ductal glandular trees and parametric maps of the diffusion tensor coefficients and anisotropy indices. The efficiency of the method to detect breast cancer was tested by scanning women volunteers including 68 patients with breast cancer confirmed by histopathology findings. Regions with cancer cells exhibited a marked reduction in the diffusion coefficients and in the maximal anisotropy index as compared to the normal breast tissue, providing an intrinsic contrast for delineating the boundaries of malignant growth. Overall, the sensitivity of the DTI parameters to detect breast cancer was found to be high, particularly in dense breasts, and comparable to the current standard breast MRI method that requires injection of a contrast agent. Thus, this method offers a completely non-invasive, safe and sensitive tool for breast cancer detection.
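The parametric maps mentioned above derive from the eigenvalues of the local diffusion tensor. A minimal NumPy sketch of the standard DTI quantities follows; the maximal anisotropy index is taken here as λ1 − λ3, which may differ from the paper's exact definition:

```python
import numpy as np

def dti_indices(D):
    """Mean diffusivity, fractional anisotropy (FA), and a maximal
    anisotropy index (lambda1 - lambda3) from a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(D)[::-1]   # eigenvalues, descending
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa, lam[0] - lam[2]
```

An isotropic tensor gives FA near zero, while a tensor elongated along one axis (as in an intact duct) gives high FA; malignant regions, per the abstract, show reduced diffusion coefficients and reduced maximal anisotropy relative to normal tissue.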
Demographic management in a federated healthcare environment.
Román, I; Roa, L M; Reina-Tosina, J; Madinabeitia, G
2006-09-01
The purpose of this paper is to provide a further step toward the decentralization of identification and demographic information about persons by solving issues related to the integration of demographic agents in a federated healthcare environment. The aim is to identify a particular person in every system of a federation and to obtain a unified view of his/her demographic information stored in different locations. This work is based on semantic models and techniques, and pursues the reconciliation of several current standardization works, including ITU-T's Open Distributed Processing, CEN's prEN 12967, OpenEHR's dual and reference models, CEN's General Purpose Information Components, and CORBAmed's PID service. We propose a new paradigm for the management of person identification and demographic data, based on the development of an open architecture of specialized distributed components together with the incorporation of techniques for the efficient management of domain ontologies, in order to provide a federated demographic service. This new service enhances previous correlation solutions, sharing ideas with different standards and domains such as semantic techniques and database systems. The federation philosophy forces us to devise solutions to the semantic, functional, and instance incompatibilities in our approach. Although this work is based on several models and standards, we have improved them by combining their contributions and developing a federated architecture that does not require the centralization of demographic information. The solution is thus a good approach to facing integration problems, and the applied methodology can be easily extended to other tasks involved in the healthcare organization.
NASA Astrophysics Data System (ADS)
Lucon, Janice; Qazi, Shefah; Uchida, Masaki; Bedwell, Gregory J.; Lafrance, Ben; Prevelige, Peter E.; Douglas, Trevor
2012-10-01
Virus-like particles (VLPs) have emerged as important and versatile architectures for chemical manipulation in the development of functional hybrid nanostructures. Here we demonstrate a successful site-selective initiation of atom-transfer radical polymerization reactions to form an addressable polymer constrained within the interior cavity of a VLP. Potentially, this protein-polymer hybrid of P22 and cross-linked poly(2-aminoethyl methacrylate) could be useful as a new high-density delivery vehicle for the encapsulation and delivery of small-molecule cargos. In particular, the encapsulated polymer can act as a scaffold for the attachment of small functional molecules, such as fluorescein dye or the magnetic resonance imaging (MRI) contrast agent Gd-diethylenetriaminepentaacetate, through reactions with its pendant primary amine groups. Using this approach, a significant increase in the labelling density of the VLP, compared to that of previous modifications of VLPs, can be achieved. These results highlight the use of multimeric protein-polymer conjugates for their potential utility in the development of VLP-based MRI contrast agents with the possibility of loading other cargos.
Paratala, Bhavna S.; Jacobson, Barry D.; Kanakia, Shruti; Francis, Leonard Deepak; Sitharaman, Balaji
2012-01-01
The chemistry of high-performance magnetic resonance imaging contrast agents remains an active area of research. In this work, we demonstrate that the potassium permanganate-based oxidative chemical procedures used to synthesize graphite oxide or graphene nanoparticles lead to the confinement (intercalation) of trace amounts of Mn2+ ions between the graphene sheets, and that these manganese-intercalated graphitic and graphene structures show disparate structural, chemical, and magnetic properties, high relaxivity (up to 2 orders of magnitude greater), and distinctly different nuclear magnetic resonance dispersion profiles compared to paramagnetic chelate compounds. The results, taken together with other published reports on the confinement of paramagnetic metal ions within single-walled carbon nanotubes (a rolled-up graphene sheet), show that confinement (encapsulation or intercalation) of paramagnetic metal ions within graphene sheets, and not the size, shape, or architecture of the graphitic carbon particles, is the key determinant for increasing relaxivity. This identifies nanoconfinement of paramagnetic ions as a novel general strategy to develop paramagnetic metal-ion graphitic-carbon complexes as high-relaxivity MRI contrast agents. PMID:22685555
Functional polymers as therapeutic agents: concept to market place.
Dhal, Pradeep K; Polomoscanik, Steven C; Avila, Louis Z; Holmes-Farley, S Randall; Miller, Robert J
2009-11-12
Biologically active synthetic polymers have received considerable scientific interest and attention in recent years for their potential as promising novel therapeutic agents to treat human diseases. Although a significant amount of research has been carried out involving polymer-linked drugs as targeted and sustained-release drug delivery systems and prodrugs, there are relatively few examples of bioactive polymers that exhibit intrinsic therapeutic properties. Several appealing characteristics of synthetic polymers, including high molecular weight, molecular architecture, and controlled polydispersity, can all be utilized to discover a new generation of therapies. For example, high molecular weight bioactive polymers can be restricted to the gastrointestinal tract, where they can selectively recognize, bind, and remove target disease-causing substances from the body. The appealing features of GI-tract restriction and stability in the biological environment render these polymeric drugs devoid of the systemic toxicity generally associated with small-molecule systemic drugs. The present article highlights recent developments in the rational design and synthesis of appropriate functional polymers that have resulted in a number of promising polymer-based therapies and biomaterials, including some marketed products.
Bee Swarm Optimization for Medical Web Information Foraging.
Drias, Yassine; Kechid, Samir; Pasi, Gabriella
2016-02-01
The present work is related to Web intelligence and, more precisely, to medical information foraging. We present a novel approach based on agent technology for information foraging. An architecture is proposed in which we distinguish two important phases. The first is a learning process for localizing the most relevant pages that might interest the user, performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to account for the changes the Web undergoes. The whole system offers a tool to help the user undertake information foraging. We implemented the system using a group of cooperative reactive agents, more precisely a colony of artificial bees. In order to validate our proposal, experiments were conducted on MedlinePlus, a benchmark dedicated to research in the health domain. The results are promising, both for the discovered Web regularities and for the response time, which is short enough to comply with real-time constraints.
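To give a flavor of the bee-colony metaphor (though not of the paper's actual foraging system), a minimal artificial bee colony optimizer over a one-dimensional landscape looks like this; all parameters are illustrative:

```python
import random

def abc_minimize(f, bounds, n_bees=10, iters=100, limit=10, seed=0):
    """Minimal 1-D artificial bee colony: employed bees refine good food
    sources; sources exhausted for `limit` trials are abandoned by scouts."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_bees)]
    trials = [0] * n_bees
    best = min(xs, key=f)
    for _ in range(iters):
        for i in range(n_bees):
            # Employed bee: local move relative to a random partner source.
            j = rng.randrange(n_bees)
            cand = xs[i] + rng.uniform(-1, 1) * (xs[i] - xs[j])
            cand = min(max(cand, lo), hi)
            if f(cand) < f(xs[i]):          # greedy acceptance
                xs[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:           # scout: random restart
                xs[i], trials[i] = rng.uniform(lo, hi), 0
            if f(xs[i]) < f(best):
                best = xs[i]
    return best
```

In the information-foraging setting, `f` would score page relevance rather than a numeric function, and the local move would explore neighboring links rather than nearby numbers; the exploit-then-abandon dynamic is the same.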
Agreement Technologies for Energy Optimization at Home
2018-01-01
Nowadays, it is becoming increasingly common to deploy sensors in public buildings or homes with the aim of obtaining data from the environment and taking decisions that help to save energy. Many of the current state-of-the-art systems make decisions considering solely the environmental factors that cause the consumption of energy. These systems are successful at optimizing energy consumption; however, they do not adapt to the preferences of users and their comfort. Any system that is to be used by end-users should consider factors that affect their wellbeing. Thus, this article proposes an energy-saving system, which apart from considering the environmental conditions also adapts to the preferences of inhabitants. The architecture is based on a Multi-Agent System (MAS), its agents use Agreement Technologies (AT) to perform a negotiation process between the comfort preferences of the users and the degree of optimization that the system can achieve according to these preferences. A case study was conducted in an office building, showing that the proposed system achieved average energy savings of 17.15%. PMID:29783768
Generating a Corpus of Mobile Forensic Images for Masquerading user Experimentation.
Guido, Mark; Brooks, Marc; Grover, Justin; Katz, Eric; Ondricek, Jared; Rogers, Marcus; Sharpe, Lauren
2016-11-01
The Periodic Mobile Forensics (PMF) system investigates user behavior on mobile devices. It applies forensic techniques to an enterprise mobile infrastructure, utilizing an on-device agent named TractorBeam. The agent collects changed storage locations for later acquisition, reconstruction, and analysis. TractorBeam provides its data to an enterprise infrastructure that consists of a cloud-based queuing service, relational database, and analytical framework for running forensic processes. During a 3-month experiment with Purdue University, TractorBeam was utilized in a simulated operational setting across 34 users to evaluate techniques to identify masquerading users (i.e., users other than the intended device user). The research team surmises that all masqueraders are undesirable to an enterprise, even when a masquerader lacks malicious intent. The PMF system reconstructed 821 forensic images, extracted one million audit events, and accurately detected masqueraders. Evaluation revealed that developed methods reduced storage requirements 50-fold. This paper describes the PMF architecture, performance of TractorBeam throughout the protocol, and results of the masquerading user analysis. © 2016 American Academy of Forensic Sciences.
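A drastically simplified stand-in for the change-collection step is to hash fixed-size blocks of a storage image and report which blocks differ between acquisitions; the block size and hash choice below are our assumptions, not TractorBeam's:

```python
import hashlib

def changed_blocks(old, new, block_size=4096):
    """Hash fixed-size blocks of two equal-length storage images (bytes)
    and return the indices of blocks whose content differs."""
    def digests(data):
        return [hashlib.sha256(data[i:i + block_size]).digest()
                for i in range(0, len(data), block_size)]
    return [i for i, (a, b) in enumerate(zip(digests(old), digests(new)))
            if a != b]
```

Shipping only block digests and changed-block contents, rather than whole images, is one way a system like this could achieve the large storage reductions reported above.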
Recognition and localization of relevant human behavior in videos
NASA Astrophysics Data System (ADS)
Bouma, Henri; Burghouts, Gertjan; de Penning, Leo; Hanckmann, Patrick; ten Hove, Johan-Martijn; Korzec, Sanne; Kruithof, Maarten; Landsmeer, Sander; van Leeuwen, Coen; van den Broek, Sebastiaan; Halma, Arvid; den Hollander, Richard; Schutte, Klamer
2013-06-01
Ground surveillance is normally performed by human assets, since it requires visual intelligence. However, especially for military operations, this can be dangerous and is very resource intensive. Therefore, unmanned autonomous visual-intelligence systems are desired. In this paper, we present an improved system that can recognize actions of a human and interactions between multiple humans. Central to the new system is our agent-based architecture. The system is trained on thousands of videos and evaluated on realistic persistent surveillance data in the DARPA Mind's Eye program, with hours of videos of challenging scenes. The results show that our system is able to track the people, detect and localize events, and discriminate between different behaviors, and that it performs 3.4 times better than our previous system.
Marsh, J. N.; Wallace, K. D.; McCarthy, J. E.; Wickerhauser, M. V.; Maurizi, B. N.; Lanza, G. M.; Wickline, S. A.; Hughes, M. S.
2011-01-01
Previously, we reported new methods for ultrasound signal characterization using entropy, H_f; a generalized entropy, the Renyi entropy, I_f(r); and a limiting form of the Renyi entropy suitable for real-time calculation, I_f,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Renyi entropy, I_f,∞, is applied to the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to reliably detect the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model. PMID:20679020
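For the discrete textbook analogue of these quantities (the paper works with densities of received signal values, so this is only an analogy), the Renyi entropy and its fast-to-compute r → ∞ limit can be evaluated as:

```python
import numpy as np

def renyi_entropy(p, r):
    """Renyi entropy H_r = log(sum_i p_i**r) / (1 - r) of a discrete
    distribution p. r = 1 recovers the Shannon entropy; r -> infinity
    gives the cheap limiting form -log(max_i p_i)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isinf(r):
        return -np.log(p.max())
    if r == 1:
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** r)) / (1.0 - r))
```

The limiting form depends only on the largest probability (for a density, the largest mode), which is why it is cheap enough for real-time imaging pipelines.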
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI... Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance...
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI... Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance...
Evaluation of Radioresponse and Radiosensitizers in Glioblastoma Organotypic Cultures.
Bayin, N Sumru; Ma, Lin; Placantonakis, Dimitris G; Barcellos-Hoff, Mary Helen
2018-01-01
Glioblastoma (GBM), a deadly primary brain malignancy, manifests pronounced radioresistance. Identifying agents that improve the sensitivity of tumor tissue to radiotherapy is critical for improving patient outcomes. The response to ionizing radiation is regulated by both cell-intrinsic and -extrinsic mechanisms. In particular, the tumor microenvironment is known to promote radioresistance in GBM. Therefore, model systems used to test radiosensitizing agents need to take into account the tumor microenvironment. We recently showed that GBM explant cultures represent an adaptable ex vivo platform for rapid and personalized testing of radiosensitizers. These explants preserve the cellular composition and tissue architecture of parental patient tumors and therefore capture the microenvironmental context that critically determines the response to radiotherapy. This chapter focuses on the detailed protocol for testing candidate radiosensitizing agents in GBM explants.
A Robust Scalable Transportation System Concept
NASA Technical Reports Server (NTRS)
Hahn, Andrew; DeLaurentis, Daniel
2006-01-01
This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of agent-based modeling and network theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.
Decision insight into stakeholder conflict for ERN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siirola, John; Tidwell, Vincent Carroll; Benz, Zachary O.
Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have systemic-level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by the use of traditional and advanced processes and tools for expedited data capture. The specific objectives of the project are: (1) develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent-based modeling; (2) develop an iterative, participatory modeling process, supported by traditional and advanced data capture techniques, that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes; and (3) examine potential applications of this technology and process. The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology.
Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement: one a standard interface involving radio buttons, slider bars, graphs, and plots, the other an immersive serious-gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision making in the context of complex water resource management issues. Potential applications of this architecture and its capacity to lead to enhanced understanding and decision making were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.
The research-design interaction: lessons learned from an evidence-based design studio.
Haq, Saif; Pati, Debajyoti
2010-01-01
As evidence-based design (EBD) emerges as a model of design practice, considerable attention has been given to its research component. However, this overshadows another essential component of EBD: the change agent, namely the designer. EBD introduced a new skill set to the practitioner: the ability to interact with scientific evidence. Industry sources suggest adoption of the EBD approach across a large number of design firms. How comfortable are these designers in integrating research with design decision making? Optimizing the interaction between the primary change agent (the designer) and the evidence is crucial to producing the desired outcomes. Preliminary to examining this question, an architectural design studio was used as a surrogate environment to examine how designers interact with evidence. Twelve students enrolled in a healthcare EBD studio during the spring of 2009. A three-phase didactic structure was adopted: knowing a hospital, knowing the evidence, and designing with knowledge and evidence. Products of the studio and questionnaire responses from the students were used as the data for analysis. The data suggest that optimization of the research-design relationship warrants consideration in four domains: (1) a knowledge structure that is easy to comprehend; (2) phase-complemented representation of evidence; (3) access to context and precedence information; and (4) a designer-friendly vocabulary.
A confocal microscopy-based atlas of tissue architecture in the tapeworm Hymenolepis diminuta.
Rozario, Tania; Newmark, Phillip A
2015-11-01
Tapeworms are pervasive and globally distributed parasites that infect millions of humans and livestock every year, and are the causative agents of two of the 17 neglected tropical diseases prioritized by the World Health Organization. Studies of tapeworm biology and pathology are often encumbered by the complex life cycles of disease-relevant tapeworm species that infect hosts such as foxes, dogs, cattle, pigs, and humans. Thus, studies of laboratory models can help overcome the practical, ethical, and cost-related difficulties faced by tapeworm parasitologists. The rat intestinal tapeworm Hymenolepis diminuta is easily reared in the laboratory and has the potential to enable modern molecular-based experiments that will greatly contribute to our understanding of multiple aspects of tapeworm biology, such as growth and reproduction. As part of our efforts to develop molecular tools for experiments on H. diminuta, we have characterized a battery of lectins, antibodies, and common stains that label different tapeworm tissues and organ structures. Using confocal microscopy, we have assembled an "atlas" of H. diminuta organ architecture that will be a useful resource for helminthologists. The methodologies we describe will facilitate characterization of loss-of-function perturbations using H. diminuta. This toolkit will enable a greater understanding of fundamental tapeworm biology that may elucidate new therapeutic targets toward the eradication of these parasites. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
A standardized SOA for clinical data interchange in a cardiac telemonitoring environment.
Gazzarata, Roberta; Vergari, Fabio; Cinotti, Tullio Salmon; Giacomini, Mauro
2014-11-01
Care of chronic cardiac patients requires information interchange between patients' homes, clinical environments, and the electronic health record. Standards are emerging to support clinical information collection, exchange and management and to overcome information fragmentation and actor delocalization. Heterogeneity of information sources at patients' homes calls for open solutions to collect and accommodate multidomain information, including environmental data. Based on the experience gained in a European Research Program, this paper presents an integrated and open approach for clinical data interchange in cardiac telemonitoring applications. This interchange is supported by the use of standards following the indications provided by the national authorities of the countries involved. Taking into account the requirements provided by the medical staff involved in the project, the authors designed and implemented a prototype middleware, based on a service-oriented architecture approach, to give a structured and robust tool to congestive heart failure patients for their personalized telemonitoring. The middleware is represented by a health record management service, whose interface is compliant with the healthcare services specification project Retrieve, Locate and Update Service standard (Level 0), which allows communication between the agents involved through the exchange of Clinical Document Architecture Release 2 documents. Three performance tests were carried out and showed that the prototype completely fulfilled all requirements indicated by the medical staff; however, certain aspects, such as authentication, security and scalability, should be analyzed in depth within a future engineering phase.
Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883
Mano, J F; Vaz, C M; Mendes, S C; Reis, R L; Cunha, A M
1999-12-01
It has been shown that blends of starch with a poly(ethylene-vinyl-alcohol) copolymer, EVOH, designated as SEVA-C, present an interesting combination of mechanical, degradation and biocompatible properties, specially when filled with hydroxyapatite (HA). Consequently, they may find a range of applications in the biomaterials field. This work evaluated the influence of HA fillers and of blowing agents (used to produce porous architectures) over the viscoelastic properties of SEVA-C polymers, as seen by dynamic mechanical analysis (DMA), in order to speculate on their performances when withstanding cyclic loading in the body. The composite materials presented a promising performance under dynamic mechanical solicitation conditions. Two relaxations were found being attributed to the starch and EVOH phases. The EVOH relaxation process may be very useful in vivo improving the implants performance under cyclic loading. DMA results also showed that it is possible to produce SEVA-C compact surface/porous core architectures with a mechanical performance similar to that of SEVA-C dense materials. This may allow for the use of these materials as bone replacements or scaffolds that must withstand loads when implanted. Copyright 1999 Kluwer Academic Publishers
Jupiter Europa Orbiter Architecture Definition Process
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Shishko, Robert
2011-01-01
The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation on developing models with systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation and Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps
Cognitive architectures and autonomy: Commentary and Response
NASA Astrophysics Data System (ADS)
2012-11-01
Editors: Włodzisław Duch, Ah-Hwee Tan, Stan Franklin. Contributions: "Autonomy for AGI" (Cristiano Castelfranchi); "Are Disembodied Agents Really Autonomous?" (Antonio Chella); "The Perception-…-Action Cycle Cognitive Architecture and Autonomy: the View from the Brain" (Vassilis Cutsuridis); "Autonomy Requires Creativity and Meta-Learning" (Włodzisław Duch); "Meta Learning, Change of Internal Workings, and LIDA" (Ryan McCall, Stan Franklin); "An Appeal for Declaring Research Goals" (Brandon Rohrer); "The Development of Cognition as the Basis for Autonomy" (Frank van der Velde); "Autonomy and Intelligence" (Pei Wang); "Autonomy, Isolation, and Collective Intelligence" (Nikolaos Mavridis); "Response to Comments" (Kristinn R. Thórisson, Helgi Páll Helgasson).
NASA Astrophysics Data System (ADS)
Xing, Ling-Bao; Hou, Shu-Fen; Zhou, Jin; Zhang, Jing-Li; Si, Weijiang; Dong, Yunhui; Zhuo, Shuping
2015-10-01
In the present work, we demonstrate an efficient and facile strategy to fabricate three-dimensional (3D) nitrogen-doped graphene aerogels (NGAs) based on melamine, which serves as a reducing and functionalizing agent of graphene oxide (GO) in an aqueous medium with ammonia. Benefiting from well-defined and cross-linked 3D porous network architectures, the supercapacitor based on the NGAs exhibited a high specific capacitance of 170.5 F g-1 at 0.2 A g-1, and this capacitance also showed good electrochemical stability and a high degree of reversibility in the repetitive charge/discharge cycling test. More interestingly, the prepared NGAs further exhibited high adsorption capacities and high recycling performance toward several metal ions such as Pb2+, Cu2+ and Cd2+. Moreover, the hydrophobic carbonized nitrogen-doped graphene aerogels (CNGAs) showed outstanding adsorption and recycling performance for the removal of various oils and organic solvents.
Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J
2008-05-01
Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. Here we use a multidisciplinary approach at the convergence of neuroscience, dynamical system theory, and autonomous robotics to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering-avoidance and energy-saving properties when compared with a simple if-then-else decision rule.
Evolving neural networks for strategic decision-making problems.
Kohl, Nate; Miikkulainen, Risto
2009-04-01
Evolution of neural networks, or neuroevolution, has been a successful approach to many low-level control problems such as pole balancing, vehicle control, and collision warning. However, certain types of problems, such as those involving strategic decision-making, have remained difficult for neuroevolution to solve. This paper evaluates the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. A method for measuring fracture using the concept of function variation is proposed and, based on this concept, two methods for dealing with fracture are examined: neurons with local receptive fields, and refinement based on a cascaded network architecture. Experiments in several benchmark domains are performed to evaluate how different levels of fracture affect the performance of neuroevolution methods, demonstrating that these two modifications improve performance significantly. These results form a promising starting point for expanding neuroevolution to strategic tasks.
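The idea of fracture as function variation in the abstract above can be illustrated with a crude sketch; this formulation (counting optimal-action changes along an ordered state path) is my simplification, not necessarily the paper's exact measure.

```python
# Crude illustration of fracture as function variation: count how often the
# optimal action changes between neighboring states. A high count means the
# correct action varies discontinuously, i.e. the problem is fractured.
# This is a simplified stand-in for the paper's measure.

def fracture_score(best_action, states):
    """Variation of the optimal-action function along an ordered state path."""
    return sum(1 for s0, s1 in zip(states, states[1:])
               if best_action(s0) != best_action(s1))

states = list(range(100))

# Smooth problem: the best action changes only once across the state space.
smooth = lambda s: "left" if s < 50 else "right"
# Fractured problem: the best action flips between adjacent states.
fractured = lambda s: "left" if s % 2 == 0 else "right"

print(fracture_score(smooth, states))     # -> 1
print(fracture_score(fractured, states))  # -> 99
```

A smooth policy surface is easy for a global function approximator to evolve; the fractured one demands local structure, which motivates the paper's local receptive fields and cascaded refinement.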
Retrospective revaluation in sequential decision making: a tale of two systems.
Gershman, Samuel J; Markman, Arthur B; Otto, A Ross
2014-02-01
Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.
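A cooperative arrangement of the kind the abstract above argues for, where the model-based system trains the model-free controller by replaying simulated experience, can be sketched in the spirit of Sutton's Dyna. The Q-learning details, parameters, and toy environment below are illustrative assumptions, not the paper's experimental design.

```python
import random

# Hedged Dyna-style sketch of a cooperative two-system architecture: a
# model-free system (Q-table) controls behavior, while a model-based system
# trains it by replaying transitions from a learned world model.

def dyna_q(env_step, n_states, n_actions, episodes=50, n_replay=10,
           alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (s, a) -> (r, s2): the model-based system's world model

    def update(s, a, r, s2):
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # The model-free system controls behavior (epsilon-greedy,
            # breaking ties among equally valued actions at random).
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                best = max(Q[s])
                a = rng.choice([i for i in range(n_actions) if Q[s][i] == best])
            r, s2, done = env_step(s, a)
            update(s, a, r, s2)        # learn from real experience
            model[(s, a)] = (r, s2)    # update the world model
            # The model-based system trains the model-free system by
            # replaying simulated transitions drawn from the model.
            for _ in range(n_replay):
                (ps, pa), (pr, ps2) = rng.choice(sorted(model.items()))
                update(ps, pa, pr, ps2)
            s = s2
    return Q

# Toy chain environment: action 1 moves right toward a reward at state 4.
def chain(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == 4 else 0.0), s2, s2 == 4

Q = dyna_q(chain, n_states=5, n_actions=2)
print("Q at state 0:", Q[0])  # the learned policy prefers moving right
```

The replay loop is the cooperative element: the model-free values that drive behavior are shaped offline by the model, rather than the two systems competing for control.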
Toward Realism in Human Performance Simulation
2004-01-01
toward the development of improved human-like performance of synthetic agents. However, several serious problems continue to challenge researchers and... developers. Developers have insufficient behavioral knowledge. To date, models of emotivity and behavior that have been commercialized still tend... (Bindiganavale, 1999). There has even been significant development of architectures to produce animated characters that react appropriately to a small
An agent architecture for an integrated forest ecosystem management decision support system
Donald Nute; Walter D. Potter; Mayukh Dass; Astrid Glende; Frederick Maier; Hajime Uchiyama; Jin Wang; Mark Twery; Peter Knopp; Scott Thomasma; H. Michael Rauscher
2003-01-01
A wide variety of software tools are available to support decisions in the management of forest ecosystems. These tools include databases, growth and yield models, wildlife models, silvicultural expert systems, financial models, geographic information systems, and visualization tools. Typically, each of these tools has its own complex interface and data format. To...
USDA-ARS?s Scientific Manuscript database
The current research was directed at determining the impact of light intensity on the architecture, amino acid content and trichome density and characteristics of Tropical Soda Apple (TSA), Solanum viarum (Solanaceae). TSA plants were grown in a greenhouse either covered with a shade cloth (75% bloc...
Parallel Logic Programming Architecture
1990-04-01
Section 3.1. A STATIC ALLOCATION SCHEME (SAS). Methods that have been used for decomposing distributed problems in artificial intelligence... multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial... for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during
Designing a Women's Refuge: An Interdisciplinary Health, Architecture and Landscape Collaboration
ERIC Educational Resources Information Center
Dean, Suzanne; Williams, Claire; Donnelly, Samantha; Levett-Jones, Tracy
2017-01-01
University programs are currently faced with a number of challenges: how to engage students as active learners, how to ensure graduates are "work ready" with broad and relevant professional skills, and how to support students to see their potential as agents of social change and contributors to social good. This paper presents the…
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART... environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
Managing Communications with Experts in Geographically Distributed Collaborative Networks
2009-03-01
agent architectures, and management of sensor-unmanned vehicle decision maker self-organizing environments. Although CENETIX has its beginnings... understanding how everything in a complex system is interconnected. Additionally, environmental factors that impact the management of communications with... unrestricted warfare environment. In "Unconventional Insights for Managing Stakeholder Trust", Pirson, et al. (2008) emphasize the challenges of managing
Overcoming Navigational Design in a VLE: Students as Agents of Change
ERIC Educational Resources Information Center
Sadoux, Marion; Rzycka, Dorota; Jones, Mizuho; Lopez, Joaquin
2016-01-01
This paper focuses on the outcomes of a project funded by the Teaching and Learning Enhancement Office at the University of Nottingham Ningbo China (UNNC). Students were recruited to design a new navigational architecture for the Moodle pages of the Language Centre. They received some training on the key principles of distributive learning and…
Study on the contract characteristics of Internet architecture
NASA Astrophysics Data System (ADS)
Fu, Chuan; Zhang, Guoqing; Yang, Jing; Liu, Xiaona
2011-11-01
The importance of Internet architecture goes beyond the technical aspects. The architecture of the Internet has a profound influence on the Internet-based economy in terms of how profits are shared by different market participants (Internet Service Providers, Internet Content Providers), since it is the physical foundation upon which profit-sharing contracts are derived. In order to facilitate the continuing growth of the Internet, it is necessary to systematically study factors that curtail the Internet-based economy, including the existing Internet architecture. In this paper, we used transaction cost economics and contract economics as new tools to analyse the contracts derived from the current Internet architecture. This study sheds light on how the macro characteristics of Internet architecture affect the microeconomic decisions of market participants. Based on the existing Internet architecture, we discuss the possibility of promoting the Internet-based economy by encouraging users to connect their private stub networks to the Internet and giving them greater rights of self-governance.
Marshall Application Realignment System (MARS) Architecture
NASA Technical Reports Server (NTRS)
Belshe, Andrea; Sutton, Mandy
2010-01-01
The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. 
The MARS Architecture key stakeholders are most interested in Phase 3 because this is where the data analysis, scoring, and recommendation capability is realized. Stakeholders want to see the benefits derived from reducing the steady-state application base and identify opportunities for portfolio performance improvement and application realignment.
Formal Foundations for the Specification of Software Architecture.
1995-03-01
Architectures Formally: A Case-Study Using KWIC." Kestrel Institute, Palo Alto, CA 94304, April 1994. Kang, Kyo C. Feature-Oriented Domain Analysis (FODA)... 6.3.5 Constraint-Based Architectures; 6.4 Summary; VII. Analysis of Process-Based... between these architecture theories were investigated. A feasibility analysis on an image processing application demonstrated that architecture theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
Oubbati, Mohamed; Kord, Bahram; Koprinkova-Hristova, Petia; Palm, Günther
2014-04-01
A recent trend in artificial intelligence suggests that intelligence must be seen as a result of the interaction between brains, bodies, and environments. This view implies that designing sophisticated behaviour requires a primary focus on how agents are functionally coupled to their environments. Under this perspective, we present early results with the application of reservoir computing as an efficient tool to understand how behaviour emerges from interaction. Specifically, we present reservoir computing models, inspired by imitation learning designs, that extract the essential components of behaviour resulting from agent-environment interaction dynamics. Experimental results using a mobile robot are reported to validate the learning architectures.
Behavioral plasticity through the modulation of switch neurons.
Vassiliades, Vassilis; Christodoulou, Chris
2016-02-01
A central question in artificial intelligence is how to design agents capable of switching between different behaviors in response to environmental changes. Taking inspiration from neuroscience, we address this problem by utilizing artificial neural networks (NNs) as agent controllers, and mechanisms such as neuromodulation and synaptic gating. The novel aspect of this work is the introduction of a type of artificial neuron we call "switch neuron". A switch neuron regulates the flow of information in NNs by selectively gating all but one of its incoming synaptic connections, effectively allowing only one signal to propagate forward. The allowed connection is determined by the switch neuron's level of modulatory activation which is affected by modulatory signals, such as signals that encode some information about the reward received by the agent. An important aspect of the switch neuron is that it can be used in appropriate "switch modules" in order to modulate other switch neurons. As we show, the introduction of the switch modules enables the creation of sequences of gating events. This is achieved through the design of a modulatory pathway capable of exploring in a principled manner all permutations of the connections arriving on the switch neurons. We test the model by presenting appropriate architectures in nonstationary binary association problems and T-maze tasks. The results show that for all tasks, the switch neuron architectures generate optimal adaptive behaviors, providing evidence that the switch neuron model could be a valuable tool in simulations where behavioral plasticity is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
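The gating mechanism described in the abstract above can be caricatured in a few lines; this toy interpretation (an integer modulatory level stepping through incoming connections) is an illustrative assumption, not the authors' exact neuron model.

```python
# Toy interpretation of a "switch neuron": all but one incoming synaptic
# connection are gated, and a modulatory signal (e.g. reward feedback)
# advances which connection is allowed to propagate. Illustrative only.

class SwitchNeuron:
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.activation = 0  # modulatory activation, quantized to steps

    @property
    def active(self):
        # Index of the single un-gated incoming connection.
        return self.activation % self.n_inputs

    def modulate(self, steps=1):
        # A modulatory signal (e.g. absence of reward) advances the switch,
        # gating the current connection and un-gating the next one.
        self.activation += steps

    def forward(self, inputs):
        # Only the selected input propagates forward; the rest are gated.
        return inputs[self.active]

sw = SwitchNeuron(3)
print(sw.forward([0.5, 0.9, 0.1]))  # connection 0 is active -> 0.5
sw.modulate()                        # e.g. no reward received: switch
print(sw.forward([0.5, 0.9, 0.1]))  # connection 1 is active -> 0.9
```

Wiring the `modulate` input of one such neuron to the wrap-around of another is, roughly, what the abstract's "switch modules" do to step through all permutations of gating events.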
Fairbanks, Benjamin D; Gunatillake, Pathiraja A; Meagher, Laurence
2015-08-30
RAFT-mediated polymerization, providing control over polymer length and architecture as well as facilitating post polymerization modification of end groups, has been applied to virtually every facet of biomedical materials research. RAFT polymers have seen particularly extensive use in drug delivery research. Facile generation of functional and telechelic polymers permits straightforward conjugation to many therapeutic compounds while synthesis of amphiphilic block copolymers via RAFT allows for the generation of self-assembled structures capable of carrying therapeutic payloads. With the large and growing body of literature employing RAFT polymers as drug delivery aids and vehicles, concern over the potential toxicity of RAFT-derived polymers has been raised. While literature exploring this complication is relatively limited, the emerging consensus may be summed up in three parts: toxicity of polymers generated with dithiobenzoate RAFT agents is observed at high concentrations but not with polymers generated with trithiocarbonate RAFT agents; even for polymers generated with dithiobenzoate RAFT agents, most reported applications call for concentrations well below the toxicity threshold; and RAFT end-groups may be easily removed via any of a variety of techniques that leave the polymer with no intrinsic toxicity attributable to the mechanism of polymerization. The low toxicity of RAFT-derived polymers and the ability to remove end groups via straightforward and scalable processes make RAFT technology a valuable tool for practically any application in which a polymer of defined molecular weight and architecture is desired. Copyright © 2015. Published by Elsevier B.V.
Mission Operations with an Autonomous Agent
NASA Technical Reports Server (NTRS)
Pell, Barney; Sawyer, Scott R.; Muscettola, Nicola; Smith, Benjamin; Bernard, Douglas E.
1998-01-01
The Remote Agent (RA) is an Artificial Intelligence (AI) system which automates some of the tasks normally reserved for human mission operators and performs these tasks autonomously on-board the spacecraft. These tasks include activity generation, sequencing, spacecraft analysis, and failure recovery. The RA will be demonstrated as a flight experiment on Deep Space One (DS1), the first deep space mission of NASA's New Millennium Program (NMP). As we moved from prototyping into actual flight code development and teamed with ground operators, we made several major extensions to the RA architecture to address the broader operational context in which the RA would be used. These extensions support ground operators and the RA sharing a long-range mission profile with facilities for asynchronous ground updates; support ground operators monitoring and commanding the spacecraft at multiple levels of detail simultaneously; and enable ground operators to provide additional knowledge to the RA, such as parameter updates, model updates, and diagnostic information, without interfering with the activities of the RA or leaving the system in an inconsistent state. The resulting architecture supports incremental autonomy, in which a basic agent can be delivered early and then used in an increasingly autonomous manner over the lifetime of the mission. It also supports variable autonomy, as it enables ground operators to benefit from autonomy when they want it, but does not inhibit them from obtaining a detailed understanding and exercising tighter control when necessary. These issues are critical to the successful development and operation of autonomous spacecraft.
Zhong, Ziyi; Ng, Vivien; Luo, Jizhong; Teh, Siew-Pheng; Teo, Jaclyn; Gedanken, Aharon
2007-05-22
Copper oxide with various morphologies was synthesized by the hydrolysis of Cu(ac)2 with urea under mild hydrothermal conditions. In the synthesis, a series of organic amines with one or two amine groups (monoamine and diamine), including isobutylamine, octylamine (OLA), dodecylamine, octadecylamine (monoamines), ethylenediamine dihydrochloride, and hexamethylenediamine (diamines), was used as the "structure-directing agent". The monoamines led to the formation of one-dimensional (1D) aggregates of the copper oxide precursor particles (Pre-CuO), while the diamines led to the formation of two-dimensional (2D) aggregates. In both cases, the shorter carbon-chain amine molecules showed a stronger structure-directing function than that of the longer carbon-chain amine molecules. Next, in a series of syntheses, OLA was selected for further study, and the experimental parameters were systematically manipulated. When the hydrolysis was adjusted to a very slow rate by coupling the hydrolysis reaction with an esterification reaction, 1D aggregates of Pre-CuO were formed; when the hydrolysis rate was in the middle range, spherical Pre-CuO architectures composed of smaller linear aggregates were formed. However, under the high hydrolysis rates achieved by increasing the precipitation agent (urea) or by conducting the reaction at high temperatures (≥120 degrees C), only Pre-CuO nanoparticles with a featureless morphology were formed. The formed spherical Pre-CuO architectures can be converted to a porous structure (CuOx) after removing the OLA molecules via calcination. Compared to the 1D and 2D aggregates, this porous architecture is highly thermally stable and did not collapse even after calcination at 500 degrees C. Preliminary results showed that the porous structure can be used both as a catalyst support and as a catalyst for the oxidation of CO at low temperatures.
Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B
2015-03-04
The limited efficiency of current air traffic systems will require a next generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new paradigm of navigation and air-traffic procedures, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers' indications, etc. The HCI is thus intended to enhance situation awareness and decision-making through the pilot's cockpit. This work considers SATS as a large-scale distributed system with uncertainty in a dynamic environment. Therefore, an approach based on multi-agent systems is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.
Hardware Architecture Study for NASA's Space Software Defined Radios
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John
2008-01-01
This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), in addition to flexible and tunable radio frequency (RF) front-ends, to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between a component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen
2008-01-01
This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
A reference architecture for integrated EHR in Colombia.
de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd
2011-01-01
The implementation of national EHR infrastructures must start with a detailed definition of the overall structure and behavior of the EHR system (the system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards, including terminologies and ontologies. The GCM provides an architectural framework created for analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.
Integrated multimedia medical data agent in E-health.
di Giacomo, P; Ricci, Fabrizio L; Bocchi, Leonardo
2006-01-01
E-Health is producing a great impact on the distribution of health service information, both within hospitals and to the public. Previous research has addressed the development of system architectures with the aim of integrating distributed and heterogeneous medical information systems. Easing the sharing and management of medical data, and providing timely access to these data, is a critical need for health care providers. We propose a client-server agent that provides a portal to every permitted information system of the hospital, comprising PACS, RIS and HIS, via the intranet and the Internet. Our proposed agent enables remote access into the usually closed information systems of the hospital, together with a server that indexes all the medical data and allows in-depth and complex search queries for data retrieval.
DyHAP: Dynamic Hybrid ANFIS-PSO Approach for Predicting Mobile Malware.
Afifi, Firdaus; Anuar, Nor Badrul; Shamshirband, Shahaboddin; Choo, Kim-Kwang Raymond
2016-01-01
To deal with the large number of malicious mobile applications (e.g. mobile malware), a number of malware detection systems have been proposed in the literature. In this paper, we propose a hybrid method to find the optimum parameters that can be used to facilitate mobile malware identification. We also present a multi-agent system architecture comprising three system agents (i.e. sniffer, extraction and selection agents) to capture and manage the pcap file for the data preparation phase. In our hybrid approach, we combine an adaptive neuro-fuzzy inference system (ANFIS) and particle swarm optimization (PSO). Evaluations using data captured on a real-world Android device and the MalGenome dataset demonstrate the effectiveness of our approach in comparison to two hybrid optimization methods, differential evolution (ANFIS-DE) and ant colony optimization (ANFIS-ACO).
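The particle swarm optimization half of the hybrid can be illustrated with a minimal sketch. This is a generic PSO minimizing a toy objective, not the authors' ANFIS parameter search; the hyperparameters (inertia `w`, cognitive/social weights `c1`/`c2`, bounds) and the sphere objective are illustrative assumptions.

```python
# Minimal particle swarm optimization (PSO) sketch; illustrative only.
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Random initial positions; zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In an ANFIS-PSO setting, the objective would instead score a candidate set of membership-function parameters by the resulting model error.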
Asteroid Exploration with Autonomic Systems
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Rash, James; Rouff, Christopher; Hinchey, Mike
2004-01-01
NASA is studying advanced technologies for a future robotic exploration mission to the asteroid belt. The prospective ANTS (Autonomous Nano Technology Swarm) mission comprises autonomous agents, including worker agents (small spacecraft) designed to cooperate in asteroid exploration under the overall authority of at least one ruler agent (a larger spacecraft) whose goal is to cause science data to be returned to Earth. The ANTS team (ruler plus workers and messenger agents), but not necessarily any individual on the team, will exhibit behaviors that qualify it as an autonomic system, where an autonomic system is defined as a system that self-reconfigures, self-optimizes, self-heals, and self-protects. Autonomic system concepts lead naturally to realistic, scalable architectures rich in capabilities and behaviors. In-depth consideration of a major mission like ANTS in terms of autonomic systems brings new insights into alternative definitions of autonomic behavior. This paper gives an overview of the ANTS mission and discusses the autonomic properties of the mission.
A smart room for hospitalised elderly people: essay of modelling and first steps of an experiment.
Rialle, V; Lauvernay, N; Franco, A; Piquard, J F; Couturier, P
1999-01-01
We present a modelling study and the first steps of an experiment with a smart room for hospitalised elderly people. The system aims at detecting falls and sicknesses, and implements four main functions: perception of the patient and environment through sensors; reasoning from perceived events and patient clinical findings; action by way of alarm triggering and message passing to medical staff; and adaptation to various patient profiles, sensor layouts, house fixtures and architecture. It includes a physical multisensory device located in the patient's room, and a multi-agent system for fall detection and alarm triggering. This system encompasses a perception agent and a reasoning agent. The latter has two complementary capacities implemented by sub-agents: deduction of the type of alarm from incoming events, and knowledge induction from recorded events. The system has been tested with a few patients in a real clinical situation, and the first experiment provides encouraging results, which are described in detail.
Architecture as animate landscape: circular shrines in the ancient Maya lowlands.
Harrison-Buck, Eleanor
2012-01-01
In this study, I develop a theory of landscape archaeology that incorporates the concept of “animism” as a cognitive approach. Current trends in anthropology are placing greater emphasis on indigenous perspectives, and in recent decades animism has seen a resurgence in anthropological theory. As a means of relating in (not to) one's world, animism is a mode of thought that has direct bearing on landscape archaeology. Yet, Americanist archaeologists have been slow to incorporate this concept as a component of landscape theory. I consider animism and Nurit Bird-David's (1999) theory of “relatedness” and how such perspectives might be expressed archaeologically in Mesoamerica. I examine the distribution of marine shells and cave formations that appear incorporated as architectural elements on ancient Maya circular shrine architecture. More than just “symbols” of sacred geography, I suggest these materials represent living entities that animate shrines through their ongoing relationships with human and other-than-human agents in the world.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
2016-01-01
The 1998 paper by Martin Mühlenbrock, Frank Tewissen, and myself introduced a multi-agent architecture and a component engineering approach for building open distributed learning environments to support group learning in different types of classroom settings. It took up prior work on "multiple student modeling" as a method to configure…
USDA-ARS?s Scientific Manuscript database
A Myxobolus sp., morphologically resembling M. toyamai, M. longisporus, and M. koi, was isolated from the gills of a koi, Cyprinus carpio that died in an ornamental pond. Large plasmodia were localized within lamellae, causing severe disruption of the normal branchial architecture, sufficient to com...
CARA: Cognitive Architecture for Reasoning About Adversaries
2012-01-20
synthesis approach taken here the KIDS principle (Keep It Descriptive, Stupid) applies, and agents and organizations are profiled in great detail ... developed two algorithms to make forecasts about adversarial behavior. We developed game-theoretical approaches to reason about group behavior. We ... to automatically make forecasts about group behavior together with methods to quantify the uncertainty inherent in such forecasts; • Developed
NASA Astrophysics Data System (ADS)
Liu, Tingzhi; Li, Yangyang; Zhang, Hao; Wang, Min; Fei, Xiaoyan; Duo, Shuwang; Chen, Ying; Pan, Jian; Wang, Wei
2015-12-01
Different flower-like ZnO hierarchical architectures were prepared by tartaric acid (TA) assisted hydrothermal synthesis; in particular, four flower-like ZnO nanostructures were obtained simultaneously under the same reaction condition. The cauliflower-like ZnO is assembled from spherical nanoparticles, while the chrysanthemum-like and other flower-like ZnO nanostructures are assembled from hexagonal rods/prisms with tips ranging from planar to semi-pyramidal to pyramidal. TA acts as a capping agent and structure-directing agent during the synthesis. All ZnO samples possess the hexagonal wurtzite structure. The PL spectra can be tuned by changing the TA concentration. XRD, PL and Raman spectra confirmed that oxygen vacancies mainly come from the ZnO surface. The flower-like samples of 1:4.5 and 1:3, with the largest aspect ratios, have the highest photocatalytic performance: they decompose 85% of MB within 60 min. Combining PL Gaussian fitting with K, the higher the oxygen-vacancy content, the higher the photocatalytic activity. The enhanced photocatalytic performance is mainly induced by oxygen vacancies in ZnO. A possible formation mechanism and the growth and change process of the flower-like ZnO are proposed.
Architecture of security management unit for safe hosting of multiple agents
NASA Astrophysics Data System (ADS)
Gilmont, Tanguy; Legat, Jean-Didier; Quisquater, Jean-Jacques
1999-04-01
In such growing areas as remote applications in large public networks, electronic commerce, digital signatures, intellectual property and copyright protection, and even operating system extensibility, the hardware security level offered by existing processors is insufficient. They lack protection mechanisms that prevent the user from tampering with critical data owned by those applications. Some devices are exceptions, but have neither the processing power nor the memory to stand up to such applications (e.g. smart cards). This paper proposes an architecture for a secure processor in which the classical memory management unit is extended into a new security management unit. It allows ciphered code execution and ciphered data processing. An internal permanent memory can store cipher keys and critical data for several client agents simultaneously. The ordinary supervisor privilege scheme is replaced by a privilege inheritance mechanism that is better suited to operating system extensibility. The result is a secure processor that has hardware support for extensible multitask operating systems, and can be used both for general applications and for critical applications needing strong protection. The security management unit and the internal permanent memory can be added to an existing CPU core without loss of performance, and do not require the core to be modified.
Inhibition of Orexin Signaling Promotes Sleep Yet Preserves Salient Arousability in Monkeys.
Tannenbaum, Pamela L; Tye, Spencer J; Stevens, Joanne; Gotter, Anthony L; Fox, Steven V; Savitz, Alan T; Coleman, Paul J; Uslaner, Jason M; Kuduk, Scott D; Hargreaves, Richard; Winrow, Christopher J; Renger, John J
2016-03-01
In addition to enhancing sleep onset and maintenance, a desirable insomnia therapeutic agent would preserve healthy sleep's ability to wake and respond to salient situations while maintaining sleep during irrelevant noise. Dual orexin receptor antagonists (DORAs) promote sleep by selectively inhibiting wake-promoting neuropeptide signaling, unlike global inhibition of central nervous system excitation by gamma-aminobutyric acid (GABA)-A receptor (GABAaR) modulators. We evaluated the effect of DORA versus GABAaR modulators on underlying sleep architecture, ability to waken to emotionally relevant stimuli versus neutral auditory cues, and performance on a sleepiness-sensitive cognitive task upon awakening. DORA-22 and GABAaR modulators (eszopiclone, diazepam) were evaluated in adult male rhesus monkeys (n = 34) with continuous polysomnography recordings in crossover studies of sleep architecture, arousability to a classically conditioned salient versus neutral acoustical stimulus, and psychomotor vigilance task (PVT) performance if awakened. All compounds decreased wakefulness, but only DORA-22 sleep resembled unmedicated sleep in terms of underlying sleep architecture, preserved ability to awaken to salient-conditioned acoustic stimuli while maintaining sleep during neutral acoustic stimuli, and no cognitive impairment in PVT performance. Although GABAaR modulators induced lighter sleep, monkeys rarely woke to salient stimuli and PVT performance was impaired if monkeys were awakened. In nonhuman primates, DORAs' targeted mechanism for promoting sleep protects the ability to selectively arouse to salient stimuli and perform attentional tasks unimpaired, suggesting meaningful differentiation between a hypnotic agent that works through antagonizing orexin wake signaling versus the sedative hypnotic effects of the GABAaR modulator mechanism of action. © 2016 Associated Professional Sleep Societies, LLC.
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
Image Understanding Architecture
1991-09-01
architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize ... Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor ... information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers
Enterprise application architecture development based on DoDAF and TOGAF
NASA Astrophysics Data System (ADS)
Tao, Zhi-Gang; Luo, Yun-Feng; Chen, Chang-Xin; Wang, Ming-Zhe; Ni, Feng
2017-05-01
For the purpose of supporting the design and analysis of enterprise application architecture, here, we report a tailored enterprise application architecture description framework and its corresponding design method. The presented framework can effectively support service-oriented architecting and cloud computing by creating the metadata model based on the architecture content framework (ACF), the DoDAF metamodel (DM2) and the Cloud Computing Modelling Notation (CCMN). The framework also makes an effort to extend and improve the mapping between The Open Group Architecture Framework (TOGAF) application architectural inputs/outputs, deliverables and Department of Defense Architecture Framework (DoDAF)-described models. The roadmap of 52 DoDAF-described models is constructed by creating the metamodels of these described models and analysing the constraint relationships among metamodels. By combining the tailored framework and the roadmap, this article proposes a service-oriented enterprise application architecture development process. Finally, a case study is presented to illustrate the results of implementing the tailored framework in the Southern Base Management Support and Information Platform construction project using the development process proposed by the paper.
Marsh, Jon N; Wallace, Kirk D; McCarthy, John E; Wickerhauser, Mladen V; Maurizi, Brian N; Lanza, Gregory M; Wickline, Samuel A; Hughes, Michael S
2010-08-01
Previously, we reported new methods for ultrasound signal characterization using entropy, H_f; a generalized entropy, the Renyi entropy, I_f(r); and a limiting form of the Renyi entropy suitable for real-time calculation, I_f,∞. All of these quantities demonstrated significantly more sensitivity to subtle changes in scattering architecture than energy-based methods in certain settings. In this study, the real-time calculable limit of the Renyi entropy, I_f,∞, is applied for the imaging of angiogenic murine neovasculature in a breast cancer xenograft using a targeted contrast agent. It is shown that this approach may be used to reliably detect the accumulation of targeted nanoparticles at five minutes post-injection in this in vivo model.
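The Shannon and Renyi quantities named above can be sketched for a discretized signal. This is a generic illustration only: the histogram binning and the sine test signal are assumptions, and the paper's H_f and I_f,∞ are computed from the density of the received waveform f rather than from a simple sample histogram.

```python
# Shannon and Renyi entropies of a probability vector; illustrative sketch.
import math

def shannon_entropy(p):
    """H = -sum p_k * log(p_k) over the nonzero bins of a probability vector."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

def renyi_entropy(p, r):
    """I(r) = log(sum p_k^r) / (1 - r) for r != 1; tends to H as r -> 1."""
    return math.log(sum(pk ** r for pk in p if pk > 0)) / (1.0 - r)

def histogram_pmf(samples, n_bins=16):
    """Normalized histogram of the samples, as a probability vector."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0      # guard against a constant signal
    counts = [0] * n_bins
    for s in samples:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1
    total = float(len(samples))
    return [c / total for c in counts]

# Toy "signal": a sampled sinusoid standing in for an ultrasound trace.
signal = [math.sin(0.1 * n) for n in range(1000)]
p = histogram_pmf(signal)
H = shannon_entropy(p)
I2 = renyi_entropy(p, r=2)   # r = 2 is a common choice of Renyi order
```

For r > 1 the Renyi entropy never exceeds the Shannon entropy, which gives a quick sanity check on the implementation.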
Two-dimensional optical architectures for the receive mode of phased-array antennas.
Pastur, L; Tonda-Goldstein, S; Dolfi, D; Huignard, J P; Merlet, T; Maas, O; Chazelas, J
1999-05-10
We propose and experimentally demonstrate two optical architectures that process the receive mode of a p x p element phased-array antenna. The architectures are based on free-space propagation and switching of the channelized optical carriers of microwave signals. With the first architecture a direct transposition of the received signals in the optical domain is assumed. The second architecture is based on the optical generation and distribution of a microwave local oscillator matched in frequency and direction. Preliminary experimental results at microwave frequencies of approximately 3 GHz are presented.
Santo, Vítor E; Gomes, Manuela E; Mano, João F; Reis, Rui L
2012-07-01
The field of biomaterials has advanced towards the molecular and nanoscale design of bioactive systems for tissue engineering, regenerative medicine and drug delivery. Spatial cues are displayed in the 3D extracellular matrix and can include signaling gradients, such as those observed during chemotaxis. Architectures range from the nanometer to the centimeter length scales as exemplified by extracellular matrix fibers, cells and macroscopic shapes. The main focus of this review is the application of a biomimetic approach by the combination of architectural cues, obtained through the application of micro- and nanofabrication techniques, with the ability to sequester and release growth factors and other bioactive agents in a spatiotemporal controlled manner for bone and cartilage engineering.
ERIC Educational Resources Information Center
Pihl, Ole
2015-01-01
How do architecture students experience the contradictions between the individual and the group at the Department of Architecture and Design of Aalborg University? The Problem-Based Learning model has been extensively applied to the department's degree programs in coherence with the Integrated Design Process, but is a group-based architecture and…
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
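A plain software model of the mask-image convolution that such an architecture computes may help fix ideas. The hardware contribution (register reuse, pipelining, minimized memory accesses) is not captured here, and the box-filter mask and small test image are illustrative assumptions.

```python
# Software reference model of 2D mask-image convolution; illustrative only.
def convolve2d(image, mask):
    """Valid-mode 2D convolution of an image (list of rows) with a mask."""
    ih, iw = len(image), len(image[0])
    mh, mw = len(mask), len(mask[0])
    oh, ow = ih - mh + 1, iw - mw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for j in range(mh):
                for i in range(mw):
                    # True convolution flips the mask in both axes.
                    acc += image[y + j][x + i] * mask[mh - 1 - j][mw - 1 - i]
            out[y][x] = acc
    return out

# 3x3 box filter over a small test image.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
result = convolve2d(img, box)  # → [[54, 63], [90, 99]]
```

An FPGA realization would keep the current image window in registers so each pixel is fetched from memory only once per row of the mask, rather than mh × mw times as in this naive loop nest.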
Geospatial Modelling Approach for 3D Urban Densification Developments
NASA Astrophysics Data System (ADS)
Koziatek, O.; Dragićević, S.; Li, S.
2016-06-01
With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
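The fuzzy multi-criteria evaluation (MCE) step can be sketched in miniature. The criterion names, membership ranges, and weights below are illustrative assumptions, not values from the Surrey case study; in the actual model each criterion would be a GIS layer evaluated per irregular spatial unit.

```python
# Minimal weighted fuzzy MCE sketch; all criteria and weights are hypothetical.
def linear_membership(value, lo, hi):
    """Fuzzy suitability in [0, 1], rising linearly from lo to hi."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

def mce_score(criteria, weights):
    """Weighted linear combination of fuzzy criterion scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * criteria[name] for name in criteria)

# Hypothetical parcel attributes mapped to fuzzy suitabilities.
parcel = {
    "transit": 1.0 - linear_membership(400.0, 0.0, 2000.0),  # nearer transit is better
    "demand": linear_membership(0.7, 0.0, 1.0),              # market demand index
    "zoning": linear_membership(30.0, 0.0, 40.0),            # permitted storeys
}
weights = {"transit": 0.4, "demand": 0.35, "zoning": 0.25}   # assumed priorities
score = mce_score(parcel, weights)  # higher score: better high-rise candidate
```

A per-unit score map of this kind is what drives where the procedural modelling step places the simulated 3D mid- and high-rise building objects.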
Updates to the NASA Space Telecommunications Radio System (STRS) Architecture
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.
2008-01-01
This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.
Extensive Evaluation of Using a Game Project in a Software Architecture Course
ERIC Educational Resources Information Center
Wang, Alf Inge
2011-01-01
This article describes an extensive evaluation of introducing a game project to a software architecture course. In this project, university students have to construct and design a type of software architecture, evaluate the architecture, implement an application based on the architecture, and test this implementation. In previous years, the domain…