Sample records for distributed system architecture

  1. Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.

    PubMed

    Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P

    2016-11-14

    The integration of heterogeneous electronic health record systems by building an interoperable nationwide electronic health record system provides indisputable benefits in health care, such as superior health information quality, prevention of medical errors and cost savings. This paper proposes a semi-distributed system architecture approach for an integrated national electronic health record system that incorporates the advantages of the two dominant approaches, the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and the data synchronization architecture for the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system along with the national public administration data communication network infrastructure to achieve EHR integration at an acceptable implementation cost.

  2. Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems

    NASA Astrophysics Data System (ADS)

    Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof

    The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architectural paradigm supports building flexible distributed service systems. In this paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built on the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
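
    The distribution decision can be illustrated with a minimal sketch, assuming triangular membership functions over monitored server load and link latency and a single fuzzy rule; the function names, inputs and thresholds are illustrative assumptions, not the authors' actual fuzzy-neural design.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def suitability(load, latency_ms):
        """Fuzzy score in [0, 1]: prefer lightly loaded servers on fast links."""
        load_low = tri(load, -0.5, 0.0, 0.6)          # degree to which load is "low"
        lat_low = tri(latency_ms, -50.0, 0.0, 120.0)  # degree to which latency is "low"
        # Single illustrative rule: IF load is low AND latency is low THEN suitable.
        return min(load_low, lat_low)

    def pick_server(servers):
        """servers: dict name -> (load in [0,1], latency in ms); returns best name."""
        return max(servers, key=lambda s: suitability(*servers[s]))

    monitored = {"s1": (0.2, 30.0), "s2": (0.7, 10.0), "s3": (0.3, 90.0)}
    print(pick_server(monitored))  # -> "s1"

    A trained fuzzy-neural network would tune the membership functions and rule weights from observed executions rather than fixing them by hand as here.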

  3. Communication Needs Assessment for Distributed Turbine Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Behbahani, Alireza R.

    2008-01-01

    Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness of system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture through partially distributed and fully distributed control systems.

  4. Distributed sensor architecture for intelligent control that supports quality of control and quality of service.

    PubMed

    Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés

    2015-02-25

    This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of jointly using QoS and QoC parameters in distributed control systems.

  5. Distributed Sensor Architecture for Intelligent Control that Supports Quality of Control and Quality of Service

    PubMed Central

    Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés

    2015-01-01

    This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of jointly using QoS and QoC parameters in distributed control systems. PMID:25723145
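
    The joint use of QoS and QoC described in the two records above can be illustrated with a toy event-based decision rule. This is a minimal sketch, assuming a simple message-latency metric for QoS, an accumulated-error proxy for QoC, and made-up thresholds; it is not the actual Event Based Quality Integral Cycle.

    class EventBasedController:
        def __init__(self, qos_deadline_ms=20.0, qoc_error_budget=5.0):
            self.qos_deadline_ms = qos_deadline_ms
            self.qoc_error_budget = qoc_error_budget
            self.integrated_error = 0.0  # crude QoC proxy: accumulated |error|

        def on_sensor_event(self, error, msg_latency_ms, dt):
            self.integrated_error += abs(error) * dt
            qos_ok = msg_latency_ms <= self.qos_deadline_ms
            qoc_ok = self.integrated_error <= self.qoc_error_budget
            if qos_ok and qoc_ok:
                return "normal"        # keep current event rate and gains
            if not qos_ok and qoc_ok:
                return "reduce-rate"   # network is the bottleneck: publish fewer events
            return "boost-control"     # control quality degraded: act more aggressively

    ctrl = EventBasedController()
    print(ctrl.on_sensor_event(error=0.4, msg_latency_ms=35.0, dt=0.1))  # -> "reduce-rate"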

  6. Development of System Architecture to Investigate the Impact of Integrated Air and Missile Defense in a Distributed Lethality Environment

    DTIC Science & Technology

    2017-12-01

    Development of a system architecture to investigate the impact of integrated air and missile defense in a distributed lethality environment, by Lieutenant Justin K. Davis. (Only repeated title and report front-matter fragments are available for this record.)

  7. An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.

    2009-07-01

    A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.

  8. How to ensure sustainable interoperability in heterogeneous distributed systems through architectural approach.

    PubMed

    Pape-Haugaard, Louise; Frank, Lars

    2011-01-01

    A major obstacle to ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architecture in centralized and homogeneous distributed database computing, followed by a possible approach to using an architectural framework to obtain sustainability across disparate systems, i.e., heterogeneous databases, concluding with a discussion. It is seen that, through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for ensuring sustainable interoperability.
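
    One common way to realize relaxed ACID properties on a service-oriented architecture is to pair each local update with a compensating action that restores consistency if a later step fails (a saga-like pattern). The sketch below is an illustration of that pattern under hypothetical hospital services, not the authors' framework.

    def transfer_record(steps):
        """steps: list of (do, undo) callables. Runs do's; on failure, undoes in reverse."""
        done = []
        try:
            for do, undo in steps:
                do()
                done.append(undo)
        except Exception:
            for undo in reversed(done):  # compensate already-committed local updates
                undo()
            raise

    log = []
    steps = [
        (lambda: log.append("write to hospital A"), lambda: log.append("revert A")),
        (lambda: log.append("write to hospital B"), lambda: log.append("revert B")),
    ]
    transfer_record(steps)
    print(log)  # -> ['write to hospital A', 'write to hospital B']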

  9. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, Li

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  10. Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.

    ERIC Educational Resources Information Center

    Harman, Donna; And Others

    1991-01-01

    Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…

  11. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
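
    The 'virtual computer' concept can be sketched as a dispatch layer that hides placement from the application: adding nodes changes performance but not application source code. The sketch below stands in for remote nodes with a local thread pool, an assumption made purely for illustration.

    from concurrent.futures import ThreadPoolExecutor  # stand-in for remote nodes

    class VirtualComputer:
        def __init__(self, n_nodes):
            self.pool = ThreadPoolExecutor(max_workers=n_nodes)  # "hardware" hidden here

        def run(self, fn, *args):
            return self.pool.submit(fn, *args)  # application is unaware of placement

    vc = VirtualComputer(n_nodes=4)            # growing the pool needs no app changes
    futures = [vc.run(pow, 2, k) for k in range(8)]
    print([f.result() for f in futures])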

  12. A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises

    NASA Astrophysics Data System (ADS)

    Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.

    2012-04-01

    The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.
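
    The event-driven mechanism described above, in which events are published, detected and consumed by various applications, can be sketched with a minimal publish/subscribe bus. Topic names and the event payload are hypothetical.

    from collections import defaultdict

    class EventBus:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, event):
            for handler in self.subscribers[topic]:
                handler(event)

    bus = EventBus()
    bus.subscribe("sensor.tide_gauge", lambda e: print("simulation consumes", e))
    bus.subscribe("sensor.tide_gauge", lambda e: print("decision support consumes", e))
    bus.publish("sensor.tide_gauge", {"station": "TG-17", "level_m": 1.9})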

  13. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  14. Modeling and Verification of Dependable Electronic Power System Architecture

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Fan, Ping; Zhang, Xiao-fang

    The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault-tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such systems more complicated. We propose a dependable electronic power system architecture, which can provide a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to the system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault-tolerant properties of the system architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies high reliability requirements.

  15. Distributed hierarchical control architecture for integrating smart grid assets during normal and disrupted operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek

    Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.

  16. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application's environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for supporting the monitoring of LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
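
    A minimal sketch of the event-filtering idea, assuming predicate-based subscriptions that can be added and removed at run time (the dynamic reconfiguration mentioned above); the event schema is illustrative, not the paper's design.

    class EventFilter:
        def __init__(self):
            self.subscriptions = {}  # name -> predicate

        def add(self, name, predicate):
            self.subscriptions[name] = predicate

        def remove(self, name):
            self.subscriptions.pop(name, None)

        def process(self, event):
            """Return the subscribers interested in this event (empty = drop it)."""
            return [n for n, p in self.subscriptions.items() if p(event)]

    f = EventFilter()
    f.add("debugger", lambda e: e["severity"] >= 3)
    f.add("tuner", lambda e: e["kind"] == "latency")
    print(f.process({"kind": "latency", "severity": 1}))   # -> ['tuner']
    print(f.process({"kind": "heartbeat", "severity": 0})) # -> [] (filtered out)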

  17. Proton beam therapy control system

    DOEpatents

    Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA

    2008-07-08

    A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.

  18. Proton beam therapy control system

    DOEpatents

    Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana

    2010-09-21

    A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.

  19. Proton beam therapy control system

    DOEpatents

    Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana

    2013-06-25

    A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.

  20. Proton beam therapy control system

    DOEpatents

    Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana

    2013-12-03

    A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
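
    The tiered scheme in the four patent records above can be sketched as an agent that funnels all client-device traffic through one entry point, so open channels scale with the number of agents rather than with client-device pairs. The class and message shapes below are illustrative assumptions, not the patented design.

    class Agent:
        """One connection point for N devices; clients never open device sockets."""
        def __init__(self, devices):
            self.devices = devices  # device_id -> handler callable

        def request(self, client_id, device_id, command):
            # All client traffic funnels through this single entry point,
            # so open channels scale with agents, not clients x devices.
            reply = self.devices[device_id](command)
            return {"client": client_id, "device": device_id, "reply": reply}

    agent = Agent({"magnet_psu": lambda cmd: f"ack:{cmd}"})
    print(agent.request("console-1", "magnet_psu", "set_current 12.5"))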

  1. Ground support system methodology and architecture

    NASA Technical Reports Server (NTRS)

    Schoen, P. D.

    1991-01-01

    A synergistic approach to systems test and support is explored. A building-block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. The detection of design errors at the earliest phase reduces the cost of vehicle ownership. The distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. An intelligent data management architecture also reduces analysis time and cost through automation.

  2. Control and Communication for a Secure and Reconfigurable Power Distribution System

    NASA Astrophysics Data System (ADS)

    Giacomoni, Anthony Michael

    A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure, and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.

  3. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    ESPC Common Model Architecture, Earth System Modeling: the National Unified Operational Prediction Capability (NUOPC) was established between NOAA and the Navy to develop a common software architecture for easy and efficient model development under a common model architecture and other software-related standards in this project. NUOPC proposes to accelerate… (Only report front-matter fragments are available for this record.)

  4. Power System Information Delivering System Based on Distributed Object

    NASA Astrophysics Data System (ADS)

    Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji

    In recent years, there has been remarkable improvement in computer performance and in the development of computer network and distributed information processing technologies. Moreover, deregulation is starting and will spread throughout the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system which monitors failures of the power transmission and distribution systems in a way integrated with geographic information.

  5. Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars

    NASA Technical Reports Server (NTRS)

    Othon, William L.

    2016-01-01

    The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will require integrated, distributed testing to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances integration of disparate systems within JSC and across NASA centers.

  6. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributive processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  7. A Geo-Distributed System Architecture for Different Domains

    NASA Astrophysics Data System (ADS)

    Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran

    2013-04-01

    The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use-cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use-cases seem quite different at first sight but share many similarities, such as managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (a big-data problem), running mathematical analysis algorithms on the data and, finally, providing decision support on this basis. The main challenge was to create a generic architecture which fits both use-cases. The requirements on the architecture are manifold, and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are: (1) building a scalable communication layer for a system-of-systems; (2) building a resilient communication layer for a system-of-systems; (3) efficiently publishing large volumes of semantically rich sensor data; (4) scalable and high-performance storage of large distributed datasets; (5) handling federated multi-domain heterogeneous data; (6) discovery of resources in a geo-distributed SoS; and (7) coordination of work between geo-distributed systems. The design decisions made for each of them will be presented. The developed concepts are also applicable to the requirements of the Future Internet (FI) and the Internet of Things (IoT), which will provide services like smart grids, smart metering, logistics and environmental monitoring.

  8. A Distributed Intelligent E-Learning System

    ERIC Educational Resources Information Center

    Kristensen, Terje

    2016-01-01

    An E-learning system based on a multi-agent (MAS) architecture, combined with the Dynamic Content Manager (DCM) model of E-learning, is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…

  9. A new HLA-based distributed control architecture for agricultural teams of robots in hybrid applications with real and simulated devices or environments.

    PubMed

    Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J

    2011-01-01

    The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Furthermore, its importance increases when the system involves a group of heterogeneous robots that should cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of performing maintenance tasks in agricultural environments. Some important features, such as scalability, code reuse, hardware abstraction and data distribution, have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is allowed in the proposed control system. By integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA), the previous concepts have been incorporated into the new architecture presented in this paper. HLA can be considered the most important part because it not only allows data distribution and implicit communication among the parts of the system but also allows simulated and real entities to operate simultaneously, thus allowing the use of hybrid systems in the development of applications.

  10. Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment

    NASA Astrophysics Data System (ADS)

    Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro

    The need to implement distributed systems is growing significantly, as they have proven effective in allowing organizations to be flexible against a highly demanding market. Nevertheless, there are still large technical gaps that need to be addressed to gain significant achievement. We propose a distributed architecture to control Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results proved that the technique successfully provides a better solution.
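
    One common way to realize such dynamic assignment is contract-net-style bidding, sketched below under the assumption of a Manhattan-distance cost model; the paper's exact protocol and cost function may differ.

    def assign(task_pos, agvs):
        """agvs: dict name -> (x, y). Each AGV 'bids' its travel cost; lowest wins."""
        def bid(pos):
            return abs(pos[0] - task_pos[0]) + abs(pos[1] - task_pos[1])  # Manhattan
        winner = min(agvs, key=lambda a: bid(agvs[a]))
        return winner, bid(agvs[winner])

    fleet = {"agv1": (0, 0), "agv2": (5, 2), "agv3": (1, 4)}
    print(assign((4, 3), fleet))  # -> ('agv2', 2)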

  11. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported, including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
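
    A back-of-envelope check of the quoted capacity; the split between wavelengths and time slots below is an assumption for illustration, since the abstract states only the totals.

    n_wavelengths = 64        # assumed DWDM channel count
    n_tdm_slots = 64          # assumed time-division slots per wavelength
    sensors_per_pair = n_wavelengths * n_tdm_slots
    print(sensors_per_pair)       # 4096 per telemetry fibre pair
    print(sensors_per_pair * 2)   # 8192 with dual pump sources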

  12. Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture

    NASA Technical Reports Server (NTRS)

    Behbahani, Alireza; Culley, Dennis; Garg, Sanjay; Millar, Richard; Smith, Bert; Wood, Jim; Mahoney, Tim; Quinn, Ronald; Carpenter, Sheldon; Mailander, Bill

    2007-01-01

    A Distributed Engine Control Working Group (DECWG) consisting of the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) and industry has been formed to examine the current and future requirements of propulsion engine systems. The scope of this study will include an assessment of the paradigm shift from a centralized engine control architecture to an architecture based on distributed control utilizing open system standards. Included will be a description of the work begun in the 1990s, which continues today, followed by the identification of the remaining technical challenges which present barriers to on-engine distributed control.

  13. A digital protection system incorporating knowledge based learning

    NASA Astrophysics Data System (ADS)

    Watson, Karan; Russell, B. Don; McCall, Kurt

    A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. Enhanced protection for electric distribution feeders is provided by this system. Advantages of the system are enumerated.

  14. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  15. The TENOR Architecture for Advanced Distributed Learning and Intelligent Training

    DTIC Science & Technology

    2002-01-01

    AIAA 2002-1054, by C. Tibaudo, J. Kristl and J. Schroeder. The paper describes an architecture called TENOR, for Training Education Network On Request. There have been a number of recent learning systems developed that leverage the Internet… (Only report front-matter fragments are available for this record.)

  16. Distributed information system architecture for Primary Health Care.

    PubMed

    Grammatikou, M; Stamatelopoulos, F; Maglaris, B

    2000-01-01

    We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.

  17. Finding idle machines in a workstation-based distributed system

    NASA Technical Reports Server (NTRS)

    Theimer, Marvin M.; Lantz, Keith A.

    1989-01-01

    The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
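
    The centralized variant can be sketched as a registry of load reports from which the scheduler picks idle hosts; the thresholds and reporting scheme below are illustrative assumptions. The decentralized variant would instead multicast a query and accept the first replies.

    import time

    class Registry:
        def __init__(self, idle_load=0.1, stale_s=30.0):
            self.idle_load, self.stale_s = idle_load, stale_s
            self.reports = {}  # host -> (load_avg, timestamp)

        def report(self, host, load_avg):
            self.reports[host] = (load_avg, time.time())

        def idle_hosts(self):
            now = time.time()
            return [h for h, (load, ts) in self.reports.items()
                    if load <= self.idle_load and now - ts <= self.stale_s]

    reg = Registry()
    reg.report("ws-a", 0.02)
    reg.report("ws-b", 0.85)
    print(reg.idle_hosts())  # -> ['ws-a']

    The staleness check stands in for the global-statistics monitoring that the paper credits to the centralized approach; the multicast alternative avoids the central point of failure at the cost of weaker global knowledge.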

  18. System design in an evolving system-of-systems architecture and concept of operations

    NASA Astrophysics Data System (ADS)

    Rovekamp, Roger N., Jr.

    Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc.) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., multi-objective optimization, genetic algorithms) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power-sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in the literature and on the author's own approaches to power distribution strategies for future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.

  19. Immunology-directed methods for distributed robotics: a novel immunity-based architecture for robust control and coordination

    NASA Astrophysics Data System (ADS)

    Singh, Surya P. N.; Thayer, Scott M.

    2002-02-01

    This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including the often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.

  20. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
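
    The analysis process can be sketched as constraint filtering followed by a weighted score over power, weight and cost. The candidates, weights and availability floor below are invented for illustration and are not the report's data.

    def select(architectures, weights, min_availability):
        feasible = {n: a for n, a in architectures.items()
                    if a["availability"] >= min_availability}
        def value(a):  # toy un-normalized score; real analyses scale criteria first
            return sum(weights[k] * a[k] for k in ("power", "weight", "cost"))
        return min(feasible, key=lambda n: value(feasible[n]))  # lower is better

    candidates = {
        "dual-channel": {"availability": 0.99999, "power": 120, "weight": 9.0, "cost": 1.0},
        "triplex":      {"availability": 0.999999, "power": 180, "weight": 13.0, "cost": 1.6},
        "simplex":      {"availability": 0.999, "power": 60, "weight": 5.0, "cost": 0.5},
    }
    print(select(candidates, {"power": 0.01, "weight": 0.1, "cost": 1.0}, 0.99999))
    # simplex is filtered out by the availability constraint; dual-channel wins on value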

  1. Advanced computer architecture specification for automated weld systems

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.

  2. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system-level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with a focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components at the same level into a module and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle the uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and the d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, in a real system, sub-Bayesian networks serving as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBN structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
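
    The two-rate belief-updating idea described above, cheap local updates every tick and a costly global consistency pass every k ticks, can be shown with a toy loop; the counters and numbers are purely illustrative, not the BK or JFLT algorithms themselves.

    def run(ticks, k, agents):
        global_syncs = 0
        for t in range(1, ticks + 1):
            for a in agents:
                a["local_updates"] += 1   # cheap local DBN belief update
            if t % k == 0:
                global_syncs += 1         # costly global belief propagation pass
        return global_syncs

    agents = [{"name": f"agent{i}", "local_updates": 0} for i in range(3)]
    print(run(ticks=100, k=10, agents=agents))  # -> 10 global passes vs 100 local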

  3. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. A distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improved performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance in the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault-tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault-tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single-input and multiple-input multiple-output control design techniques.

  4. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems

    PubMed Central

    Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems that consist of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894

  5. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems.

    PubMed

    Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel

    2016-08-16

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including a rigorous stability and boundedness analysis of the closed-loop dynamical system and a characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.
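
    The event-triggering mechanism at the heart of the two records above admits a compact illustration: the sensor transmits over the wireless network only when its state has drifted from the last transmitted value by more than a user-defined threshold, trading a bounded loss of regulation accuracy for fewer transmissions. The sketch below uses an illustrative plant, gain, and thresholds, not values from the paper.

```python
# Sketch of event-triggered feedback: transmit only when the state deviates
# from the last transmitted value by more than a threshold. All numbers are
# illustrative, not from the paper.

def run(threshold, steps=300):
    a, b, k = 0.98, 1.0, 0.5        # plant parameters and feedback gain
    x, x_sent = 5.0, 5.0            # true state, last value sent to controller
    transmissions = 0
    for _ in range(steps):
        if abs(x - x_sent) > threshold:   # event condition |x - x_hat| > eps
            x_sent = x                    # send an update over the network
            transmissions += 1
        x = a * x + b * (-k * x_sent)     # controller holds x_sent between events
    return transmissions, x

for eps in (0.0, 0.1, 0.5):
    n, xf = run(eps)
    print(f"threshold {eps:.1f}: {n:3d} transmissions, final |x| = {abs(xf):.4f}")
```

    Larger thresholds cut network utilization sharply while the state settles into a correspondingly larger neighborhood of the origin, which is the threshold-versus-performance trade-off the records characterize rigorously.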

  6. A novel system architecture for the national integration of electronic health records: a semi-centralized approach.

    PubMed

    AlJarullah, Asma; El-Masri, Samir

    2013-08-01

    The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient from different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise of addressing the two most crucial challenges facing healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach, the semi-centralized approach, an intermediate solution between the centralized and distributed architectures that has the benefits of both. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined, and the main system modules necessary for effective and efficient functionality are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing simple, effective systems to integrate electronic health records on a nationwide basis that maintain integrity and consistency across locations, time, and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.

  7. Multi-Agent Architecture with Support to Quality of Service and Quality of Control

    NASA Astrophysics Data System (ADS)

    Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique

    Multi Agent Systems (MAS) are one of the most suitable frameworks for the implementation of intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity inherent in cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly used to evaluate the efficiency of the communications and of the control loop. Agents can use the quality measures to take a wide range of decisions, such as selecting a suitable placement on the control node or changing the workload to save energy. This article describes the architecture of a multi-agent system that provides support for QoS and QoC parameters to optimize the system. The architecture uses a Publish-Subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the Publish-Subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.
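
    The publish-subscribe pattern that FSACtrl builds on can be sketched in a few lines. The toy in-process bus below is illustrative only, loosely in the spirit of a DDS topic with a deadline QoS policy; the class names and the policy are assumptions, not the FSACtrl or DDS API. It shows how a subscriber can monitor a QoS parameter while receiving control messages.

```python
# Toy in-process publish-subscribe bus with a deadline QoS check, loosely in
# the spirit of a DDS topic; class and policy names are illustrative only.
import time
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)     # topic -> [(callback, deadline_s)]
        self.last_pub = {}                # topic -> time of previous publish

    def subscribe(self, topic, callback, deadline_s=None):
        self.subs[topic].append((callback, deadline_s))

    def publish(self, topic, sample):
        now = time.monotonic()
        gap = now - self.last_pub.get(topic, now)
        self.last_pub[topic] = now
        for callback, deadline_s in self.subs[topic]:
            if deadline_s is not None and gap > deadline_s:
                print(f"QoS: deadline missed on '{topic}' ({gap * 1000:.1f} ms)")
            callback(sample)

bus = Bus()
bus.subscribe("sensor/temp", lambda s: print("control loop got", s), deadline_s=0.05)
bus.publish("sensor/temp", 21.7)
time.sleep(0.1)                           # simulate a late sample
bus.publish("sensor/temp", 21.9)          # triggers the deadline warning
```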

  8. Space Station Freedom power management and distribution design status

    NASA Technical Reports Server (NTRS)

    Javidi, S.; Gholdston, E.; Stroh, P.

    1989-01-01

    The design status of the power management and distribution electric power system for Space Station Freedom is presented. The current design is a star architecture, which has been found to be the best approach for meeting the requirement to deliver 120 V dc to the user interface. The architecture minimizes mass and power losses while improving element-to-element isolation and system flexibility. The design is partitioned into three elements: energy collection, storage, and conversion; system protection and distribution; and management and control.

  9. Extensions to the Parallel Real-Time Artificial Intelligence System (PRAIS) for fault-tolerant heterogeneous cycle-stealing reasoning

    NASA Technical Reports Server (NTRS)

    Goldstein, David

    1991-01-01

    Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplishes these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3s, Sun 4s, and VAXes are presented. Mechanisms using the producer-consumer model to extend the architecture for fault tolerance and distributed truth maintenance initiation are also discussed.

  10. Research on the framework and key technologies of panoramic visualization for smart distribution network

    NASA Astrophysics Data System (ADS)

    Du, Jian; Sheng, Wanxing; Lin, Tao; Lv, Guangxian

    2018-05-01

    Nowadays, the smart distribution network has made tremendous progress, and business visualization has become ever more significant and indispensable. Based on a summary of traditional visualization technologies and the demands of the smart distribution network, a panoramic visualization application is proposed in this paper. The overall architecture, integrated architecture, and service architecture of the panoramic visualization application are first presented. Then, the architecture design and main functions of the panoramic visualization system are elaborated in depth. In addition, the key technologies related to the application are discussed briefly. Finally, two typical visualization scenarios in the smart distribution network, risk warning and fault self-healing, demonstrate that the panoramic visualization application is valuable for the operation and maintenance of the distribution network.

  11. Laboratory for Computer Science Progress Report 19, 1 July 1981-30 June 1982.

    DTIC Science & Technology

    1984-05-01

    Excerpt (table-of-contents and text fragments): Multiprocessor Architectures; TRIX Operating System; VLSI Tools; Systematic Program Development. The report describes work exploring distributed operating systems and the architecture of powerful single-user computers interconnected by communication networks, including planned experiments with languages, operating systems, and applications to establish the feasibility of distributed computing.

  12. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  13. Design of Distributed Engine Control Systems with Uncertain Delay.

    PubMed

    Liu, Xiaofeng; Li, Yanxi; Sun, Xu

    Future gas turbine engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. Sufficient conditions for robust stability are derived via Lyapunov stability theory and a linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method.

  14. Design of Distributed Engine Control Systems with Uncertain Delay

    PubMed Central

    Liu, Xiaofeng; Li, Yanxi; Sun, Xu

    2016-01-01

    Future gas turbine engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. Sufficient conditions for robust stability are derived via Lyapunov stability theory and a linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method. PMID:27669005
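
    The stability question treated in these two records can be illustrated, in a far simpler setting than their LMI conditions, by folding a fixed one-step network delay into an augmented state vector and testing its spectral radius. The scalar plant and gain below are illustrative numbers; the construction handles only a known constant delay, whereas the paper's conditions cover uncertain delays and disturbances.

```python
# Standard delay-augmentation check for a fixed one-step network delay; the
# scalar plant and gain are illustrative, and only a *known* constant delay
# is handled here (the paper's LMI conditions treat uncertain delay).
import numpy as np

a, b, k = 1.05, 1.0, 0.30            # open-loop unstable plant, feedback gain
# Closed loop with a one-step delay: x[k+1] = a x[k] - b k x[k-1].
# Augment z[k] = [x[k], x[k-1]] so the delay becomes ordinary dynamics.
Z = np.array([[a, -b * k],
              [1.0, 0.0]])
rho = max(abs(np.linalg.eigvals(Z)))
print(f"spectral radius = {rho:.4f} ->", "stable" if rho < 1 else "unstable")
# The same block construction extends to vector states and longer delays.
```

    Here the spectral radius is about 0.55, so this particular loop tolerates the delay; the value climbs toward 1 as the delay lengthens or the gain is mistuned.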

  15. An Autonomous Mobile Agent-Based Distributed Learning Architecture: A Proposal and Analytical Analysis

    ERIC Educational Resources Information Center

    Ahmed, Iftikhar; Sadeq, Muhammad Jafar

    2006-01-01

    Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…

  16. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    NASA Astrophysics Data System (ADS)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network-centric operations demand an assessment scheme for comparing alternative realizations. In this contribution, an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces, and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architecture frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs with regard to their future integration potential. It is a contribution to the system-of-systems engineering methodology.

  17. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing relies on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as particle filtering problems in order to explicitly represent and manage uncertainties; however, the computational demands of the prognostic routine typically exceed the capacity of a single computational element (CE). Individual CEs run diagnostic routines until the monitored system variable crosses a nominal threshold, at which point the CE coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
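
    The particle filtering formulation referred to above reduces, in its simplest bootstrap form, to a predict-weight-resample loop. The sketch below tracks a scalar degradation state; the random-walk degradation model and noise levels are invented for illustration and are not the paper's models.

```python
# Minimal bootstrap particle filter tracking a scalar degradation state; the
# drift model and noise levels are invented for illustration.
import math
import random

N = 500
particles = [random.gauss(0.0, 0.1) for _ in range(N)]   # initial damage guesses

def likelihood(z, x, sigma=0.05):        # Gaussian measurement model
    return math.exp(-((z - x) ** 2) / (2 * sigma ** 2))

truth = 0.0
for t in range(20):
    truth += 0.05 + random.gauss(0.0, 0.01)              # hidden degradation
    z = truth + random.gauss(0.0, 0.05)                  # noisy sensor reading
    # predict: propagate each particle through the degradation model
    particles = [p + 0.05 + random.gauss(0.0, 0.01) for p in particles]
    # update: weight by measurement likelihood, then resample
    weights = [likelihood(z, p) for p in particles]
    total = sum(weights)
    particles = random.choices(particles, weights=[w / total for w in weights], k=N)

estimate = sum(particles) / N
print(f"true damage {truth:.3f}, PF estimate {estimate:.3f}")
```

    In the distributed setting the record describes, this loop would be partitioned across networked CEs once a diagnostic threshold trips, rather than run on one node as here.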

  18. An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P

    2005-01-01

    The economic need for efficient healthcare leads to cooperative shared-care networks. A virtual electronic health record is required which integrates patient-related information but reflects the distributed infrastructure and restricts access to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements, to be used in concrete regional pilot studies. Methodical analysis and specification were performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still involves varied interdisciplinary cooperation. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture proposes business processes, application services, and a sophisticated security concept, providing a platform for distributed, document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines consolidated patient-centred document management with a decentralized system structure that avoids the need for replication management. An eConsent-based approach ensures that access to the distributed health record remains under the control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with the clinical routine. Unique, cross-institutional identification of a patient is also a challenge, but will lose significance as common patient cards become established.

  19. An architecture for automated fault diagnosis. [Space Station Module/Power Management And Distribution

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1989-01-01

    A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and the interfaces throughout the system are presented.

  20. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  1. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that uses the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server handles connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
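
    The predefined TCP/IP socket messaging described above can be sketched as a small length-prefixed, tagged protocol. The message vocabulary below ("ETCH_START" and its fields) is invented for illustration; the actual MIT message set is not specified in the record.

```python
# Sketch of length-prefixed, tagged text messages between controllers over a
# socket; the message vocabulary ("ETCH_START", ...) is invented.
import socket
import struct

def send_msg(sock, tag, body):
    payload = f"{tag} {body}".encode()
    sock.sendall(struct.pack("!I", len(payload)) + payload)  # 4-byte length prefix

def recv_msg(sock):
    (length,) = struct.unpack("!I", sock.recv(4))
    payload = sock.recv(length).decode()   # production code would loop on recv
    tag, _, body = payload.partition(" ")
    return tag, body

cell_ctrl, equip_ctrl = socket.socketpair()  # stand-in for a real TCP connection
send_msg(cell_ctrl, "ETCH_START", "recipe=poly_etch_01 wafer=W42")
print(recv_msg(equip_ctrl))                  # ('ETCH_START', 'recipe=...')
```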

  2. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture that creates individual modules made self-scheduling through a message-based communication system, used for requesting input data from the module that is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments was run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. That report also discusses possible future work to apply the methodology to dynamically distributing the simulation modules so as to minimize overall computation time.

  3. Distributed Control Architecture for Gas Turbine Engine. Chapter 4

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Garg, Sanjay

    2009-01-01

    The transformation of engine control systems from centralized to distributed architecture is both necessary and enabling for future aeropropulsion applications. The continued growth of adaptive control applications and the trend to smaller, lightweight cores act as counter-influences on the weight and volume of control system hardware. A distributed engine control system using high-temperature electronics and open systems communications will reverse the growing ratio of control system weight to total engine weight and also be a major factor in decreasing the overall cost of ownership for aeropropulsion systems. The implementation of distributed engine control is not without significant challenges: the need for high-temperature electronics, the development of simple, robust communications, and power supply for the on-board electronics.

  4. Supporting shared data structures on distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and iPSC/2 hypercubes is discussed.
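
    The global-name-space idea can be sketched with a block-distributed array whose global indices are translated to an (owner, local offset) pair, so user code reads and writes without hand-written message passing. The class below simulates the owned blocks locally and is purely illustrative; in a real system, an access to a non-local block would become generated communication.

```python
# Sketch of a global name space over a block-distributed array; ownership is
# simulated in-process, and all names here are illustrative.
class BlockDistributedArray:
    def __init__(self, n, nprocs):
        self.block = (n + nprocs - 1) // nprocs        # ceil(n / nprocs)
        # each "processor" owns one contiguous block (simulated locally)
        self.blocks = [[0.0] * self.block for _ in range(nprocs)]

    def owner(self, i):
        return i // self.block, i % self.block         # (processor, local index)

    def read(self, i):
        p, j = self.owner(i)
        return self.blocks[p][j]    # would be a remote fetch if p != this node

    def write(self, i, v):
        p, j = self.owner(i)
        self.blocks[p][j] = v       # would be a remote store if p != this node

a = BlockDistributedArray(n=100, nprocs=4)
a.write(63, 3.14)                   # lands on processor 2 transparently
print(a.owner(63), a.read(63))      # (2, 13) 3.14
```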

  5. Providing the full DDF link protection for bus-connected SIEPON based system architecture

    NASA Astrophysics Data System (ADS)

    Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar

    2016-09-01

    Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next-generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect and recover distribution drop fiber (DDF) link faults or transceiver failures at ONU(s) in a SIEPON system. The proposed architecture is cost-effective and delivers high fault tolerance in handling multiple DDF faults, while also providing flexibility in choosing the backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss, and EF jitter when DDF link failures occur.

  6. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.

  7. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for influencing the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support multidisciplinary (MD) coupling, computer-aided design (CAD) access, and MD coupling objects are discussed, along with the successes, challenges, and benefits of implementing this architecture.

  8. Architectural and Functional Design of an Environmental Information Network.

    DTIC Science & Technology

    1984-04-30

    Excerpt (text and front-matter fragments): The study was accomplished under contract F08635-83-C-013, Task 83-2, for Headquarters Air Force Engineering and Services Center. Figure titles include: Selection Procedure; General Architecture of Distributed Data Management System; Schema Architecture; MULTIBASE Component Architecture.

  9. A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care

    NASA Astrophysics Data System (ADS)

    Tapia, Dante I.; Rodríguez, Sara; Corchado, Juan M.

    This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing the assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, mobility, etc., all of which are concepts defined by AmI. ALZ-MAS makes use of a services oriented multi-agent architecture, called flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that a SOA approach is adequate to build distributed and highly dynamic AmI-based multi-agent systems.

  10. All-digital radar architecture

    NASA Astrophysics Data System (ADS)

    Molchanov, Pavlo A.

    2014-10-01

    An all-digital radar architecture requires eliminating the mechanical scan system. A phased antenna array is necessarily large because the array elements must be co-located with very precise dimensions, and it requires a high-accuracy phase processing system to aggregate and distribute T/R module data to/from the antenna elements. Even a phased array cannot provide a wide field of view. A new, nature-inspired, all-digital radar architecture is proposed. The fly's eye consists of multiple angularly spaced sensors, giving the fly simultaneously the wide-area visual coverage it needs to detect and avoid the threats around it. The fly-eye radar antenna array consists of multiple directional antennas loosely distributed along the perimeter of a ground vehicle or aircraft and coupled with receiving/transmitting front-end modules connected by a digital interface to a central processor. A non-steering antenna array allows the creation of an all-digital radar with an extremely flexible architecture. The fly-eye radar architecture provides broad possibilities for digital modulation and the generation of different waveforms. Simultaneous correlation and integration of thousands of signals per second from each point of the surveillance area allows not only the detection of low-level signals (low-profile targets), but also helps to recognize and classify signals (targets) by using signal diversity, polarization modulation, and intelligent processing. The proposed all-digital radar architecture with a distributed directional antenna array can provide a 3D space vector to a jammer by verifying the direction of arrival of signal sources and, as a result, offers jam/spoof protection not only for radar systems but also for communication systems and any navigation constellation system, for both encrypted and unencrypted signals, and for an unlimited number of closely positioned jammers.

  11. 2000 Survey of Distributed Spacecraft Technologies and Architectures for NASA's Earth Science Enterprise in the 2010-2025 Timeframe

    NASA Technical Reports Server (NTRS)

    Ticker, Ronald L.; Azzolini, John D.

    2000-01-01

    The study investigates NASA's Earth Science Enterprise needs for Distributed Spacecraft Technologies in the 2010-2025 timeframe. In particular, the study focused on the Earth Science Vision Initiative and extrapolation of the measurement architecture from the 2002-2010 time period. Earth Science Enterprise documents were reviewed. Interviews were conducted with a number of Earth scientists and technologists. Fundamental principles of formation flying were also explored. The results led to the development of four notional distributed spacecraft architectures. These four notional architectures (global constellations, virtual platforms, precision formation flying, and sensorwebs) are presented. They broadly and generically cover the distributed spacecraft architectures needed by Earth Science in the post-2010 era. These notional architectures are used to identify technology needs and drivers. Technology needs are subsequently grouped into five categories: systems and architecture development tools; miniaturization, production, manufacture, test and calibration; data networks and information management; orbit control, planning and operations; and launch and deployment. The current state of the art and expected developments are explored. High-value technology areas are identified for possible future funding emphasis.

  12. A Flexible Hardware Test and Demonstration Platform for the Fractionated System Architecture YETE

    NASA Astrophysics Data System (ADS)

    Kempf, Florian; Haber, Roland; Tzschichholz, Tristan; Mikschl, Tobias; Hilgarth, Alexander; Montenegro, Sergio; Schilling, Klaus

    2016-08-01

    This paper introduces a hardware-in-the-loop test and demonstration platform for the YETE system architecture for fractionated spacecraft. It is designed for rapid prototyping and testing of distributed control approaches for the YETE architecture under varying network topologies and transmission channel properties between the individual YETE hardware nodes.

  13. Integrating the Web and continuous media through distributed objects

    NASA Astrophysics Data System (ADS)

    Labajo, Saul P.; Garcia, Narciso N.

    1998-09-01

    The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows on the Internet, and in its associated applications like multimedia on demand, is also growing. Integrating both kinds of systems should allow building real hypermedia systems where any media object can be linked from any other, taking into account temporal and spatial synchronization. A way to achieve this integration is the Corba architecture, a standard for open distributed systems; there are also recent efforts to integrate Web and Corba systems. We use this architecture to build a service for the distribution of data flows endowed with timing restrictions. To integrate it with the Web we use, on one side, Java applets that can use the Corba architecture and are embedded in HTML pages; on the other side, we also benefit from the efforts to integrate Corba and the Web.

  14. Analysis and Design of a Distributed System for Management and Distribution of Natural Language Assertions

    DTIC Science & Technology

    2010-09-01

    Excerpt (front-matter fragments): table-of-contents entries include SCIL Architecture and Assertions; the list of figures includes "Figure 1. SCIL architecture"; the acronym list includes LAN (Local Area Network), ODBC (Open Database Connectivity), and SCIL (Social-Cultural Content in Language).

  15. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge-based assistance for science visualization and analysis using large distributed databases are presented. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  16. Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs.

    PubMed

    Moreno, Asier; Osaba, Eneko; Onieva, Enrique; Perallos, Asier; Iovino, Giovanni; Fernández, Pablo

    2016-07-22

    This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which was validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capture and management of available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. The advantages of using the proposed system are also presented, the most important being the use of a distributed architecture, moving the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through experimentation are described. This experimentation was carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy).

  17. Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs †

    PubMed Central

    Moreno, Asier; Osaba, Eneko; Onieva, Enrique; Perallos, Asier; Iovino, Giovanni; Fernández, Pablo

    2016-01-01

    This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which was validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capture and management of available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. The advantages of using the proposed system are also presented, the most important being the use of a distributed architecture, moving the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through experimentation are described. This experimentation was carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy). PMID:27455277

  18. Advanced information processing system for advanced launch system: Avionics architecture synthesis

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.

    1991-01-01

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.

  19. A mission operations architecture for the 21st century

    NASA Technical Reports Server (NTRS)

    Tai, W.; Sweetnam, D.

    1996-01-01

    An operations architecture is proposed for low-cost missions beyond the year 2000. The architecture consists of three elements: a service-based architecture, a demand access automata, and distributed science hubs. The service-based architecture is based on a set of standard multimission services that are defined, packaged, and formalized by the Deep Space Network and the advanced multi-mission operations system. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs; the beacon signaling, virtual emergency room, and high-efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small, science-oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications; and automated, direct, transparent communications between the scientists and the instrument.

  20. Cooperative crossing of traffic intersections in a distributed robot system

    NASA Astrophysics Data System (ADS)

    Rausch, Alexander; Oswald, Norbert; Levi, Paul

    1995-09-01

    In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, and distributed job scheduling. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces the robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario which combines aspects of active vision and cooperation illustrates our approach: two vision-guided vehicles are faced with line following, intersection recognition, and negotiation.
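
    The shared-resource view of the intersection invites a compact illustration: vehicles request entry and a negotiation rule serializes the crossing. In the sketch below the rule is simply earliest arrival first, ties broken by vehicle id; the vehicles, times, and rule are invented for illustration and are far simpler than the negotiation the paper discusses.

```python
# Toy serialization of an intersection as a shared resource: earliest arrival
# wins, ties broken by vehicle id. All numbers are invented.
import heapq

requests = [                     # (arrival_time, vehicle_id, approach)
    (3.2, "robot-2", "north"),
    (1.7, "robot-1", "east"),
    (1.7, "robot-3", "west"),
]
CROSSING_TIME = 2.0

heapq.heapify(requests)          # negotiation outcome: order by (arrival, id)
free_at = 0.0
while requests:
    arrival, vid, approach = heapq.heappop(requests)
    enter = max(arrival, free_at)
    print(f"{vid} from {approach}: arrives {arrival:.1f}, "
          f"crosses {enter:.1f}-{enter + CROSSING_TIME:.1f}")
    free_at = enter + CROSSING_TIME
```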

  1. A Reference Architecture for Space Information Management

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed, and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.

  2. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.

  3. Systematic Development of Intelligent Systems for Public Road Transport.

    PubMed

    García, Carmelo R; Quesada-Arencibia, Alexis; Cristóbal, Teresa; Padrón, Gabino; Alayón, Francisco

    2016-07-16

    This paper presents an architecture model for the development of intelligent systems for public passenger transport by road. The main objective of our proposal is to provide a framework for the systematic development and deployment of telematics systems to improve various aspects of this type of transport, such as efficiency, accessibility and safety. The architecture model presented herein is based on international standards on intelligent transport system architectures, ubiquitous computing and service-oriented architecture for distributed systems. To illustrate the utility of the model, we also present a use case of a monitoring system for stops on a public passenger road transport network.

  4. Systematic Development of Intelligent Systems for Public Road Transport

    PubMed Central

    García, Carmelo R.; Quesada-Arencibia, Alexis; Cristóbal, Teresa; Padrón, Gabino; Alayón, Francisco

    2016-01-01

    This paper presents an architecture model for the development of intelligent systems for public passenger transport by road. The main objective of our proposal is to provide a framework for the systematic development and deployment of telematics systems to improve various aspects of this type of transport, such as efficiency, accessibility and safety. The architecture model presented herein is based on international standards on intelligent transport system architectures, ubiquitous computing and service-oriented architecture for distributed systems. To illustrate the utility of the model, we also present a use case of a monitoring system for stops on a public passenger road transport network. PMID:27438836

  5. OXC management and control system architecture with scalability, maintenance, and distributed managing environment

    NASA Astrophysics Data System (ADS)

    Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun

    2002-07-01

    In this paper, we present an Optical Cross-Connect (OXC) management and control system architecture that is scalable, maintainable, and provides a distributed managing environment in the optical transport network. The OXC system we are developing, which is divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sublayer hardware and the optical switch control system; the signaling control protocol subsystem performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control; the Operation Administration Maintenance & Provisioning (OAM&P) subsystem; and the network management subsystem. The OXC management and control system supports flexible expansion of the optical transport network, provides connectivity to heterogeneous external network elements, can be extended or reduced without interrupting OAM&P services, can be operated remotely, provides a global view and detailed information for network planners and operators, and has a Common Object Request Broker Architecture (CORBA)-based open system architecture that allows intelligent service networking functions to be added and deleted easily in the future. To meet these considerations, we adopted object-oriented development throughout system analysis, design, and implementation to build an OXC management and control system with scalability, maintainability, and a distributed managing environment. As a consequence, the componentization of the OXC operation and management functions of each subsystem makes maintenance robust and increases code reusability. The component-based OXC management and control system architecture will also have the flexibility and scalability to accommodate future requirements.

  6. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  7. Decision Criteria for Distributed Versus Non-Distributed Information Systems in the Health Care Environment

    PubMed Central

    McGinnis, John W.

    1980-01-01

    The very same technological advances that support distributed systems have also dramatically increased the efficiency and capabilities of centralized systems, making it more complex for health care managers to select the "right" system architecture to meet their particular needs. How this selection can be made with a reasonable degree of managerial comfort is the focus of this paper. The approach advocated is based on experience in developing the Tri-Service Medical Information System (TRIMIS) program. Along with this, technical standards and configuration management procedures were developed that provided the necessary guidance to implement the selected architecture and to allow it to change in a controlled way over its life cycle.

  8. Architecture for distributed design and fabrication

    NASA Astrophysics Data System (ADS)

    McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.

    1997-01-01

    We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.

  9. Applications of an architecture design and assessment system (ADAS)

    NASA Technical Reports Server (NTRS)

    Gray, F. Gail; Debrunner, Linda S.; White, Tennis S.

    1988-01-01

    A new Architecture Design and Assessment System (ADAS) tool package is introduced, and a range of possible applications is illustrated. ADAS was used to evaluate the performance of an advanced fault-tolerant computer architecture in a modern flight control application. Bottlenecks were identified and possible solutions suggested. The tool was also used to inject faults into the architecture and evaluate the synchronization algorithm, and improvements are suggested. Finally, ADAS was used as a front end research tool to aid in the design of reconfiguration algorithms in a distributed array architecture.

  10. A synchronized computational architecture for generalized bilateral control of robot arms

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

    This paper describes a computational architecture for an interconnected high-speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt-driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.

  11. Authentication and Authorization of End User in Microservice Architecture

    NASA Astrophysics Data System (ADS)

    He, Xiuyu; Yang, Xudong

    2017-10-01

    As markets and business continue to expand, the traditional monolithic architecture faces more and more challenges. The development of cloud computing and container technology has made the microservice architecture more popular. While the low coupling, fine granularity, scalability, flexibility, and independence of the microservice architecture bring convenience, the inherent complexity of a distributed system makes the security of a microservice architecture both important and difficult. This paper aims to study the authentication and authorization of the end user under the microservice architecture. By comparing with traditional measures and researching existing technology, this paper puts forward a set of authentication and authorization strategies suitable for the microservice architecture, such as distributed sessions, SSO solutions, client-side JSON Web Tokens (JWT), and JWT plus an API gateway, and summarizes the advantages and disadvantages of each method.
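
    Of the strategies listed, the client-side JSON Web Token is the easiest to make concrete. The sketch below implements HS256 token creation and stateless verification using only the standard library; the secret, claims, and expiry policy are illustrative, and a real deployment would use a vetted JWT library and key rotation.

```python
# Minimal HS256 JSON Web Token create/verify using only the standard library;
# the secret and claims are illustrative. Real services should use a vetted
# JWT library and rotate keys.
import base64, hashlib, hmac, json, time

SECRET = b"shared-gateway-secret"       # known to the gateway and each service

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

tok = make_token({"sub": "alice", "roles": ["orders:read"], "exp": time.time() + 300})
print(verify(tok)["sub"])               # each microservice verifies statelessly
```

    The stateless verification step is what removes the shared session store that a distributed-session strategy would otherwise require.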

  12. Distributed Earth observation data integration and on-demand services based on a collaborative framework of geospatial data service gateway

    NASA Astrophysics Data System (ADS)

    Xie, Jibo; Li, Guoqing

    2015-04-01

    Earth observation (EO) data obtained by airborne or space-borne sensors are characterized by heterogeneity and geographically distributed storage. These data sources belong to different organizations or agencies whose data management and storage methods differ widely, and each source provides its own publishing platform or portal. As more remote sensing sensors are used for EO missions, space agencies have amassed geographically distributed archives of massive EO data. This distribution of archives and the heterogeneity of the systems make it difficult to use geospatial data efficiently in many EO applications, such as hazard mitigation. To solve the interoperability problems between different EO data systems, this paper introduces an advanced architecture for a distributed geospatial data infrastructure that addresses the complexity of distributed, heterogeneous EO data integration and on-demand processing. The concept and architecture of a geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules, including EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework implements interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques. Several distributed EO data archives were used for testing, with flood and earthquake response serving as two use-case scenarios for distributed EO data integration and interoperability.
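
    The unified-interface idea is easy to make concrete. The sketch below is a speculative rendering rather than the GDSG code: each heterogeneous archive is wrapped in an adaptor exposing one harmonized query/fetch interface, with a trivial cache in front. All class and method names are hypothetical.

      # Hedged sketch of a gateway over heterogeneous EO data sources.
      from abc import ABC, abstractmethod

      class DataSourceAdaptor(ABC):
          """One adaptor per heterogeneous archive; the gateway sees only this."""
          @abstractmethod
          def query(self, bbox, start, end):
              """Return harmonized metadata records for a space/time window."""
          @abstractmethod
          def fetch(self, record_id):
              """Retrieve the granule behind a harmonized record."""

      class Gateway:
          def __init__(self, adaptors):
              self.adaptors = adaptors
              self.cache = {}  # simple stand-in for the EO data cache

          def query(self, bbox, start, end):
              # Fan the unified query out to every registered source.
              return [r for a in self.adaptors for r in a.query(bbox, start, end)]

          def fetch(self, adaptor, record_id):
              if record_id not in self.cache:
                  self.cache[record_id] = adaptor.fetch(record_id)
              return self.cache[record_id]

      class InMemoryAdaptor(DataSourceAdaptor):
          """Toy archive used only to exercise the interface."""
          def query(self, bbox, start, end):
              return [{"id": "granule-1", "source": "toy", "bbox": bbox}]
          def fetch(self, record_id):
              return b"\x00" * 16  # stand-in for granule bytes

      gw = Gateway([InMemoryAdaptor()])
      print(gw.query((70.0, 20.0, 80.0, 30.0), "2014-01-01", "2014-01-31"))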

  13. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  14. A Role for Semantic Web Technologies in Patient Record Data Collection

    NASA Astrophysics Data System (ADS)

    Ogbuji, Chimezie

    Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.

  15. Integrated Distributed Directory Service for KSC

    NASA Technical Reports Server (NTRS)

    Ghansah, Isaac

    1997-01-01

    This paper describes an integrated distributed directory services (DDS) architecture as a fundamental component of KSC distributed computing systems. Specifically, an architecture for an integrated directory service based on DNS and X.500/LDAP is suggested. The architecture supports using DNS in its traditional role as a name service and X.500 for other services. Specific designs were made for the integration of X.500 DDS for public key certificates, Kerberos security services, network-wide login, electronic mail, WWW URLs, servers, and other diverse network objects. Issues involved in incorporating the emerging Microsoft Active Directory Service (MADS) into KSC's X.500 were also discussed.
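
    The division of labor described here, DNS for host names and X.500/LDAP for richer directory objects, can be illustrated in a few lines. This is a minimal sketch assuming the ldap3 package; the directory host name and search base are placeholders, so a reachable directory server is needed for it to run end to end.

      # Hedged sketch: DNS for host naming, LDAP for directory objects.
      import socket
      from ldap3 import Server, Connection, ALL

      # Traditional DNS role: resolve a service name to an address.
      addr = socket.gethostbyname("example.com")  # placeholder name

      # X.500/LDAP role: look up a person plus attributes such as mail
      # and public key certificates.
      server = Server("ldap.directory.example", get_info=ALL)  # placeholder host
      with Connection(server, auto_bind=True) as conn:         # anonymous bind
          conn.search(
              search_base="ou=People,o=Example",               # placeholder base
              search_filter="(cn=Jane Doe)",
              attributes=["mail", "userCertificate"],
          )
          for entry in conn.entries:
              print(entry.entry_dn, entry.mail)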

  16. A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.

    DTIC Science & Technology

    1980-01-01

    Figure 7-1: MUFFIN logical architecture. Cited: [Kimbleton 79] Kimbleton, Stephen; Wang, Pearl; and Fong, Elizabeth. XNDM: An Experimental Network...

  17. Alternative Architectures for Distributed Cooperative Problem-Solving in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.; Billings, Charles; McCoy, C. Elaine; Orasanu, Judith

    1999-01-01

    The air traffic management system in the United States is an example of a distributed problem solving system. It has elements of both cooperative and competitive problem-solving. This system includes complex organizations such as Airline Operations Centers (AOCs), the FAA Air Traffic Control Systems Command Center (ATCSCC), and traffic management units (TMUs) at enroute centers and TRACONs, all of which have a major focus on strategic decision-making. It also includes individuals concerned more with tactical decisions (such as air traffic controllers and pilots). The architecture for this system has evolved over time to rely heavily on the distribution of tasks and control authority in order to keep cognitive complexity manageable for any one individual operator, and to provide redundancy (both human and technological) to serve as a safety net to catch the slips or mistakes that any one person or entity might make. Currently, major changes are being considered for this architecture, especially with respect to the locus of control, in an effort to improve efficiency and safety. This paper uses a series of case studies to help evaluate some of these changes from the perspective of system complexity, and to point out possible alternative approaches that might be taken to improve system performance. The paper illustrates the need to maintain a clear understanding of what is required to assure a high level of performance when alternative system architectures and decompositions are developed.

  18. Alternative electrical distribution system architectures for automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afridi, K.K.; Tabors, R.D.; Kassakian, J.G.

    At present most automobiles use a 12 V electrical system with point-to-point wiring. The capability of this architecture in meeting the needs of future electrical loads is questionable. Furthermore, with the development of electric vehicles (EVs) there is a greater need for a better architecture. In this paper the authors outline the limitations of the conventional architecture and identify alternatives. They also present a multi-attribute trade-off methodology which compares these alternatives and identifies a set of Pareto optimal architectures. The system attributes traded off are cost, weight, losses and probability of failure. These are calculated by a computer program that has built-in component attribute models. System attributes of a few dozen architectures are also reported and the results analyzed. 17 refs.
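
    The Pareto step of such a trade-off is simple to state precisely: an architecture is kept only if no other architecture is at least as good on every attribute and strictly better on at least one. A worked toy example follows; the architecture names and attribute values are invented, not taken from the paper.

      # Hedged example: find the Pareto optimal set over four minimized
      # attributes (cost, weight, losses, probability of failure).
      architectures = {
          "12V point-to-point": (100, 12.0, 180, 0.020),
          "42V single bus":     (120,  9.5, 140, 0.015),
          "dual-voltage bus":   (140,  9.0, 120, 0.012),
          "ring network":       (160, 10.5, 130, 0.008),
          "heavy dual bus":     (150, 10.0, 150, 0.015),
      }

      def dominates(a, b):
          """a dominates b if it is no worse everywhere and better somewhere."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      pareto = [
          name for name, attrs in architectures.items()
          if not any(dominates(other, attrs)
                     for o_name, other in architectures.items() if o_name != name)
      ]
      print(pareto)  # "heavy dual bus" is dominated and drops out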

  19. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control systems of astronomical projects, because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  20. A Down-to-Earth Educational Operating System for Up-in-the-Cloud Many-Core Architectures

    ERIC Educational Resources Information Center

    Ziwisky, Michael; Persohn, Kyle; Brylow, Dennis

    2013-01-01

    We present "Xipx," the first port of a major educational operating system to a processor in the emerging class of many-core architectures. Through extensions to the proven Embedded Xinu operating system, Xipx gives students hands-on experience with system programming in a distributed message-passing environment. We expose the software primitives…

  1. Numerical Propulsion System Simulation Architecture

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia G.

    2004-01-01

    The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.

  2. System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures

    NASA Technical Reports Server (NTRS)

    Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger

    2007-01-01

    This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and as a realistic test bed for Exploration automation technology research and development.

  3. Architectures for mission control at the Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Davidson, Reger A.; Murphy, Susan C.

    1992-01-01

    JPL is currently converting to an innovative control center data system, a distributed, open architecture for telemetry delivery that enables improved automation and operability, as well as the insertion of new technology, in mission operations at JPL. The scope of mission control within mission operations is examined. The concepts of a mission control center and how operability can affect the design of a control center data system are discussed. Examples of JPL's mission control architecture, data system development, and prototype efforts at the JPL Operations Engineering Laboratory are provided. Strategies for the future of mission control architectures are outlined.

  4. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network such as Fibre Channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility such that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies, whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
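
    The device-held locking idea is the heart of the design and is easy to mimic in software. Below is a minimal sketch with an in-memory object standing in for a network-attached disk that exports per-block locks; a real device would implement the lock as a test-and-set operation on the drive itself.

      # Hedged sketch: atomic read-modify-write against device-held locks.
      import threading

      class SharedDevice:
          """Stand-in for a shared storage device exporting per-block locks."""
          def __init__(self):
              self.blocks = {}
              self._locks = {}

          def lock(self, block):
              # On real hardware this would be a lock maintained by the device.
              return self._locks.setdefault(block, threading.Lock())

      def read_modify_write(device, block, update):
          """What a GFS-style node does: lock on the device, not on a server."""
          with device.lock(block):
              data = device.blocks.get(block, b"")
              device.blocks[block] = update(data)

      dev = SharedDevice()
      threads = [threading.Thread(target=read_modify_write,
                                  args=(dev, 0, lambda d: d + b"x"))
                 for _ in range(8)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(len(dev.blocks[0]))  # 8: every concurrent append survived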

  5. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  6. Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded

    NASA Technical Reports Server (NTRS)

    Culley, Dennis

    2010-01-01

    Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine; a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment using a digital communications network and engine-mounted high temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.

  7. An Environment for Incremental Development of Distributed Extensible Asynchronous Real-time Systems

    NASA Technical Reports Server (NTRS)

    Ames, Charles K.; Burleigh, Scott; Briggs, Hugh C.; Auernheimer, Brent

    1996-01-01

    Incremental parallel development of distributed real-time systems is difficult. Architectural techniques and software tools developed at the Jet Propulsion Laboratory's (JPL's) Flight System Testbed make feasible the integration of complex systems in various stages of development.

  8. Design and Analysis of Architectures for Structural Health Monitoring Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)

    2002-01-01

    During the two-year project period, we have worked on several aspects of health usage and monitoring systems (HUMS) for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level reference architecture for health usage and monitoring systems, compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform, in both a centralized and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system found to be suitable for implementing HUMS. For this reason, we conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.

  9. A distributed and intelligent system approach for the automatic inspection of steam-generator tubes in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Kang, Soon Ju; Moon, Jae Chul; Choi, Doo-Hyun; Choi, Sung Su; Woo, Hee Gon

    1998-06-01

    The inspection of steam-generator (SG) tubes in a nuclear power plant (NPP) is a time-consuming, laborious, and hazardous task because of several hard constraints such as a highly radiated working environment, a tight task schedule, and the need for many experienced human inspectors. This paper presents a new distributed intelligent system architecture for automating traditional inspection methods. The proposed architecture adopts three basic technical strategies in order to reduce the complexity of system implementation. The first is the distributed allocation of tasks into four stages: inspection planning (IP), signal acquisition (SA), signal evaluation (SE), and inspection data management (IDM). Consequently, dedicated subsystems for the automation of each stage can be designed and implemented separately. The second strategy is the inclusion of several useful artificial intelligence techniques for implementing the subsystems of each stage, such as an expert system for IP and SE and machine vision and remote robot control techniques for SA. The third strategy is the integration of the subsystems using a client/server-based distributed computing architecture and a centralized database management concept. Through the use of the proposed architecture, human errors that can occur during inspection are minimized, since the element of human intervention is almost eliminated, while the productivity of the human inspector is increased. A prototype of the proposed system has been developed and successfully tested over the last six years in domestic NPPs.

  10. Distributed Architecture for the Object-Oriented Method for Interoperability

    DTIC Science & Technology

    2003-03-01

    Collaborative Environment; Figure V-2: Distributed OOMI and the Collaboration Centric Paradigm. Systems are formed into a system federation to resolve differences in modeling, supported by an OOMI Integrated Development Environment (OOMI IDE). The space of possible distributed systems is partitioned into User Centric systems, Processing/Storage Centric systems, Implementation...

  11. Framework for the Parametric System Modeling of Space Exploration Architectures

    NASA Technical Reports Server (NTRS)

    Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II

    2008-01-01

    This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology's advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.

  12. Human Factors Assessment of the UH-60M Common Avionics Architecture System (CAAS) Crew Station During the Limited User Evaluation (LEUE)

    DTIC Science & Technology

    2005-12-01

    ...weapon system evaluation as a high-level architecture and distributed interactive simulation compliant, human-in-the-loop, virtual environment... Directorate to participate in the Limited Early User Evaluation (LEUE) of the Common Avionics Architecture System (CAAS) cockpit. ARL conducted a human... To evaluate the integration of the CAAS in the UH-60M crew station, the UH-60M PO conducted a limited early user evaluation (LEUE)...

  13. A novel software architecture for the provision of context-aware semantic transport information.

    PubMed

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D

    2015-05-26

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user's context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system.
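
    Publishing transport data as semantic information makes context-aware queries straightforward. The sketch below is illustrative only: the vocabulary stands in for the paper's Multimodal Transport Ontology (MTO), the triples are invented, and rdflib is assumed as the RDF library.

      # Hedged sketch: semantic transport data queried with SPARQL.
      from rdflib import Graph, Literal, Namespace, RDF

      MTO = Namespace("http://example.org/mto#")  # hypothetical namespace

      g = Graph()
      g.add((MTO.line42, RDF.type, MTO.BusLine))
      g.add((MTO.line42, MTO.departsFrom, MTO.CentralStation))
      g.add((MTO.line42, MTO.headwayMinutes, Literal(10)))

      # A context-aware client asks only about services at the user's location.
      results = g.query("""
          PREFIX mto: <http://example.org/mto#>
          SELECT ?line ?headway WHERE {
              ?line a mto:BusLine ;
                    mto:departsFrom mto:CentralStation ;
                    mto:headwayMinutes ?headway .
          }
      """)
      for line, headway in results:
          print(line, headway)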

  14. Centralized and distributed control architectures under Foundation Fieldbus network.

    PubMed

    Persechini, Maria Auxiliadora Muanis; Jota, Fábio Gonçalves

    2013-01-01

    This paper aims at discussing possible automation and control system architectures based on fieldbus networks in which the controllers can be implemented either in a centralized or in a distributed form. An experimental setup is used to demonstrate some of the addressed issues. The control and automation architecture is composed of a supervisory system, a programmable logic controller and various other devices connected to a Foundation Fieldbus H1 network. The procedures used in the network configuration, in the process modelling and in the design and implementation of controllers are described. The specificities of each one of the considered logical organizations are also discussed. Finally, experimental results are analysed using an algorithm for the assessment of control loops to compare the performances between the centralized and the distributed implementations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Alternative Architectures for Distributed Work in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Billings, Charles E.; Chapman, Roger; Obradovich, Heintz; McCoy, C. Elaine; Orasanu, Judith

    2000-01-01

    The architecture for the National Airspace System (NAS) in the United States has evolved over time to rely heavily on the distribution of tasks and control authority in order to keep cognitive complexity manageable for any one individual. This paper characterizes a number of different subsystems that have been recently incorporated in the NAS. The goal of this discussion is to begin to identify the critical parameters defining the differences among alternative architectures in terms of the locus of control and in terms of access to relevant data and knowledge. At an abstract level, this analysis can be described as an effort to describe alternative "rules of the game" for the NAS.

  16. Software architecture for a distributed real-time system in Ada, with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Olsen, Douglas R.; Messiora, Steve; Leake, Stephen

    1992-01-01

    The architectural structure and software design methodology presented here are described in the context of a telerobotic application in Ada, specifically the Engineering Test Bed (ETB), which was developed to support the Flight Telerobotic Servicer (FTS) Program at GSFC. However, the nature of the architecture is such that it has applications to any multiprocessor distributed real-time system. The ETB architecture, which is a derivation of the NASA/NBS Standard Reference Model (NASREM), defines a hierarchy for representing a telerobot system. Within this hierarchy, a module is a logical entity consisting of the software associated with a set of related hardware components in the robot system. A module is comprised of submodules, which are cyclically executing processes that each perform a specific set of functions. The submodules in a module can run on separate processors. The submodules in the system communicate via command/status (C/S) interface channels, which are used to send commands down and relay status back up the system hierarchy. Submodules also communicate via setpoint data links, which are used to transfer control data from one submodule to another. A submodule invokes submodule algorithms (SMAs) to perform algorithmic operations. Data that describe or model a physical component of the system are stored as objects in the World Model (WM). The WM is a system-wide distributed database that is accessible to submodules in all modules of the system for creating, reading, and writing objects.
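
    The command/status pattern described here translates naturally into queue-based channels. The sketch below is a loose Python analogue, not the ETB's Ada code: commands flow down to a cyclically executing submodule and status is relayed back up, with all names invented for illustration.

      # Hedged analogue of command/status (C/S) interface channels.
      import queue
      import threading

      class CSChannel:
          """One command-down / status-up link between parent and child."""
          def __init__(self):
              self.command = queue.Queue()
              self.status = queue.Queue()

      def servo_submodule(ch, cycles):
          for _ in range(cycles):            # cyclically executing process
              cmd = ch.command.get()         # wait for the next command
              ch.status.put(f"done: {cmd}")  # relay status back up the hierarchy

      ch = CSChannel()
      worker = threading.Thread(target=servo_submodule, args=(ch, 3))
      worker.start()
      for setpoint in ("move 10deg", "move 5deg", "hold"):
          ch.command.put(setpoint)
          print(ch.status.get())
      worker.join()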

  17. Energy management and control of active distribution systems

    NASA Astrophysics Data System (ADS)

    Shariatzadeh, Farshid

    Advancements in communication, control, computation and information technologies have driven the transition to the next generation of active power distribution systems. Novel control techniques and management strategies are required to achieve an efficient, economic and reliable grid. The focus of this work is energy management and control of active distribution systems (ADS) with integrated renewable energy sources (RESs) and demand response (DR). Here, ADS means an automated distribution system with remotely operated controllers and distributed energy resources (DERs). DERs, as active parts of the next-generation distribution system, include distributed generation (DG), RESs, energy storage systems (ESS), plug-in hybrid electric vehicles (PHEVs) and DR. Integration of DR and RESs into ADS is critical to realize the vision of sustainability. The objective of this dissertation is the development of a management architecture to control and operate ADS in the presence of DR and RES. One of the most challenging issues in operating ADS is the inherent uncertainty of DR and RES as well as the conflicting objectives of DERs and electric utilities. An ADS can consist of different layers, such as a system layer and a building layer, and coordination between these layers is essential. To address these challenges, a multi-layer energy management and control architecture with robust algorithms is proposed in this work. The first layer of the proposed architecture is implemented at the system level: a developed AC optimal power flow (AC-OPF) generates a fair price for all DR and non-DR loads, which is used as a control signal for the second layer. The second layer controls DR loads in buildings using a look-ahead robust controller. A load aggregator collects information from all buildings and sends the aggregated load to the system optimizer. Because the two management layers operate on different time scales, a time coordination scheme is developed. Robust and deterministic controllers are developed to maximize the local use of energy from rooftop photovoltaic (PV) generation and to minimize heating, ventilation and air conditioning (HVAC) consumption while maintaining the inside temperature within the comfort zone. The performance of the developed multi-layer architecture is analyzed through test case studies, and the results show the robustness of the developed controller in the presence of uncertainty.

  18. New architectural paradigms for multi-petabyte distributed storage systems

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1994-01-01

    In the not too distant future, programs such as NASA's Earth Observing System, NSF/ARPA/NASA's Digital Libraries Initiative and the Intelligence Community's (NSA, CIA, NRO, etc.) mass storage system upgrades will all require multi-petabyte (petabyte: 10^15 bytes of bitfile data) (or larger) distributed storage solutions. None of these requirements, as currently defined, will meet their objectives utilizing either today's architectural paradigms or storage solutions. Radically new approaches will be required to not only store and manage veritable 'mountain ranges of data', but to make the cost of ownership affordable, much less practical in today's (and certainly the future's) austere budget environment! Within this paper we will explore new architectural paradigms and project systems performance benefits and dollars per petabyte of information stored. We will discuss essential 'top down' approaches to achieving an overall systems level performance capability sufficient to meet the challenges of these major programs.

  19. Automatic Management of Parallel and Distributed System Resources

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  20. Advanced flight control system study

    NASA Technical Reports Server (NTRS)

    Mcgough, J.; Moses, K.; Klafin, J. F.

    1982-01-01

    The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 failures per hour of flight and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. Test bed and flight evaluation programs are proposed.

  1. Quantum key distribution network for multiple applications

    NASA Astrophysics Data System (ADS)

    Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.

    2017-09-01

    The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network, with enhanced universal interfaces for smooth key sharing between any two nodes and enabling multiple secure communication applications, are proposed. The proposed architecture consists of three layers: a quantum layer, a key management layer and a key supply layer. We explain the functions of each layer, the key formats in each layer and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.
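
    The three-layer split can be sketched as a simple key pipeline: the quantum layer produces raw key material, the key management layer pools and formats it, and the key supply layer hands sized, single-use keys to applications such as a QKD-AES encryptor. Everything below is an illustrative assumption about the interfaces, not the paper's implementation.

      # Hedged sketch of the quantum / key management / key supply layers.
      import secrets

      class QuantumLayer:
          def generate_raw_key(self, n_bytes):
              # Stand-in for sifted, error-corrected, privacy-amplified bits.
              return secrets.token_bytes(n_bytes)

      class KeyManagementLayer:
          def __init__(self, quantum):
              self.quantum = quantum
              self.pool = bytearray()

          def replenish(self, n_bytes):
              self.pool += self.quantum.generate_raw_key(n_bytes)

      class KeySupplyLayer:
          def __init__(self, km):
              self.km = km

          def get_key(self, n_bytes):
              """Hand an application a one-time key, consumed from the pool."""
              if len(self.km.pool) < n_bytes:
                  self.km.replenish(n_bytes)
              key = bytes(self.km.pool[:n_bytes])
              self.km.pool = self.km.pool[n_bytes:]
              return key

      supply = KeySupplyLayer(KeyManagementLayer(QuantumLayer()))
      aes_key = supply.get_key(32)  # e.g. a 256-bit key for one AES session
      print(len(aes_key))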

  2. Developing an Integration Infrastructure for Distributed Engine Control Technologies

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan

    2014-01-01

    Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.

  3. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Surveillance and automatic target recognition (ATR) applications are increasing as the cost of the computing power needed to process massive amounts of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g., telescopes, precise optics, cameras, image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost content-addressable memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over a RAM-based (random access memory) search architecture for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described, and other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
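
    The order-of-magnitude claim is easy to appreciate in software, where a hash table plays the role of the CAM's parallel match against all stored words and a list scan plays the role of a word-by-word RAM search. The comparison below is only an analogy with invented data, not the paper's FPGA measurements.

      # Hedged analogy: associative (CAM-like) lookup vs. sequential RAM scan.
      import random
      import time

      N = 50_000
      patterns = [random.getrandbits(64) for _ in range(N)]
      cam_like = {p: i for i, p in enumerate(patterns)}  # one-probe match
      probes = random.sample(patterns, 500)

      t0 = time.perf_counter()
      hits_ram = [patterns.index(p) for p in probes]     # word-by-word scan
      t1 = time.perf_counter()
      hits_cam = [cam_like[p] for p in probes]           # CAM-style match
      t2 = time.perf_counter()

      assert hits_ram == hits_cam
      print(f"sequential scan: {t1 - t0:.3f}s, associative: {t2 - t1:.6f}s")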

  4. Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2010-01-01

    This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
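
    A toy version of such a Monte Carlo assessment is shown below: sample the high-impact mission variables, compute a yearly water balance, and report the probability that no resupply is needed. Every distribution and coefficient is an invented placeholder, not a Scenario 12 value.

      # Hedged sketch of a Monte Carlo water-balance assessment.
      import random

      def water_balance_kg():
          scavenged = random.uniform(0, 500)         # residual lander O2/H2, kg
          away_frac = random.uniform(0.1, 0.5)       # crew time away on excursions
          eva_hours = random.gauss(1000, 200)        # total EVA hours per year
          leakage = random.gauss(50, 15)             # habitat leakage, kg/yr
          produced = 0.8 * scavenged                 # water from propellant scavenging
          consumed = 900 * (1 - away_frac) + 0.5 * eva_hours + leakage
          recovered = 0.9 * consumed                 # loop-closure level
          return produced + recovered - consumed

      trials = [water_balance_kg() for _ in range(100_000)]
      p_no_resupply = sum(t >= 0 for t in trials) / len(trials)
      print(f"P(no water resupply needed) ~ {p_no_resupply:.2f}")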

  5. The architecture of a distributed medical dictionary.

    PubMed

    Fowler, J; Buffone, G; Moreau, D

    1995-01-01

    Exploiting high-speed computer networks to provide a national medical information infrastructure is a goal for medical informatics. The Distributed Medical Dictionary under development at Baylor College of Medicine is a model for an architecture that supports collaborative development of a distributed online medical terminology knowledge-base. A prototype is described that illustrates the concept. Issues that must be addressed by such a system include high availability, acceptable response time, support for local idiom, and control of vocabulary.

  6. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and their intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems, as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  7. Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture

    NASA Technical Reports Server (NTRS)

    Fiene, Bruce F.

    1994-01-01

    The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.

  8. Workflow-enabled distributed component-based information architecture for digital medical imaging enterprises.

    PubMed

    Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin

    2003-09-01

    Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer of them provide a coherent architecture that can easily cope with heterogeneity and inevitable local adaptation of applications and can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge in radiology operations. Our design adopts the "4+1" architectural view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be reasonably substituted by applications of local adaptations and can be multiplied for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system would provide powerful query and statistical functions for managing resources and improving productivity. This paper will potentially lead to a new direction of image information management. We illustrate the innovative design with examples taken from an implemented system.

  9. A Network Scheduling Model for Distributed Control Simulation

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot

    2016-01-01

    Distributed engine control is a hardware technology that radically alters the architecture of aircraft engine control systems. Of its own accord, it does not change the function of control; rather, it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward it will be transmitted digitally, in serial fashion, over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
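
    The scheduling problem can be seen in a few lines: each sensor message has its own network period, so on any control-loop tick only some values have arrived within the last interval. The periods and names below are invented for illustration, not drawn from the paper's engine model.

      # Hedged sketch: which networked sensor data is fresh on each loop tick?
      CONTROL_PERIOD_MS = 20
      sensors = {"N1_speed": 10, "T45_temp": 40, "P30_press": 50}  # msg period, ms

      def arrivals(period_ms, horizon_ms):
          """Times at which a periodic message reaches the controller."""
          return set(range(0, horizon_ms, period_ms))

      horizon = 200
      schedule = {name: arrivals(p, horizon) for name, p in sensors.items()}

      for tick in range(0, horizon, CONTROL_PERIOD_MS):
          fresh = [n for n, times in schedule.items()
                   if any(tick - CONTROL_PERIOD_MS < t <= tick for t in times)]
          print(f"t={tick:3d} ms  fresh this interval: {fresh or ['none']}")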

  10. SimBOX: a scalable architecture for aggregate distributed command and control of spaceport and service constellation

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-08-01

    In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that reuses some of the real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The uniqueness of the approach is to provide a "plug and play" environment for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment that scales the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  12. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    2003-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never before thought possible. Full-engine, three-dimensional computational fluid dynamics (CFD) propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).

  13. Remote voice training: A case study on space shuttle applications, appendix C

    NASA Technical Reports Server (NTRS)

    Mollakarimi, Cindy; Hamid, Tamin

    1990-01-01

    The Tile Automation System includes applications of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. An integrated set of rapid prototyping testbeds was developed which include speech recognition and synthesis, laser imaging systems, distributed Ada programming environments, distributed relational data base architectures, distributed computer network architectures, multi-media workbenches, and human factors considerations. Remote voice training in the Tile Automation System is discussed. The user is prompted over a headset by synthesized speech for the training sequences. The voice recognition units and the voice output units are remote from the user and are connected by Ethernet to the main computer system. A supervisory channel is used to monitor the training sequences. Discussions include the training approaches as well as the human factors problems and solutions for this system utilizing remote training techniques.

  14. Middleware Trade Study for NASA Domain

    NASA Technical Reports Server (NTRS)

    Bowman, Dan

    2007-01-01

    This presentation presents preliminary results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and the Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are: the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  15. NASA Constellation Distributed Simulation Middleware Trade Study

    NASA Technical Reports Server (NTRS)

    Hasan, David; Bowman, James D.; Fisher, Nancy; Cutts, Dannie; Cures, Edwin Z.

    2008-01-01

    This paper presents the results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  16. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design, which utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  17. Network Theory: A Primer and Questions for Air Transportation Systems Applications

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.

    2004-01-01

    A new understanding (with potential applications to air transportation systems) has emerged in the past five years in the scientific field of networks. This development emerges in large part because we now have a new laboratory for developing theories about complex networks: the Internet. The premise of this new understanding is that most complex networks of interest, both of nature and of human contrivance, exhibit a fundamentally different behavior than was thought for over two hundred years under classical graph theory. Classical theory held that networks exhibited random behavior, characterized by normal (e.g., Gaussian or Poisson) degree distributions of the connectivity between nodes by links. The new understanding turns this idea on its head: networks of interest exhibit scale-free (or small-world) degree distributions of connectivity, characterized by power-law distributions. The implications of scale-free behavior for air transportation systems include the potential that some behaviors of complex system architectures might be analyzed through relatively simple approximations of local elements of the system. For air transportation applications, this presentation proposes a framework for constructing topologies (architectures) that represent the relationships between mobility, flight operations, aircraft requirements, and airspace capacity, and the related externalities in airspace procedures and architectures. The proposed architectures or topologies may serve as a framework for posing comparative and combinative analyses of performance, cost, security, environmental, and related metrics.
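
    The contrast between the two regimes is easy to reproduce. The short experiment below, assuming the networkx package and arbitrary sizes, compares a classical random graph with a preferential-attachment graph; the scale-free one develops hubs whose degree sits far above the mean, the signature of a power-law distribution P(k) ~ k^(-gamma).

      # Hedged demonstration: random vs. scale-free degree distributions.
      import networkx as nx

      n = 5000
      random_net = nx.gnp_random_graph(n, p=6 / n, seed=1)   # Poisson-like degrees
      scale_free = nx.barabasi_albert_graph(n, m=3, seed=1)  # preferential attachment

      for name, g in [("random", random_net), ("scale-free", scale_free)]:
          degrees = [d for _, d in g.degree()]
          print(f"{name:10s} max degree {max(degrees):4d}  "
                f"mean {sum(degrees) / n:.1f}")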

  18. NASA's NPOESS Preparatory Project Science Data Segment: A Framework for Measurement-based Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew R.; Schweiss, Robert J.

    2007-01-01

    The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) provides a framework for the future of NASA's distributed Earth science data systems. The NPP SDS performs research and data product assessment while using a fully distributed architecture. The components of this architecture are organized around key environmental data disciplines: land, ocean, ozone, atmospheric sounding, and atmospheric composition. The SDS thus establishes a set of concepts and working prototypes. This paper describes the framework used by the NPP Project as it enabled measurement-based Earth science data systems for the assessment of NPP products.

  19. GEARS: An Enterprise Architecture Based On Common Ground Services

    NASA Astrophysics Data System (ADS)

    Petersen, S.

    2014-12-01

    Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA, the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continue to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.

  20. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next-generation military aircraft place high performance demands on the airborne management system. General-purpose modules, data integration, high-speed data buses, and the like are needed to share and manage subsystem information efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system, and so on. The stand-alone or mixed architecture is giving way to an integrated architecture, meaning the whole airborne system is managed as one system: the physical devices are distributed, but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared, which also lays a foundation for shared power. A distributed vehicle management system is established using a 1553B bus and distributed processors, providing a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses its software and hardware configuration, and analyzes the communication and fault-tolerance methods.

  1. Considerations and Architectures for Inter-Satellite Communications in Distributed Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Edwards, Bernard; Horne, William; Israel, David; Kwadrat, Carl; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    This paper will identify the important characteristics and requirements necessary for inter-satellite communications in distributed spacecraft systems and present analysis results focusing on architectural and protocol comparisons. Emerging spacecraft systems plan to deploy multiple satellites in various "distributed" configurations ranging from close-proximity formation flying to widely separated constellations. Distributed spacecraft configurations provide advantages for science exploration and operations, since many activities useful for missions may be better served by distributing them between spacecraft. For example, many scientific observations can be enhanced through spatially separated platforms, such as for deep space interferometry. Operating multiple distributed spacecraft as a mission requires coordination that may be best provided through inter-satellite communications. For example, several future distributed spacecraft systems envision autonomous operations requiring relative navigational calculations and coordinated attitude and position corrections. To conduct these operations, data must be exchanged between spacecraft. Direct cross-links between satellites provide an efficient and practical method for transferring data and commands. Unlike existing "bent-pipe" relay networks supporting space missions, no standard or widely used method exists for cross-link communications. Consequently, to support these future missions, the characteristics necessary for inter-satellite communications need to be examined. At first glance, the missions look extremely different. Some call for tens to hundreds of nano-satellites in constant communication in close proximity to each other; others call for a handful of satellites communicating very slowly over thousands to hundreds of thousands of kilometers. The paper will first classify distributed spacecraft missions to help guide the evaluation and definition of cross-link architectures and approaches. Based on this general classification, the paper will examine general physical layer parameters, such as frequency bands and data rates, necessary to support the missions. The paper will also identify classes of communication architectures that may be employed, ranging from fully distributed to centralized topologies. Numerous factors, such as the number of spacecraft, must be evaluated when attempting to pick a communications architecture. Also important is the stability of the formation from a communications standpoint: for example, do all of the spacecraft require equal bandwidth, and are spacecraft allowed to enter and leave a formation? The type of science mission being attempted may also heavily influence the communications architecture. In addition, the paper will assess various parameters and characteristics typically associated with the data link layer. The paper will analyze the performance of various multiple access techniques given the operational scenario, requirements, and communication topologies envisioned for missions. This assessment will also include a survey of existing standards and their applicability to distributed spacecraft systems. An important consideration is the interoperability of the lower layers (physical and data link) examined in this paper with the higher-layer protocols (network) envisioned for future space internetworking. Finally, the paper will define a suggested path, including preliminary recommendations, for defining and developing a standard for inter-satellite communications based on the classes of distributed spacecraft missions and analysis results.
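    One trade named in this record, the number of spacecraft versus topology, can be made concrete with simple link arithmetic. The sketch below is an illustration added here (not from the paper): it counts the cross-links each candidate topology requires as a formation grows, which is one reason fully connected meshes become expensive for large constellations.

```python
# Illustrative sketch: cross-link counts for candidate constellation
# topologies. A centralized (star) topology needs n-1 links to a hub, a
# ring needs n, and a full mesh needs n*(n-1)/2. Link count drives
# transceiver mass, power, and multiple-access complexity.

def link_counts(n: int) -> dict[str, int]:
    return {
        "star (centralized)": n - 1,
        "ring": n,
        "full mesh (fully distributed)": n * (n - 1) // 2,
    }

for n in (4, 10, 100):
    print(f"n={n:4d}", link_counts(n))
# For 100 nano-satellites a full mesh needs 4950 links, which is why
# clustered or hub-based architectures become attractive at large n.
```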

  2. Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture (Postprint)

    DTIC Science & Technology

    2007-09-18

    Subject terms: turbine engine control, engine health management, FADEC, Universal FADEC (UF), UF Platform, common FADEC, generic FADEC, modular FADEC, distributed controls, adaptive control. Abstract excerpt: "... Eventually the Full Authority Digital Electronic Control (FADEC) became the norm. Presently, this control system architecture accounts for 15 to 20% of ..."

  3. Advanced and secure architectural EHR approaches.

    PubMed

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record are advancing towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model-driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business processes, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical and other aspects. Thus, security-related component models reflecting all the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems according to the different viewpoints on their components, different levels of granularity can be defined, hiding internals or focusing on the properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (D-MIMs) and their building blocks, Common Message Element Types (CMETs), which constrain models to their underlying concepts. The future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks, has to follow advanced architectural paradigms.

  4. Automation of Shuttle Tile Inspection - Engineering methodology for Space Station

    NASA Technical Reports Server (NTRS)

    Wiskerchen, M. J.; Mollakarimi, C.

    1987-01-01

    The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.

  5. Study on the E-commerce platform based on the agent

    NASA Astrophysics Data System (ADS)

    Fu, Ruixue; Qin, Lishuan; Gao, Yinmin

    2011-10-01

    To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform based on agents and ontology is introduced; it includes three major types of agents, an ontology, and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and the agility of the electronic commerce platform. To illustrate the architecture, a simulation was performed, and the results imply that the architecture provides a very efficient way to design and implement a flexible, distributed, open, and intelligent electronic commerce platform that solves the problem of dynamic integration in e-commerce. The objective of this paper is to describe the architecture of the electronic commerce platform and how agents and ontology support it.

  6. Towards scalable Byzantine fault-tolerant replication

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
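    The quorum arithmetic underlying BFT replication helps explain the scalability problem: adding replicas adds agreement work rather than throughput. The sketch below shows the standard 3f+1 sizing rule together with a toy round-robin dispatcher over replica groups; it is generic BFT arithmetic plus an assumed dispatch scheme added here, not Zbierski's actual protocol.

```python
# Generic BFT sizing rules plus a toy load balancer over replica groups.
# Illustration only; the article's protocol and balancer are not shown here.
from itertools import cycle

def replicas_needed(f: int) -> int:
    """Minimum replicas required to tolerate f Byzantine faults."""
    return 3 * f + 1

def quorum_size(f: int) -> int:
    """Matching replies a client needs before accepting a result."""
    return 2 * f + 1

class LoadBalancer:
    """Dispatches independent requests across replica groups in turn."""
    def __init__(self, groups):
        self._next = cycle(groups)
    def dispatch(self, request):
        return next(self._next), request

f = 1
groups = [f"group-{i}" for i in range(4)]        # each group holds 3f+1 = 4 replicas
lb = LoadBalancer(groups)
print(replicas_needed(f), quorum_size(f))        # 4 3
print([lb.dispatch(r)[0] for r in range(6)])     # round-robin over the groups
```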

  7. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well under way toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept to structure software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  8. On-board B-ISDN fast packet switching architectures. Phase 2: Development. Proof-of-concept architecture definition report

    NASA Technical Reports Server (NTRS)

    Shyy, Dong-Jye; Redman, Wayne

    1993-01-01

    For the next-generation packet switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, together with the large demand for video conferencing, distributed data processing, and network management, makes the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures which have become commercially available as large-scale integration (LSI) devices.

  9. Linking and Combining Distributed Operations Facilities using NASA's "GMSEC" Systems Architectures

    NASA Technical Reports Server (NTRS)

    Smith, Danford; Grubb, Thomas; Esper, Jaime

    2008-01-01

    NASA's Goddard Mission Services Evolution Center (GMSEC) ground system architecture has been in development since late 2001, has successfully supported eight orbiting satellites and is being applied to many of NASA's future missions. GMSEC can be considered an event-driven service-oriented architecture built around a publish/subscribe message bus middleware. This paper briefly discusses the GMSEC technical approaches which have led to significant cost savings and risk reduction for NASA missions operated at the Goddard Space Flight Center (GSFC). The paper then focuses on the development and operational impacts of extending the architecture across multiple mission operations facilities.
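    A publish/subscribe message bus decouples mission components: publishers and subscribers share only a subject name, never direct references, which is what makes components swappable across facilities. The minimal sketch below illustrates the pattern only; the class and subject names are invented here and do not reflect GMSEC's actual middleware API.

```python
# Minimal publish/subscribe bus sketch. Real middleware adds queuing,
# wildcards, and transport plugins; this shows only the decoupling idea.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, subject: str, handler: Callable) -> None:
        self._subscribers[subject].append(handler)

    def publish(self, subject: str, message: dict) -> None:
        for handler in self._subscribers[subject]:
            handler(message)   # components never reference each other directly

bus = MessageBus()
bus.subscribe("MISSION.TELEMETRY", lambda m: print("archiver got", m))   # subject name is hypothetical
bus.subscribe("MISSION.TELEMETRY", lambda m: print("display got", m))
bus.publish("MISSION.TELEMETRY", {"sat": "demo", "batt_v": 28.1})
```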

  10. A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System

    ERIC Educational Resources Information Center

    Chim, Hung; Deng, Xiaotie

    2008-01-01

    We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…

  11. A Ground Systems Architecture Transition for a Distributed Operations System

    NASA Technical Reports Server (NTRS)

    Sellers, Donna; Pitts, Lee; Bryant, Barry

    2003-01-01

    The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users and those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as a migration of products caused the need to reevaluate the allocation of functions across the architecture. The result is a set of innovative ground system solutions that are scalable, supporting facilities of wide-ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, data transmission, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible Remote Ground System options that can be tailored to the needs of each user. This paper offers a description of the efficiency and effectiveness of the ground systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.

  12. A Novel Software Architecture for the Provision of Context-Aware Semantic Transport Information

    PubMed Central

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D.

    2015-01-01

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user’s context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system. PMID:26016915

  13. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
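    The causality notion discussed here can be illustrated with vector clocks, the standard mechanism for deciding whether a multicast is deliverable in causal order. The sketch below is a textbook illustration added here, not necessarily how Isis implements its primitives.

```python
# Vector-clock causal delivery sketch: a message is deliverable at a
# receiver when it is the next message from its sender and the receiver
# has already delivered everything the sender had seen when it sent it.

def deliverable(msg_vc: list[int], sender: int, local_vc: list[int]) -> bool:
    if msg_vc[sender] != local_vc[sender] + 1:       # next-in-order from sender?
        return False
    return all(msg_vc[k] <= local_vc[k]              # no causal gaps elsewhere
               for k in range(len(msg_vc)) if k != sender)

local = [2, 0, 1]                        # what this process has delivered so far
print(deliverable([3, 0, 1], 0, local))  # True: next from sender 0, no gaps
print(deliverable([3, 1, 1], 1, local))  # False: depends on an undelivered message
```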

  14. Hypercluster Parallel Processor

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela

    1992-01-01

    Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.

  15. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond

    2001-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a COTS-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  16. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond; Schkolnik, Gerald (Technical Monitor)

    1998-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a commercial off-the-shelf (COTS)-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  17. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is unusual in that its implementation gives applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
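    In a network striped disk array, each object's blocks are spread across servers so that sequential reads draw bandwidth from all of them in parallel. The sketch below shows the simplest round-robin layout; it is a generic illustration added here, since the abstract notes that the DPSS actually leaves layout policy to the application.

```python
# Generic round-robin striping sketch: block i of an object lands on
# server i mod N. Server names and the stripe unit are made up here.

def block_location(block_index: int, servers: list[str],
                   stripe_unit: int = 64 * 1024) -> tuple[str, int]:
    """Return (server, byte offset on that server) for a logical block."""
    server = servers[block_index % len(servers)]
    local_block = block_index // len(servers)
    return server, local_block * stripe_unit

servers = ["dpss-a", "dpss-b", "dpss-c", "dpss-d"]
for i in range(6):
    print(f"block {i} -> {block_location(i, servers)}")
# Sequential reads touch all four servers in turn, aggregating bandwidth.
```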

  18. Artificial intelligent e-learning architecture

    NASA Astrophysics Data System (ADS)

    Alharbi, Mafawez; Jemmali, Mahdi

    2017-03-01

    Many institutions and universities have turned to e-learning because of its ability to provide additional and flexible solutions for students and researchers. In the last decade, e-learning has brought about extensive changes in the delivery of education, allowing learners to access multimedia course material at any time, from anywhere, to suit their specific needs. In e-learning, instructors and learners are in different places; they do not meet in a classroom environment but in a virtual one. Many researchers have defined e-learning in terms of their own objectives, yet only a small number of e-learning architectures have been proposed in the literature, and those proposed lack an embedded intelligent system. This research argues that unexplored potential remains, as there is scope for e-learning to become an intelligent system. It therefore proposes an e-learning architecture that incorporates an intelligent system, with intelligence components built into the architecture.

  19. Distributed numerical controllers

    NASA Astrophysics Data System (ADS)

    Orban, Peter E.

    2001-12-01

    While the basic principles of Numerical Controllers (NCs) have not changed much over the years, their implementation has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed: module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed; various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.

  20. An Efficient Resource Management System for a Streaming Media Distribution Network

    ERIC Educational Resources Information Center

    Cahill, Adrian J.; Sreenan, Cormac J.

    2006-01-01

    This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…

  1. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics, and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms; the same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.

  2. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

    The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications solves important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, and Multicore seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, along with standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. For achieving these objectives, enviroGRIDS deals with the execution of different Earth Science applications, such as hydrological models and Geospatial Web services standardized by the Open Geospatial Consortium (OGC), on parallel and distributed architectures to maximize the obtained performance. This presentation analyzes the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, through a specialized component, based on application characteristics and user requirements. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services on both Web and Grid infrastructures [2] and the execution of SWAT hydrological models on both Grid and Multicore architectures [3]. The current focus is to integrate the Cloud infrastructure into the proposed platform; Cloud computing is still a paradigm with critical problems to be solved despite great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools to provide the necessary functionalities. The main challenges in Cloud computing, most of them also identified in the Open Cloud Manifesto (2009), concern resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud, and Multicore, with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, necessary performance, cost constraints, etc. Redirecting execution to a selected architecture is realized through a specialized component and offers a flexible way of achieving the best performance under the existing restrictions.

  3. Design of Distributed Engine Control Systems for Stability Under Communication Packet Dropouts

    DTIC Science & Technology

    2009-08-01

    Excerpt: In Distributed Engine Control, the functions of the Full Authority Digital Engine Control (FADEC) are distributed at the component level. Each sensor/actuator is to be replaced by a smart module with diagnostics and health management functionality. A dual-channel digital serial communication network is used to connect these smart modules with the FADEC.

  4. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  5. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
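    The read/write mechanics of sparse distributed memory are compact enough to sketch. The toy below is a simplified illustration with arbitrary sizes, not the project's implementation: writes add signed counts at every hard location within a Hamming-distance radius of the address, and reads sum and threshold those counters, which is what allows recall from partial cues.

```python
# Toy sparse distributed memory: write adds +/-1 to counters at all hard
# locations within Hamming radius r of the address; read sums the counters
# at activated locations and thresholds. Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N_BITS, N_LOCATIONS, RADIUS = 256, 2000, 112

hard_addresses = rng.integers(0, 2, (N_LOCATIONS, N_BITS), dtype=np.int8)
counters = np.zeros((N_LOCATIONS, N_BITS), dtype=np.int32)

def activated(address):
    dists = np.count_nonzero(hard_addresses != address, axis=1)
    return dists <= RADIUS                        # boolean mask of locations

def write(address, data):
    counters[activated(address)] += 2 * data - 1  # +1 for 1-bits, -1 for 0-bits

def read(address):
    sums = counters[activated(address)].sum(axis=0)
    return (sums > 0).astype(np.int8)

pattern = rng.integers(0, 2, N_BITS, dtype=np.int8)
write(pattern, pattern)                           # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1           # corrupt 20 of 256 cue bits
print("bits wrong after recall:", int(np.count_nonzero(read(noisy) != pattern)))
```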

  6. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  7. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  8. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.

  9. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructures, forcing more complex tool development to encompass all the storage architectures used by a project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades that has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to Seagate Kinetic disks utilizing the Kinetic protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application that uses metadata to organize the data into a tree, determines where data is stored, and provides a method of data retrieval. We will show how LVFS's approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system that bridges the underlying hybrid architecture.
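    The loose coupling described above can be pictured as a narrow storage-backend interface behind a shared namespace. The sketch below uses hypothetical class and method names (the abstract does not describe LVFS's real interfaces) to show how POSIX disks and object stores become interchangeable beneath unmodified tools.

```python
# Hypothetical sketch of uncoupled file-system components: the namespace
# (metadata tree) is separated from the storage backend, so new storage
# architectures plug in without changing the tools above them.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class PosixBackend(StorageBackend):
    def get(self, key):
        with open("/data/" + key, "rb") as f:
            return f.read()
    def put(self, key, data):
        with open("/data/" + key, "wb") as f:
            f.write(data)

class ObjectStoreBackend(StorageBackend):
    def __init__(self, client):         # any client exposing fetch/store (hypothetical)
        self.client = client
    def get(self, key): return self.client.fetch(key)
    def put(self, key, data): self.client.store(key, data)

class VirtualFileSystem:
    """Maps logical paths to (backend, key) pairs -- tools see one namespace."""
    def __init__(self):
        self.index: dict[str, tuple[StorageBackend, str]] = {}
    def register(self, path, backend, key):
        self.index[path] = (backend, key)
    def read(self, path) -> bytes:
        backend, key = self.index[path]
        return backend.get(key)
```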

  10. Kanerva's sparse distributed memory: An associative memory algorithm well-suited to the Connection Machine

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1988-01-01

    The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.

  11. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    DTIC Science & Technology

    2017-04-19

    Excerpt: ... research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance cameras as video sources. The architectural considerations of this system are presented. Issues to be reckoned with in implementing a scalable ...

  12. Distributed and parallel approach for handle and perform huge datasets

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Big Data refers to the dynamic, large, and disparate volumes of data coming from many different sources (tools, machines, sensors, mobile devices), often uncorrelated with each other. It requires new, innovative, and scalable technology to collect, host, and analytically process this vast amount of data. A proper architecture is needed for systems that process such huge data sets. In this paper, distributed and parallel system architectures are compared using the example of the MapReduce (MR) Hadoop platform and a parallel database platform (DBMS). The paper also analyzes the problem of extracting valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed, which could be used to solve the analyzed problem of storing and processing Big Data.
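    The MapReduce paradigm compared in this paper reduces to three phases: map emits key-value pairs, shuffle groups them by key, and reduce aggregates each group. The canonical word-count example below runs these phases in a single process; Hadoop's contribution is distributing exactly this pattern across nodes.

```python
# Canonical MapReduce word count in miniature: map emits (key, value)
# pairs, shuffle groups them by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(doc: str):
    for word in doc.lower().split():
        yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big systems", "parallel systems for big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))
# {'big': 3, 'data': 2, 'systems': 2, 'parallel': 1, 'for': 1}
```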

  13. Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)

    DTIC Science & Technology

    2007-04-01

    Excerpt: ... weight will be reduced by replacing heavy harness assemblies and Full Authority Digital Electronic Controls (FADECs) with distributed processing elements interconnected through a serial bus. This paper reviews ... Efficient data flow throughout the ... because intelligence is embedded in components while overall control is maintained in the FADEC. The need for Distributed Control Systems in ...

  14. Web-based Distributed Medical Information System for Chronic Viral Hepatitis

    NASA Astrophysics Data System (ADS)

    Yang, Ying; Qin, Tuan-fa; Jiang, Jian-ning; Lu, Hui; Ma, Zong-e.; Meng, Hong-chang

    2008-11-01

    To provide long-term dynamic monitoring of the chronically ill, especially patients with hepatitis B (HBV), we built a distributed Medical Information System for Chronic Viral Hepatitis (MISCHV). The Web-based system architecture and its functions are described, and its extensive application and important role are also presented.

  15. Using Ada to implement the operations management system in a community of experts

    NASA Technical Reports Server (NTRS)

    Frank, M. S.

    1986-01-01

    An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge-based systems technology. Part of the foundation of an Ada-based distributed expert system was accomplished in the form of a proof-of-concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS, for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge-based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.

  16. Experience in Construction and Operation of the Distributed Information Systems on the Basis of the Z39.50 Protocol

    NASA Astrophysics Data System (ADS)

    Zhizhimov, Oleg; Mazov, Nikolay; Skibin, Sergey

    Questions concerning the construction and operation of distributed information systems based on the ANSI/NISO Z39.50 Information Retrieval Protocol are discussed in the paper. The paper draws on the authors' experience in developing the ZooPARK server. The architecture of distributed information systems, the reliability of such systems, minimization of search time, and administration are examined. Problems in developing distributed information systems are also described.

  17. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented here can determine the performance of a system without a hardware prototype having to be built. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.
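    The data flow rule such a simulator enforces is that a node fires once tokens are present on all of its input edges. The miniature simulator below illustrates just that firing rule; it is an illustration added here, and ATAMM itself additionally models timing and resource constraints beyond this sketch.

```python
# Miniature dataflow-graph simulator: a node fires when every input edge
# holds a token; firing then deposits one token on each output edge.
graph = {            # node -> list of downstream nodes
    "src": ["A", "B"],
    "A": ["C"],
    "B": ["C"],
    "C": [],
}
indegree = {n: 0 for n in graph}
for outs in graph.values():
    for n in outs:
        indegree[n] += 1

tokens = {n: 0 for n in graph}   # tokens received so far on input edges
fired = []
ready = ["src"]                  # source node has no inputs, so it can fire

while ready:
    node = ready.pop(0)
    fired.append(node)
    for succ in graph[node]:
        tokens[succ] += 1
        if tokens[succ] == indegree[succ]:   # all inputs present: fireable
            ready.append(succ)

print("firing order:", fired)    # ['src', 'A', 'B', 'C']
```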

  18. Methods and tools for profiling and control of distributed systems

    NASA Astrophysics Data System (ADS)

    Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.

    2018-02-01

    This article is devoted to the profiling and control of distributed systems. Distributed systems have a complex architecture: applications are distributed among various computing nodes, and many network operations are performed. It is therefore important to develop methods and tools for profiling distributed systems. The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate this profiling method, a software application with a modular structure, similar to a SCADA system, was developed.
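    Queueing-network modeling of request processing can be illustrated with its simplest case, a single M/M/1 service node. The sketch below is an illustration added here (not the authors' tool): it simulates Poisson arrivals and exponential service, then checks the simulated mean response time against the analytic value 1/(mu - lambda).

```python
# M/M/1 simulation sketch: Poisson arrivals (rate lam), exponential service
# (rate mu), single FIFO server. Mean response time -> 1 / (mu - lam).
import random

random.seed(42)
lam, mu, n = 4.0, 5.0, 200_000

clock = 0.0             # arrival time of the current request
server_free_at = 0.0    # when the server next becomes idle
total_response = 0.0

for _ in range(n):
    clock += random.expovariate(lam)          # next arrival
    start = max(clock, server_free_at)        # wait if the server is busy
    server_free_at = start + random.expovariate(mu)
    total_response += server_free_at - clock  # waiting + service time

print("simulated mean response:", total_response / n)
print("analytic 1/(mu-lam):   ", 1 / (mu - lam))   # = 1.0
```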

  19. The Case for Distributed Engine Control in Turbo-Shaft Engine Systems

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Paluszewski, Paul J.; Storey, William; Smith, Bert J.

    2009-01-01

    The turbo-shaft engine is an important propulsion system used to power vehicles on land, at sea, and in the air. As the power plant for many high-performance helicopters, the characteristics of the engine and control are critical to proper vehicle operation and are the main determinant of overall vehicle performance. When applied to vertical flight, important distinctions exist in the turbo-shaft engine control system due to the high degree of dynamic coupling between the engine and airframe and the effect on vehicle handling characteristics. In this study, the impact of engine control system architecture is explored relative to engine performance, weight, reliability, safety, and overall cost. The impact of architecture on these metrics is investigated as the control system is modified from a legacy centralized structure to a more distributed configuration. A composite strawman system, typical of turbo-shaft engines in the 1000 to 2000 hp class, is described and used for comparison. The overall benefits of these changes to control system architecture are assessed. The availability of supporting technologies to achieve this evolution is also discussed.

  20. Hybrid power system intelligent operation and protection involving distributed architectures and pulsed loads

    NASA Astrophysics Data System (ADS)

    Mohamed, Ahmed

    Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required to meet the current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, within the operating AC power system. To this end, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI) and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process, aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied: fault analysis, protection schemes and coordination, and ideas on how to retrofit currently available AC protection concepts and devices for use in a DC network were presented. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the network's zones affect the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, together with most of the ideas, controls, and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed, Energy Systems Research Laboratory.

  1. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  2. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched, and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way, with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, and data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  3. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air, at lower operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for Maglev control and operation. Very high levels of safety and availability will be required of the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the Maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  4. Design Principles for E-Government Architectures

    NASA Astrophysics Data System (ADS)

    Sandoz, Alain

    The paper introduces a holistic approach for architecting systems which must sustain the entire e-government activity of a public authority. Four principles directly impact the architecture: Legality, Responsibility, Transparency, and Symmetry, leading to coherent representations of the architecture for the client, the designer and the builder. The approach enables the deployment of multipartite, distributed public services, including legal delegation of roles and outsourcing of non-mandatory tasks through public-private partnerships (PPP).

  5. A hierarchically distributed architecture for fault isolation expert systems on the space station

    NASA Technical Reports Server (NTRS)

    Miksell, Steve; Coffer, Sue

    1987-01-01

    The Space Station Axiomatic Fault Isolating Expert Systems (SAFTIES) system deals with the hierarchical distribution of control and knowledge among independent expert systems performing fault isolation and scheduling of Space Station subsystems. At the lower level, fault isolation is performed on individual subsystems. These fault isolation expert systems contain knowledge about the performance requirements of their particular subsystem and corrective procedures which may be invoked in response to certain performance errors. They can control the functions of equipment in their system and coordinate system task schedules. At a higher level, the Executive contains knowledge of all resources, task schedules for all systems, and the relative priority of all resources and tasks. The Executive can override any subsystem task schedule in order to resolve usage conflicts or errors that require resources from multiple subsystems. Interprocessor communication is implemented using the SAFTIES Communications Interface (SCI), an application-layer protocol which supports the SAFTIES distributed multi-level architecture.

  6. An adaptable product for material processing and life science missions

    NASA Technical Reports Server (NTRS)

    Wassick, Gregory; Dobbs, Michael

    1995-01-01

    The Experiment Control System II (ECS-II) is designed to make available to the microgravity research community the same tools and mode of automated experimentation that their ground-based counterparts have enjoyed for the last two decades. The design goal was accomplished by combining commercial automation tools familiar to the experimenter community with system control components that interface with the on-orbit platform in a distributed architecture. The architecture insulates the experimenters' automation tools from the platform-specific components necessary for managing a payload. By using commercial software and hardware components whenever possible, development costs were greatly reduced compared to traditional space development projects. Using commercial-off-the-shelf (COTS) components also improved the usability of the system by providing familiar user interfaces, providing a wealth of readily available documentation, and reducing the need for training on system-specific details. The modularity of the distributed architecture makes it very amenable to modification for different on-orbit experiments requiring robotics-based automation.

  7. SCOS 2: A distributed architecture for ground system control

    NASA Astrophysics Data System (ADS)

    Keyte, Karl P.

    The current generation of spacecraft ground control systems in use at the European Space Agency/European Space Operations Centre (ESA/ESOC) is based on SCOS 1. Such systems have become difficult to manage in both functional and financial terms. The next generation of spacecraft demands more flexibility in the use, configuration and distribution of control facilities, as well as functional capabilities matching those being planned for future missions. SCOS 2 is more than a successor to SCOS 1. Many of the shortcomings of the existing system were carefully analyzed by the user and technical communities, and a complete redesign was made. Different technologies were used in many areas, including the hardware platform, network architecture, user interfaces, and implementation techniques, methodologies and language. As far as possible, a flexible design approach was taken, using popular industry standards to provide vendor independence in both hardware and software. This paper describes many of the new approaches made in the architectural design of SCOS 2.

  8. Sequence stratigraphic controls on reservoir characterization and architecture: case study of the Messinian Abu Madi incised-valley fill, Egypt

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Mohamed I.; Slatt, Roger M.

    2013-12-01

    Understanding sequence stratigraphic architecture in incised valleys is a crucial step towards understanding the effect of relative sea level changes on reservoir characterization and architecture. This paper presents a sequence stratigraphic framework of the incised-valley strata within the late Messinian Abu Madi Formation based on seismic and borehole data. Analysis of sand-body distribution reveals that fluvial channel sandstones in the Abu Madi Formation in the Baltim Fields, offshore Nile Delta, Egypt, are not randomly distributed but are predictable in their spatial and stratigraphic position. Elucidation of the distribution of sandstones in the Abu Madi incised-valley fill within a sequence stratigraphic framework allows a better understanding of their characterization and architecture during burial. Strata of the Abu Madi Formation are interpreted to comprise two sequences, which are the most complex stratigraphically; their deposits comprise a complex incised-valley fill. The lower sequence (SQ1) consists of a thick incised valley-fill of a Lowstand Systems Tract (LST1) overlain by a Transgressive Systems Tract (TST1) and Highstand Systems Tract (HST1). The upper sequence (SQ2) contains channel fill and is interpreted as a LST2 with thin sandstone channel deposits. Above this, channel-fill sandstone and related strata with tidal influence delineate the base of TST2, which is overlain by a HST2. Gas reservoirs of the Abu Madi Formation (present-day depth ˜3552 m) in the Baltim Fields, Egypt, consist of fluvial lowstand systems tract (LST) sandstones deposited in an incised valley. LST sandstones have a wide range of porosity (15 to 28%) and permeability (1 to 5080 mD), reflecting both depositional facies and diagenetic controls. This work demonstrates the value of constraining and evaluating the impact of sequence stratigraphic distribution on reservoir characterization and architecture in incised-valley deposits, and thus has an important impact on understanding reservoir quality evolution in hydrocarbon exploration in such settings.

  9. Architectural development of an advanced EVA Electronic System

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph

    1992-01-01

    An advanced electronic system for future EVA missions (including zero gravity, the lunar surface, and the surface of Mars) is under research and development within the Advanced Life Support Division at NASA Ames Research Center. As a first step in the development, an optimum system architecture has been derived from an analysis of the projected requirements for these missions. The open, modular architecture centers around a distributed multiprocessing concept where the major subsystems independently process their own I/O functions and communicate over a common bus. Supervision and coordination of the subsystems is handled by an embedded real-time operating system kernel employing multitasking software techniques. A discussion of how the architecture most efficiently meets the electronic system functional requirements, maximizes flexibility for future development and mission applications, and enhances the reliability and serviceability of the system in these remote, hostile environments is included.

  10. EHR standards--A comparative study.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2006-01-01

    To ensure quality and efficiency of patient care, the care paradigm is moving from organization-centered through process-controlled towards personal care. This health system paradigm change leads to new paradigms for analyzing, designing, implementing and deploying supporting health information systems, including EHR systems as the core application in a distributed eHealth environment. The paper defines the architectural paradigm for future-proof EHR systems. It compares advanced EHR architectures by referencing them to the Generic Component Model. The paper also introduces the evolving paradigm of autonomous computing for self-organizing health information systems.

  11. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impact of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration of all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating distributed engine control system (DCS) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and the hardware simulator provide the capability to simulate virtual subcomponents, as well as to swap in actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include the communication data network, smart sensor and actuator nodes, the centralized control system (FADEC, full-authority digital engine control), and the aircraft engine itself. The DECsim tool allows simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed, including control system performance, reliability, weight, and bandwidth utilization.
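
    A toy illustration of the kind of question such a simulator answers, namely how network latency in a distributed control loop affects closed-loop behavior. The plant, controller gains and delays below are made-up values, not C-MAPSS40K or DECsim details:

    ```python
    # Toy study: a PI loop closed over a network link that delivers sensor
    # samples late. All dynamics and gains are invented for illustration.
    from collections import deque

    def simulate(delay_steps, n=400, dt=0.01):
        """Return the peak response when samples arrive delay_steps late."""
        x, integ = 0.0, 0.0
        setpoint, kp, ki = 1.0, 4.0, 2.0
        link = deque([0.0] * delay_steps)   # in-flight sensor samples
        peak = 0.0
        for _ in range(n):
            meas = link.popleft() if delay_steps else x
            err = setpoint - meas
            integ += err * dt
            u = kp * err + ki * integ
            x += dt * (-x + u)              # first-order plant: dx/dt = -x + u
            link.append(x)                  # sample enters the network
            peak = max(peak, x)
        return peak

    for d in (0, 10, 40):                   # 0 s, 0.1 s and 0.4 s of latency
        print(f"delay={d:>2} steps  peak={simulate(d):.3f}")
    ```

    Sweeping the delay (or bandwidth, or node placement) and re-running the loop is the simulation-based prototyping workflow the abstract describes, just at DECsim's much higher fidelity.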

  12. Payload accommodations. Avionics payload support architecture

    NASA Technical Reports Server (NTRS)

    Creasy, Susan L.; Levy, C. D.

    1990-01-01

    Concepts for vehicle and payload avionics architectures for future NASA programs, including the Assured Shuttle Access program, Space Station Freedom (SSF), Shuttle-C, Advanced Manned Launch System (AMLS), and the Lunar/Mars programs, are discussed. Emphasis is on the potential to increase the payload services which will be required in the future, while decreasing operational cost and complexity by utilizing state-of-the-art avionics systems and a distributed processing architecture. Also addressed are the trade studies required to determine the optimal degree of vehicle (NASA) to payload (customer) separation and the ramifications of these decisions.

  13. Overview of the LINCS architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J.G.; Watson, R.W.

    1982-01-13

    Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer-network-based resource sharing environment. The increasing use of low cost and high performance micro, mini and midi computers and commercially available local networking systems will accelerate this trend. Further, even the large scale computer systems, on which much of the LLNL scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitate development of cost effective, reliable, and human engineered applications. We believe the answer lies in developing a layered, communication-oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication-oriented because that is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While having characteristics in common with other architectures, it differs in several respects.

  14. Distributed Learning Metadata Standards

    ERIC Educational Resources Information Center

    McClelland, Marilyn

    2004-01-01

    Significant economies can be achieved in distributed learning systems architected with a focus on interoperability and reuse. The key building blocks of an efficient distributed learning architecture are the use of standards and XML technologies. The goal of plug and play capability among various components of a distributed learning system…

  15. Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid (AVC-TeDP)

    NASA Technical Reports Server (NTRS)

    Gemin, Paul; Kupiszewski, Tom; Radun, Arthur; Pan, Yan; Lai, Rixin; Zhang, Di; Wang, Ruxi; Wu, Xinhui; Jiang, Yan; Galioto, Steve; hide

    2015-01-01

    The purpose of this effort was to advance the selection, characterization, and modeling of a propulsion electric grid for a Turboelectric Distributed Propulsion (TeDP) system for transport aircraft. The TeDP aircraft would constitute a miniature electric grid with 50 MW or more of total power, two or more generators, redundant transmission lines, and multiple electric motors driving propulsion fans. The study proposed power system architectures, investigated electromechanical and solid-state circuit breakers, estimated the impact of the system voltage on system mass, and recommended a DC bus voltage range. The study assumed an all-cryogenic power system. Detailed assumptions within the study include hybrid circuit breakers, a two-cryogen system, and supercritical cryogens. A dynamic model was developed to investigate control and parameter selection.

  16. Distributed environmental control

    NASA Technical Reports Server (NTRS)

    Cleveland, Gary A.

    1992-01-01

    We present an architecture of distributed, independent control agents designed to work with the Computer Aided System Engineering and Analysis (CASE/A) simulation tool. CASE/A simulates behavior of Environmental Control and Life Support Systems (ECLSS). We describe a lattice of agents capable of distributed sensing and overcoming certain sensor and effector failures. We address how the architecture can achieve the coordinating functions of a hierarchical command structure while maintaining the robustness and flexibility of independent agents. These agents work between the time steps of the CASE/A simulation tool to arrive at command decisions based on the state variables maintained by CASE/A. Control is evaluated according to both effectiveness (e.g., how well temperature was maintained) and resource utilization (the amount of power and materials used).

  17. Research of Ancient Architectures in Jin-Fen Area Based on GIS&BIM Technology

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zheng, Qiuhong; Gao, Huiying; Sun, Hai

    2017-05-01

    Shanxi Province holds the largest share of ancient architecture in China, with about 18,418 well-preserved ancient buildings, of which 9,053 are of wood-frame construction. The value of applying BIM (Building Information Modeling) and GIS (Geographic Information System) is gradually being explored and demonstrated in the corresponding fields of managing the spatial distribution information of ancient architecture, routine maintenance and special conservation & restoration, and the evaluation and simulation of related disasters such as earthquakes. The research objects are the ancient architectures in the Jin-Fen area, first investigated by Sicheng LIANG and recorded in his "Chinese ancient architectures survey report". They include those in Sicheng LIANG's investigation, with further adjustments made through the authors' on-site investigation and literature search and collection. During this research, a Geodatabase of the spatial distribution of the research objects was established using GIS. A BIM components library for ancient buildings was formed by combining on-site investigation data with precedent classic works, such as "Yingzao Fashi", a treatise on architectural methods of the Song Dynasty, the "Yongle Encyclopedia", and "Gongcheng Zuofa Zeli", case collections of engineering practice by the Ministry of Construction of the Qing Dynasty. A building of Guangsheng temple in Hongtong county is selected as an example to elaborate the BIM model construction process based on the BIM components library for ancient buildings. Based on the foregoing results of spatial distribution data, feature attribute data, 3D graphic information and parametric building information models, an information management system for ancient architectures in the Jin-Fen area, utilizing GIS & BIM technology, can be constructed to support further research on seismic disaster analysis and seismic performance simulation.

  18. Redundancy management for efficient fault recovery in NASA's distributed computing system

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Pandya, Mihir; Yau, Kitty

    1991-01-01

    The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding of computational graphs of tasks in the system architecture and reconfiguration of these tasks after a failure has occurred. The computational structure represented by a path and the complete binary tree was considered and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
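
    As a concrete illustration of embedding a computational path structure into a hypercube architecture, here is the standard construction via the binary reflected Gray code (shown for orientation only; it is not claimed to be the paper's algorithm):

    ```python
    # Embed a 2^n-node path into an n-dimensional hypercube with dilation 1:
    # consecutive path nodes map to hypercube vertices that differ in one bit.
    def gray(i: int) -> int:
        """Binary reflected Gray code of i."""
        return i ^ (i >> 1)

    def embed_path(n_dims: int):
        """Map path node i to hypercube vertex gray(i)."""
        return [gray(i) for i in range(2 ** n_dims)]

    verts = embed_path(3)
    for a, b in zip(verts, verts[1:]):
        # adjacent path nodes land on neighboring cube vertices
        assert bin(a ^ b).count("1") == 1
    print([format(v, "03b") for v in verts])
    ```

    Reconfiguration after a node failure then amounts to re-running such an embedding on the surviving subgraph, which is the harder problem the paper's algorithms address.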

  19. A Comparison Between Publish-and-Subscribe and Client-Server Models in Distributed Control System Networks

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)

    1998-01-01

    The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery; however, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
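
    A minimal in-process sketch contrasting the two models under evaluation; NDDS and BridgeVIEW expose far richer interfaces, so the classes below are conceptual stand-ins only:

    ```python
    # Publish-and-subscribe: data is pushed to every interested node directly.
    # Client-server: each client must ask a central server for the latest value.
    from collections import defaultdict

    class Broker:                              # publish-and-subscribe model
        def __init__(self):
            self.subs = defaultdict(list)
        def subscribe(self, topic, callback):
            self.subs[topic].append(callback)
        def publish(self, topic, value):
            for cb in self.subs[topic]:        # pushed as soon as it is produced
                cb(value)

    class Server:                              # client-server model
        def __init__(self):
            self.latest = {}
        def update(self, tag, value):          # device writes to central server
            self.latest[tag] = value
        def request(self, tag):                # clients poll, adding latency
            return self.latest.get(tag)

    bus = Broker()
    bus.subscribe("incinerator/temp", lambda v: print("alarm check:", v))
    bus.publish("incinerator/temp", 431.5)
    ```

    The push path avoids the server round trip, which is why publish-and-subscribe suits the time-critical data in such a control system.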

  20. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger

    2007-12-01

    Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present the implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypical application of the system at the museum ZKM (center for art and media) in Karlsruhe.

  1. An IP-Based Software System for Real-time, Closed Loop, Multi-Spacecraft Mission Simulations

    NASA Technical Reports Server (NTRS)

    Cary, Everett; Davis, George; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis

    2003-01-01

    This viewgraph presentation provides information on the architecture of a computerized testbed for simulating Distributed Space Systems (DSS) for controlling spacecraft flying in formation. The presentation also discusses and diagrams the Distributed Synthesis Environment (DSE) for simulating and planning DSS missions.

  2. System architecture and operational analysis of medium displacement unmanned surface vehicle sea hunter as a surface warfare component of distributed lethality

    DTIC Science & Technology

    2017-06-01

    students in a war-gaming class, and working in tandem with an NPS distance... surface mode ability provides a threat suppression method against small craft attacks and boarding attempts. b. Vulnerability: As a sea-going surface... Design Architecture: With a proposed CONOPS established, the physical architecture can proceed to a more detailed design. For the purpose of

  3. Specifying structural constraints of architectural patterns in the ARCHERY language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, Alejandro; HASLab INESC TEC and Universidade do Minho, Campus de Gualtar, 4710-057 Braga; Barbosa, Luis S.

    ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.
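
    For orientation, two generic formulas of the kind such a hybrid µ-calculus can express (illustrative textbook notation only; ARCHERY's concrete syntax is defined in the paper):

    ```latex
    % Safety: property P holds in every configuration reachable by
    % reconfiguration steps (greatest fixed point).
    \nu X.\, \big( P \wedge [\mathit{step}]\, X \big)

    % Reachability: some finite chain of link-edges reaches the component
    % named by the nominal c (least fixed point).
    \mu Y.\, \big( c \vee \langle \mathit{link} \rangle\, Y \big)
    ```

    The ν-formula is the shape of a safety constraint over reconfigurations, while the µ-formula shows how nominals let constraints speak about specific components and the paths and cycles through them.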

  4. Proceedings of the Workshop on Large, Distributed, Parallel Architecture, Real-Time Systems Held in Alexandria, Virginia on 15-19 March 1993

    DTIC Science & Technology

    1993-07-01

    distributed system. Second, to support the development of scalable end-use applications that implement the mission-critical control policies of the... implementation. These and other cogent reasons suggest two important rules for designing large, distributed, real-time systems: i) separate policies required... system design rules. • The separation of system coordination and management policies and mechanisms allows for the "objectification" of the underlying

  5. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel, and most high level services are implemented at the user level. From the measured performance of using the kernel mechanisms, we are able to demonstrate that efficient implementations are feasible for the object model on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault-tolerance.
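
    A conceptual sketch of the single-level-store idea, with Python's shelve module standing in for a Clouds persistent object and an ordinary thread standing in for a Clouds lightweight thread (an illustrative analogy only, not the Clouds interface):

    ```python
    # Single-level-store flavor: the object persists by default, and the
    # application logic below contains no file or message APIs of its own.
    import shelve
    import threading

    class Counter:
        def __init__(self):
            self.value = 0

    with shelve.open("store") as store:         # persistent "address space"
        obj = store.get("counter", Counter())   # object survives across runs

        def invoke(o, times):                   # a thread enters the object's
            for _ in range(times):              # code and mutates its state
                o.value += 1

        t = threading.Thread(target=invoke, args=(obj, 1000))
        t.start(); t.join()
        store["counter"] = obj                  # state persists on close
        print("counter is now", obj.value)
    ```

    In Clouds the write-back step is unnecessary: persistence is the default, which is precisely what "dispensing with file systems" means.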

  6. A component-based, distributed object services architecture for a clinical workstation.

    PubMed

    Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.

  7. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  8. Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing a Hybrid-Electric Integrated Systems Testbed (HEIST) as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid-electric and distributed-electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid-electric and distributed-electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high-performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.

  9. Hadoop-based implementation of processing medical diagnostic records for visual patient system

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo

    2018-03-01

    We introduced the Visual Patient (VP) concept and method for visually representing and indexing patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017); VP enables a doctor to review a large volume of a patient's IDR within a limited appointment time slot. In this presentation, we present a new approach to designing the data processing architecture of the VP system (VPS) to acquire, process and store various kinds of IDR and build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of the Hadoop batch processing architecture and the Storm stream processing architecture. MIPS implements parallel processing of various kinds of clinical data with high efficiency, drawing on disparate hospital information systems such as PACS, RIS, LIS and HIS.
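
    An illustrative sketch of the batch/stream split in such a combined architecture; queue names and the record shape are invented, and a real MIPS deployment would express these paths as Hadoop jobs and Storm topologies rather than Python queues:

    ```python
    # Route records either to a bulk (batch) path or a low-latency (stream)
    # path, mirroring the Hadoop/Storm split at toy scale.
    import queue
    import threading

    batch_q, stream_q = queue.Queue(), queue.Queue()

    def route(record):
        """Historical backfill goes to batch; live events go to the stream."""
        (batch_q if record.get("historical") else stream_q).put(record)

    def stream_worker():                  # Storm-like low-latency consumer
        while True:
            rec = stream_q.get()
            if rec is None:               # sentinel ends the worker
                break
            print("indexed for Visual Patient:", rec["study_id"])

    t = threading.Thread(target=stream_worker)
    t.start()
    route({"study_id": "CT-001", "historical": False})
    route({"study_id": "MR-002", "historical": True})
    stream_q.put(None); t.join()
    print("queued for nightly batch:", batch_q.qsize())
    ```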

  10. Modelling Root Systems Using Oriented Density Distributions

    NASA Astrophysics Data System (ADS)

    Dupuy, Lionel X.

    2011-09-01

    Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, which limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of an oriented density distribution function. The growth of the root system is built as a hierarchical system of partial differential equations (PDEs) that incorporate single-root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDE. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of the barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow application of root growth models at field scale.
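
    Schematically, such an oriented-density transport equation can take the following form (an illustrative rendering consistent with the description above, not the paper's exact system, which couples several such PDEs with its own coefficient definitions):

    ```latex
    % rho(x, u, t): density of root apices at position x growing in direction u
    \frac{\partial \rho}{\partial t}
      + \nabla_{x} \cdot \left( e\, \mathbf{u}\, \rho \right)   % elongation at rate e
      + \nabla_{u} \cdot \left( g\, \rho \right)                % gravitropic reorientation g
      = b\, \rho                                                % branching at rate b
    ```

    The single-root parameters named in the abstract (elongation rate e, gravitropism g, branching rate b) appear directly as PDE coefficients, which is what makes the continuous formulation cheap to solve compared with tracking every root segment.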

  11. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe-based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time-critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  12. Space station power management and distribution

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1985-01-01

    The power system architecture is presented by a series of schematics which illustrate the power management and distribution (PMAD) system at the component level, including converters, controllers, switchgear, rotary power transfer devices, power and data cables, remote power controllers, and load converters. Power distribution options, reference power management, and control strategy are also outlined. A summary of advanced development status and plans and an overview of system test plans are presented.

  13. Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan

    2015-01-01

    Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for SmallSat and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512 MB of RAM, a flash memory card, and a wealth of I/O options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.

  14. The deployment of routing protocols in distributed control plane of SDN.

    PubMed

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software defined network (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we study the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from the routing control platform (RCP). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to the traditional routing strategies.
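
    A minimal sketch of the two-level split described above (illustrative only, not Kandoo's API): local controllers absorb frequent local events, and only events that need a network-wide view reach the root controller.

    ```python
    # Kandoo-style hierarchy at toy scale: event type names are invented.
    class RootController:
        """Holds the network-wide view; handles rare, global events."""
        def handle(self, event):
            print("root (global view) handles:", event["type"])

    class LocalController:
        """Sits near switches; absorbs frequent, purely local events."""
        def __init__(self, root, local_types=("flow_arrival", "port_stats")):
            self.root, self.local_types = root, set(local_types)
        def handle(self, event):
            if event["type"] in self.local_types:
                print("local controller handles:", event["type"])
            else:
                self.root.handle(event)   # escalate, e.g. BGP/OSPF changes

    root = RootController()
    sw = LocalController(root)
    sw.handle({"type": "flow_arrival"})               # stays local
    sw.handle({"type": "inter_domain_route_update"})  # escalated to root
    ```

    Deciding which routing-protocol events (BGP versus OSPF) belong at which level is exactly the deployment question the paper analyzes.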

  15. SANDS: a service-oriented architecture for clinical decision support in a National Health Information Network.

    PubMed

    Wright, Adam; Sittig, Dean F

    2008-12-01

    In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support. The most salient of these are:
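
    A hedged sketch of what one decision-support service behind such an architecture might look like; the interface and the two-drug rule below are invented placeholders, not SANDS's actual interfaces or knowledge base:

    ```python
    # A clinical information system calls any registered service through one
    # uniform interface; here a drug-interaction checker plays that role.
    from typing import List, Protocol

    class DecisionSupportService(Protocol):
        def evaluate(self, patient_data: dict) -> List[str]: ...

    class DrugInteractionChecker:
        # single invented rule standing in for a real knowledge base
        KNOWN = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

        def evaluate(self, patient_data: dict) -> List[str]:
            meds = [m.lower() for m in patient_data.get("medications", [])]
            alerts = []
            for i, a in enumerate(meds):
                for b in meds[i + 1:]:
                    hit = self.KNOWN.get(frozenset({a, b}))
                    if hit:
                        alerts.append(f"{a} + {b}: {hit}")
            return alerts

    ehr_payload = {"medications": ["Warfarin", "Aspirin"]}
    for svc in (DrugInteractionChecker(),):   # the EHR iterates its services
        print(svc.evaluate(ehr_payload))
    ```

    The design point is that the EHR needs to know only the interface; drug interaction checking, syndromic surveillance and the other use cases plug in behind it interchangeably.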

  16. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS-based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. The first is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset; the second is minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through a redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing, highly optimized metadata database server, an approach applicable to most scientific big-data-intensive compute systems. Thus, LVFS ties the existing storage system to the existing metadata infrastructure, which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing Filesystem in Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product, without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata files.
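
    A toy rendering of the core idea, resolving a "virtual" path through a metadata database instead of a directory tree; the schema and paths are invented, and the real system sits behind FUSE and scales to hundreds of millions of rows:

    ```python
    # One indexed metadata lookup replaces a directory-tree walk, so the same
    # table can present layouts by date, product, region, etc. on the fly.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE granules
                  (product TEXT, date TEXT, physical_path TEXT)""")
    db.executemany("INSERT INTO granules VALUES (?,?,?)", [
        ("MOD021KM", "2013-07-01", "/archive/disk17/a1b2c3.hdf"),
        ("MOD021KM", "2013-07-02", "/archive/disk04/d4e5f6.hdf"),
    ])

    def resolve(virtual_path: str) -> str:
        """Map /<product>/<date> to a physical file via one indexed query."""
        _, product, date = virtual_path.split("/")
        row = db.execute("SELECT physical_path FROM granules "
                         "WHERE product=? AND date=?", (product, date)).fetchone()
        if row is None:
            raise FileNotFoundError(virtual_path)
        return row[0]

    print(resolve("/MOD021KM/2013-07-02"))   # a FUSE layer would call this
    ```

    Because the "directory" is just a query, re-organizing billions of files by a different key costs nothing beyond a new index, with no symlinks or duplication.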

  17. Distributed Planning in a Mixed-Initiative Environment: Collaborative Technologies for Network Centric Operations

    DTIC Science & Technology

    2008-10-01

    Agents in the DEEP architecture extend and use the Java Agent Development (JADE) framework. DEEP requires a distributed multi-agent system and a... framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java, and supports these requirements

  18. The Development of Design Guides for the Implementation of Multiprocessing Element Systems.

    DTIC Science & Technology

    1985-09-01

    Conclusions … 4. Implementation of CHILL Signals Communication Primitives on a Distributed System … 4.1 Architecture of a Distributed System … 4.2 Algorithm for the SEND Signal Operation … 4.3 Algorithm for the... elements operating concurrently. Such multi-processing-element systems are clearly going to be complex and it is important that the designers of such

  19. Aerospace Software Engineering for Advanced Systems Architectures (L’Ingenierie des Logiciels Pour les Architectures des Systemes Aerospatiaux)

    DTIC Science & Technology

    1993-11-01

    Eliezer N. Solomon; Steve Sedrel. Westinghouse Electronic Systems Group, P.O. Box 746, MS 432, Baltimore, Maryland 21203-0746, USA. SUMMARY: The United States... subset of the Joint Integrated Avionics Working Group (JIAWG)... NewAgentCollection, which has four parameters: Acceptor, of type Task._D... Published November 1993.

  20. Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network

    DOE PAGES

    Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen; ...

    2018-01-26

    With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since the DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by normally closed AC soft open points (ACSOP) and DC soft open points (DCSOP), respectively. The proposed AC/DC hybrid distribution systems contain renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.
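
    A single-period toy analogue of the dispatch problem (illustrative numbers only; the paper's DOPF formulation is multi-period and network-constrained):

    ```python
    # Minimize grid-import cost plus a small battery-use cost subject to a
    # bus power balance; serving load from free PV first also minimizes
    # renewable curtailment. All prices and limits are invented.
    from scipy.optimize import linprog

    load_kw, pv_avail_kw = 80.0, 50.0
    # decision variables: x = [grid_import, pv_used, battery_discharge] in kW
    c = [0.12, 0.0, 0.02]                      # $/kWh costs
    A_eq = [[1.0, 1.0, 1.0]]                   # grid + pv + battery == load
    b_eq = [load_kw]
    bounds = [(0, None), (0, pv_avail_kw), (0, 20.0)]  # PV and battery limits

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    grid, pv, batt = res.x
    print(f"grid={grid:.1f} kW  pv={pv:.1f} kW  battery={batt:.1f} kW  "
          f"curtailed={pv_avail_kw - pv:.1f} kW")
    ```

    The full DOPF repeats this optimization over a horizon, coupled across periods by storage-energy and network-flow constraints.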

  1. Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen

    With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since the DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by normally closed AC soft open points (ACSOP) and DC soft open points (DCSOP), respectively. The proposed AC/DC hybrid distribution systems contain renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.

  2. Rio: a dynamic self-healing services architecture using Jini networking technology

    NASA Astrophysics Data System (ADS)

    Clarke, James B.

    2002-06-01

    Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. In order for this to happen, the application is usually taken down, installed and restarted with all systems in sync and aware of each other. Static environments such as these are extremely difficult to set up, deploy and administer.

  3. SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases

    NASA Astrophysics Data System (ADS)

    Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.

    One of the main attacks on ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damage both in the commercial aspect and in the users' confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of their physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
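
    To illustrate the two complementary detection styles the agent combines, here is a deliberately simple sketch; the patterns and the anomaly features are invented placeholders, and SiC's actual agent uses case-based reasoning rather than these fixed rules:

    ```python
    # Pattern-based detection (known attack signatures) combined with
    # anomaly-based detection (queries that deviate structurally from normal).
    import re

    PATTERNS = [r"(?i)\bunion\b.+\bselect\b",   # UNION-based injection
                r"(?i)\bor\b\s+1\s*=\s*1",      # tautology injection
                r"--\s*$"]                       # trailing comment

    def pattern_hit(query: str) -> bool:
        return any(re.search(p, query) for p in PATTERNS)

    def anomalous(query: str, mean_len=60.0, max_quotes=2) -> bool:
        # crude structural features standing in for the agent's CBR reasoning
        return len(query) > 3 * mean_len or query.count("'") > max_quotes

    def is_attack(query: str) -> bool:
        return pattern_hit(query) or anomalous(query)

    print(is_attack("SELECT name FROM patients WHERE id = 7"))      # False
    print(is_attack("SELECT * FROM users WHERE id='' OR 1=1 --"))   # True
    ```

    The pattern path catches known signatures cheaply, while the anomaly path covers novel variants, which is why the architecture applies both.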

  4. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

    Developing and delivering infrastructure capable of supporting long-term manned operations on the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions, largely dependent on avionics for the control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special-purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements, recurring as architectural patterns in a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface systems elements.

  5. Distributed Network and Multiprocessing Minicomputer State-of-the-Art Capabilities.

    ERIC Educational Resources Information Center

    Theis, Douglas J.

    An examination of the capabilities of minicomputers and midicomputers now on the market reveals two basic items which users should evaluate when selecting computers for their own applications: distributed networking systems and multiprocessing architectures. Variables which should be considered in evaluating a distributed networking system…

  6. Distributed dynamic simulations of networked control and building performance applications.

    PubMed

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible; this approach is generally referred to as Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.
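
    The run-time coupling the paper describes can be pictured with a small sketch: below, one simulator exchanges values with a peer over TCP once per simulated time step. The port, message layout, toy plant model, and class name are assumptions for illustration, not details of the paper's environment.

        import java.io.*;
        import java.net.*;

        // Illustrative endpoint for run-time coupling two simulators: each
        // step, publish our state and read the peer's. The peer is assumed
        // to run the mirror-image loop.
        public class CouplingEndpoint {
            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(5500);
                     Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {

                    double roomTemp = 20.0; // state owned by this simulator
                    for (int step = 0; step < 100; step++) {
                        out.println(roomTemp);                 // publish our state
                        double heaterPower =                   // read peer's output
                            Double.parseDouble(in.readLine());
                        roomTemp += 0.01 * heaterPower - 0.05; // toy plant update
                    }
                }
            }
        }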

  7. Distributed dynamic simulations of networked control and building performance applications

    PubMed Central

    Yahiaoui, Azzedine

    2017-01-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible; this approach is generally referred to as Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper. PMID:29568135

  8. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    NASA Astrophysics Data System (ADS)

    Tokareva, Victoria

    2018-04-01

    New generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
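
    To make the MapReduce shape of such reconstruction work concrete, here is a minimal plain-Java sketch (not the actual Hadoop job): the "map" step reconstructs partial images from independent slices of raw data and the "reduce" step merges them pixel-wise. All names and the per-slice arithmetic are placeholders of ours.

        import java.util.Arrays;
        import java.util.List;

        // MapReduce-style decomposition: independent per-slice work, then
        // an associative merge, which is what makes Hadoop applicable.
        public class ImageReconstructSketch {

            static double[] mapSlice(double[] rawSlice) {
                // Placeholder per-slice reconstruction (e.g., one back-projection).
                double[] partial = new double[rawSlice.length];
                for (int i = 0; i < rawSlice.length; i++) partial[i] = rawSlice[i] * 0.5;
                return partial;
            }

            static double[] reduce(List<double[]> partials) {
                double[] image = new double[partials.get(0).length];
                for (double[] p : partials)
                    for (int i = 0; i < image.length; i++) image[i] += p[i];
                return image;
            }

            public static void main(String[] args) {
                List<double[]> raw = List.of(new double[]{1, 2}, new double[]{3, 4});
                double[] image =
                    reduce(raw.stream().map(ImageReconstructSketch::mapSlice).toList());
                System.out.println(Arrays.toString(image)); // [2.0, 3.0]
            }
        }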

  9. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  10. Architectures for Distributed and Complex M-Learning Systems: Applying Intelligent Technologies

    ERIC Educational Resources Information Center

    Caballe, Santi, Ed.; Xhafa, Fatos, Ed.; Daradoumis, Thanasis, Ed.; Juan, Angel A., Ed.

    2009-01-01

    Over the last decade, the needs of educational organizations have been changing in accordance with increasingly complex pedagogical models and with the technological evolution of e-learning environments with very dynamic teaching and learning requirements. This book explores state-of-the-art software architectures and platforms used to support…

  11. Hybrid network defense model based on fuzzy evaluation.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture.
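
    As a toy illustration of fuzzy evaluation over the three components named above (hybrid firewall, intrusion detection, honeynet), the sketch below maps raw indicators to [0,1] memberships and combines them into a risk grade. Every name, weight and threshold is an assumption of ours, not the paper's model.

        // Fuzzy-evaluation sketch: ramp memberships plus a simple
        // aggregation rule in which any honeynet contact dominates.
        public class FuzzyRisk {

            // Triangular ramp: 0 below lo, 1 above hi, linear in between.
            static double ramp(double x, double lo, double hi) {
                if (x <= lo) return 0.0;
                if (x >= hi) return 1.0;
                return (x - lo) / (hi - lo);
            }

            public static String assess(double firewallDropsPerSec,
                                        double idsAlertsPerMin,
                                        boolean honeynetTouched) {
                double f = ramp(firewallDropsPerSec, 10, 200);
                double i = ramp(idsAlertsPerMin, 1, 30);
                double h = honeynetTouched ? 1.0 : 0.0;
                double risk = Math.max(h, 0.5 * f + 0.5 * i);
                return risk > 0.7 ? "HIGH" : risk > 0.3 ? "MEDIUM" : "LOW";
            }

            public static void main(String[] args) {
                System.out.println(assess(100.0, 15.0, false)); // MEDIUM
            }
        }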

  12. AIAA/NASA International Symposium on Space Information Systems, 2nd, Pasadena, CA, Sept. 17-19, 1990, Proceedings. Vols. 1 & 2

    NASA Technical Reports Server (NTRS)

    Tavenner, Leslie A. (Editor)

    1991-01-01

    These proceedings overview major space information system projects and lessons learned from current missions. Topics include science information system requirements for the 1990s, an information systems design approach for major programs, technology needs and projections, standards for space data information systems, artificial intelligence technology and applications, international interoperability, spacecraft data systems and architectures, and advanced communications. Other topics include software engineering technology and applications, multimission multidiscipline information system architectures, distributed planning and scheduling systems and operations, and computer and information systems architectures. Papers presented include prospects for scientific data analysis systems for solar-terrestrial physics in the 1990s, the Columbus data management system, data storage technologies for the future, the German aerospace research establishment, and launching artificial intelligence in NASA ground systems.

  13. Adaptive Fault-Resistant Systems

    DTIC Science & Technology

    1994-10-01

    An Architectural Overview of the Alpha Real-Time Distributed Kernel. In Proceedings of the USENIX Workshop on Microkernels and Other Kernel ... system and the controller are monolithic. We have noted earlier some of the problems of distributed systems, for example, the need to bound the ... are monolithic. In practice, designers employ a layered structuring for their systems in order to manage complexity, and we expect that practical

  14. Stability Analysis of Distributed Engine Control Systems Under Communication Packet Drop (Postprint)

    DTIC Science & Technology

    2008-07-01

    Currently, Full Authority Digital Engine Control (FADEC) based on a centralized architecture framework is being widely used for gas turbine engine control. However, current FADEC is not able to meet the ... system (DEC). FADEC based on Distributed Control Systems (DCS) offers modularity, improved control systems prognostics and fault tolerance along with

  15. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly, with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the `middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.

  16. Distributed asynchronous microprocessor architectures in fault tolerant integrated flight systems

    NASA Technical Reports Server (NTRS)

    Dunn, W. R.

    1983-01-01

    The paper discusses the implementation of fault-tolerant digital flight control and navigation systems for rotorcraft application. It is shown that in implementing fault tolerance at the systems level using advanced LSI/VLSI technology, aircraft physical layout and flight systems requirements tend to define a system architecture of distributed, asynchronous microprocessors in which fault tolerance can be achieved locally through hardware redundancy and/or globally through application of analytical redundancy. The effects of asynchronism on the execution of dynamic flight software are discussed. It is shown that if the asynchronous microprocessors have knowledge of time, these errors can be significantly reduced through appropriate modifications of the flight software. Finally, the paper extends previous work to show that through the combined use of time referencing and stable flight algorithms, individual microprocessors can be configured to autonomously tolerate intermittent faults.

  17. A Software Architecture for Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT), such as CORBA and Product Data Managers, with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
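
    The idea of weaving an orthogonal concern into every call can be sketched with a JDK dynamic proxy, as below. This is our own minimal illustration of the flavor of the approach, with hypothetical names; it is not the Object Infrastructure Framework's actual mechanism.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        // Wraps any service interface and inserts a logging concern before
        // delegating to the real implementation.
        public class ConcernInjector {

            interface DesignArtifactService {
                String fetch(String artifactId);
            }

            @SuppressWarnings("unchecked")
            static <T> T withLogging(Class<T> iface, T target) {
                InvocationHandler h = (proxy, method, args) -> {
                    System.out.println("call: " + method.getName()); // inserted concern
                    return method.invoke(target, args);
                };
                return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(), new Class<?>[]{iface}, h);
            }

            public static void main(String[] args) {
                DesignArtifactService real = id -> "artifact:" + id;
                DesignArtifactService wrapped =
                    withLogging(DesignArtifactService.class, real);
                System.out.println(wrapped.fetch("wing-42")); // logs, then result
            }
        }

    The same wrapping point could carry access-control checks, provenance annotation updates, or datatype transformations, which is what makes the insertion point valuable for policy enforcement.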

  18. A context management system for a cost-efficient smart home platform

    NASA Astrophysics Data System (ADS)

    Schneider, J.; Klein, A.; Mannweiler, C.; Schotten, H. D.

    2012-09-01

    This paper presents an overview of state-of-the-art architectures for integrating wireless sensor and actuator networks into the Future Internet, and addresses the advantages and disadvantages of the different architectures. With respect to these criteria, we develop a new architecture overcoming their weaknesses. Our system, called the Smart Home Context Management System, will be used for intelligent home utilities, appliances, and electronics, and includes physical, logical as well as network context sources within one concept. It considers important aspects and requirements of modern context management systems for smart X applications: plug-and-play as well as plug-and-trust capabilities, scalability, extensibility, security, and adaptability. As such, it is able to control roller blinds and heating systems, as well as learn, for example, the user's taste w.r.t. home entertainment (music, videos, etc.). Moreover, Smart Grid applications and Ambient Assisted Living (AAL) functions are applicable. With respect to AAL, we included an Emergency Handling function. It assures that emergency calls (police, ambulance or fire department) are processed appropriately. Our concept is based on a centralized Context Broker architecture, enhanced by a distributed Context Broker system. The goal of this concept is to develop a simple, low-priced, multi-functional, and safe architecture affordable for everybody. Individual components of the architecture are well tested. Implementation and testing of the architecture as a whole is in progress.

  19. SANDS: A Service-Oriented Architecture for Clinical Decision Support in a National Health Information Network

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2008-01-01

    In this paper we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. PMID:18434256

  20. Designing and Implementing a Distributed System Architecture for the Mars Rover Mission Planning Software (Maestro)

    NASA Technical Reports Server (NTRS)

    Goldgof, Gregory M.

    2005-01-01

    Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
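
    A sequence-numbering scheme of the kind the abstract describes can be sketched in a few lines of Java: out-of-order deliveries are buffered until every earlier change has been applied. The class and method names are illustrative; the actual Maestro implementation is not shown in the abstract.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        // Applies plan changes strictly in publish order, regardless of the
        // order in which the message broker delivers them.
        public class OrderedApplier {
            private long nextExpected = 0;
            private final Map<Long, String> pending = new HashMap<>();
            private final Consumer<String> apply;

            public OrderedApplier(Consumer<String> apply) { this.apply = apply; }

            // Called on every delivery (possibly out of order).
            public synchronized void onMessage(long seq, String change) {
                pending.put(seq, change);
                while (pending.containsKey(nextExpected)) {
                    apply.accept(pending.remove(nextExpected));
                    nextExpected++;
                }
            }

            public static void main(String[] args) {
                OrderedApplier a = new OrderedApplier(System.out::println);
                a.onMessage(1, "second edit"); // buffered
                a.onMessage(0, "first edit");  // prints first, then second
            }
        }

    A new client joining the system would first load the database snapshot and its sequence number, then start this applier at that number, which covers the synchronization-on-connect requirement the abstract raises.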

  1. An architecture and protocol for communications satellite constellations regarded as multi-agent systems

    NASA Technical Reports Server (NTRS)

    Lindley, Craig A.

    1995-01-01

    This paper presents an architecture for satellites regarded as intercommunicating agents. The architecture is based upon a postmodern paradigm of artificial intelligence in which represented knowledge is regarded as text, inference procedures are regarded as social discourse and decision making conventions and the semantics of representations are grounded in the situated behaviour and activity of agents. A particular protocol is described for agent participation in distributed search and retrieval operations conducted as joint activities.

  2. Distributed Optimization and Control | Grid Modernization | NREL

    Science.gov Websites

    developing an innovative, distributed photovoltaic (PV) inverter control architecture that maximizes PV ... communications systems to support distribution grid operations. The growth of PV capacity has introduced ... prescribed limits, while fast variations in PV output tend to cause transients that lead to wear-out of

  3. A Rendering System Independent High Level Architecture Implementation for Networked Virtual Environments

    DTIC Science & Technology

    2002-09-01

    Table-of-contents excerpt: sections on Time Management and Data Distribution Management; subsections on Ownership Management and Data Distribution Management; additional objects and interactions; figures on Data Distribution Management and on RTI and Federate Code Responsibilities (from ref. 2).

  4. Content Management Middleware for the Support of Distributed Teaching

    ERIC Educational Resources Information Center

    Tsalapatas, Hariklia; Stav, John B.; Kalantzis, Christos

    2004-01-01

    eCMS is a web-based federated content management system for the support of distributed teaching based on an open, distributed middleware architecture for the publication, discovery, retrieval, and integration of educational material. The infrastructure supports the management of both standalone material and structured courses, as well as the…

  5. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
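
    To give a feel for the blackboard style of coordination that PRAIS parallelizes, here is a single-process Java sketch: knowledge sources watch a shared fact list and post new facts until quiescence. It is deliberately naive (no network, no CLIPS, no real scheduler), and all names are ours.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal blackboard: rules fire when their facts are present and
        // absent, posting new facts; the loop runs until nothing changes.
        public class Blackboard {
            interface KnowledgeSource { boolean fire(List<String> facts); }

            public static void main(String[] args) {
                List<String> facts = new ArrayList<>(List.of("sensor:temp-high"));
                List<KnowledgeSource> sources = List.of(
                    f -> f.contains("sensor:temp-high") && !f.contains("alarm")
                         && f.add("alarm"),
                    f -> f.contains("alarm") && !f.contains("operator-paged")
                         && f.add("operator-paged"));

                boolean progress = true; // naive scheduling loop
                while (progress) {
                    progress = false;
                    for (KnowledgeSource ks : sources) progress |= ks.fire(facts);
                }
                System.out.println(facts); // [sensor:temp-high, alarm, operator-paged]
            }
        }

    In PRAIS the fact list is a virtual blackboard shared by CLIPS copies on different processors, with message passing replacing the direct list mutation shown here.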

  6. Monitoring Distributed Real-Time Systems: A Survey and Future Directions

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Pike, Lee

    2010-01-01

    Runtime monitors have been proposed as a means to increase the reliability of safety-critical systems. In particular, this report addresses runtime monitors for distributed hard real-time systems. This class of systems has had little attention from the monitoring community. The need for monitors is shown by discussing examples of avionic systems failure. We survey related work in the field of runtime monitoring. Several potential monitoring architectures for distributed real-time systems are presented along with a discussion of how they might be used to monitor properties of interest.

  7. The Deployment of Routing Protocols in Distributed Control Plane of SDN

    PubMed Central

    Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu

    2014-01-01

    Software-defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we examine the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to traditional routing strategies. PMID:25250395

  8. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  9. Networking and AI systems: Requirements and benefits

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The price/performance benefits of networked systems are well documented. The ability to share expensive resources sold timesharing for mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized into open system requirements for hardware, software, applications and tools. The ability to interconnect a variety of vendor products has led to a specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising both for performance and scalability.

  10. Investigating Actuation Force Fight with Asynchronous and Synchronous Redundancy Management Techniques

    NASA Technical Reports Server (NTRS)

    Hall, Brendan; Driscoll, Kevin; Schweiker, Kevin; Dutertre, Bruno

    2013-01-01

    Within distributed fault-tolerant systems the term force-fight is colloquially used to describe the level of command disagreement present at redundant actuation interfaces. This report details an investigation of force-fight using three distributed system case-study architectures. Each case study architecture is abstracted and formally modeled using the Symbolic Analysis Laboratory (SAL) tool chain from the Stanford Research Institute (SRI). We use the formal SAL models to produce k-induction based proofs of a bounded actuation agreement property. We also present a mathematically derived bound of redundant actuation agreement for sine-wave stimulus. The report documents our experiences and lessons learned developing the formal models and the associated proofs.
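
    The flavor of such a bound can be shown with a simple, hedged derivation of our own (not the report's actual proof): if two redundant channels track the same sine-wave command with bounded relative timing skew, the worst-case disagreement at the actuator is linear in amplitude, frequency and skew.

        % Illustrative bound, assuming a command c(t) = A sin(omega t) and
        % inter-channel sampling skew at most delta (our notation, not the
        % report's):
        \[
          |c(t) - c(t-\delta)|
            = A\,\lvert \sin(\omega t) - \sin(\omega(t-\delta)) \rvert
            \le A\,\omega\,\delta,
        \]
        % using |sin x - sin y| <= |x - y|. Force-fight therefore grows with
        % both command frequency and redundancy-channel timing skew.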

  11. INO340 telescope control system: middleware requirements, design, and evaluation

    NASA Astrophysics Data System (ADS)

    Shalchian, Hengameh; Ravanmehr, Reza

    2016-07-01

    The INO340 Control System (INOCS) is being designed in terms of a distributed real-time architecture. The real-time (soft and firm) nature of many processes inside INOCS causes the communication paradigm between its different components to be time-critical and sensitive. For this purpose, we have chosen the Data Distribution Service (DDS) standard as the communications middleware which is itself based on the publish-subscribe paradigm. In this paper, we review and compare the main middleware types, and then we illustrate the middleware architecture of INOCS and its specific requirements. Finally, we present the experimental results, performed to evaluate our middleware in order to ensure that it meets our requirements.
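
    The decoupling that DDS-style publish-subscribe middleware gives INOCS can be miniaturized as below. Real DDS adds typed topics, QoS policies and discovery, so this Java sketch (all names ours) only shows the pattern, not the DDS API.

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.function.Consumer;

        // Topic-based publish-subscribe: publishers and subscribers never
        // reference each other, only topic names.
        public class MiniBus {
            private final Map<String, List<Consumer<String>>> subs =
                new ConcurrentHashMap<>();

            public void subscribe(String topic, Consumer<String> handler) {
                subs.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
                    .add(handler);
            }

            public void publish(String topic, String sample) {
                subs.getOrDefault(topic, List.of()).forEach(h -> h.accept(sample));
            }

            public static void main(String[] args) {
                MiniBus bus = new MiniBus();
                bus.subscribe("telescope/azimuth",
                              s -> System.out.println("tracker got " + s));
                bus.publish("telescope/azimuth", "123.4 deg"); // delivered to all
            }
        }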

  12. Guest Editors' Introduction

    NASA Astrophysics Data System (ADS)

    Guerraoui, Rachid; Vinoski, Steve

    1997-09-01

    The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client-server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components.
    This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6-10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.

  13. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure `plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a `lego-like' fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.

  14. Flexible architecture of data acquisition firmware based on multi-behaviors finite state machine

    NASA Astrophysics Data System (ADS)

    Arpaia, Pasquale; Cimmino, Pasquale

    2016-11-01

    A flexible firmware architecture for different kinds of data acquisition systems, ranging from high-precision bench instruments to low-cost wireless transducer networks, is presented. The key component is a multi-behaviors finite state machine, easily configurable to both low- and high-performance requirements, to diverse operating systems, as well as to on-line and batch measurement algorithms. The proposed solution was validated experimentally on three case studies with different data acquisition architectures: (i) concentrated, in a high-precision instrument for magnetic measurements at CERN, (ii) decentralized, for telemedicine remote monitoring of patients at home, and (iii) distributed, for remote monitoring of a building's energy loss.
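
    One way to picture a configurable, multi-behavior state machine is a table-driven FSM whose transition table can be swapped per behavior, as in the Java sketch below. The states, events and the "streaming" behavior are our illustrative assumptions, not the paper's firmware.

        import java.util.Map;

        // Table-driven FSM: each behavior is just a different transition
        // table handed to the same machine.
        public class AcquisitionFsm {
            enum State { IDLE, SAMPLING, FLUSHING }
            enum Event { START, BUFFER_FULL, FLUSH_DONE, STOP }

            private State state = State.IDLE;
            private final Map<State, Map<Event, State>> table;

            AcquisitionFsm(Map<State, Map<Event, State>> table) { this.table = table; }

            void on(Event e) {
                State next = table.getOrDefault(state, Map.of()).get(e);
                if (next != null) state = next; // ignore events with no transition
            }

            public static void main(String[] args) {
                // "Streaming" behavior: flushing returns straight to sampling.
                Map<State, Map<Event, State>> streaming = Map.of(
                    State.IDLE,     Map.of(Event.START, State.SAMPLING),
                    State.SAMPLING, Map.of(Event.BUFFER_FULL, State.FLUSHING,
                                           Event.STOP, State.IDLE),
                    State.FLUSHING, Map.of(Event.FLUSH_DONE, State.SAMPLING));

                AcquisitionFsm fsm = new AcquisitionFsm(streaming);
                fsm.on(Event.START);
                fsm.on(Event.BUFFER_FULL);
                fsm.on(Event.FLUSH_DONE);
                System.out.println(fsm.state); // SAMPLING
            }
        }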

  15. An e-consent-based shared EHR system architecture for integrated healthcare networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Pretschner, Dietrich P; Haux, Reinhold

    2007-01-01

    Virtual integration of distributed patient data promises advantages over a consolidated health record, but raises questions mainly about practicability and authorization concepts. Our work aims at the specification and development of a virtual shared health record architecture using a patient-centred integration and authorization model. A literature survey summarizes considerations of current architectural approaches. Complemented by a methodical analysis in two regional settings, a formal architecture model was specified and implemented. The results presented in this paper are a survey of architectural approaches for shared health records and an architecture model for a virtual shared EHR, which combines a patient-centred integration policy with provider-oriented document management. An electronic consent system assures that access to the shared record remains under the control of the patient. A corresponding system prototype has been developed and is currently being introduced and evaluated in a regional setting. The proposed architecture is capable of partly replacing message-based communications. Operating highly available provider repositories for the virtual shared EHR requires advanced technology and probably means additional costs for care providers. Acceptance of the proposed architecture depends on transparently embedding document validation and digital signature into the work processes. The paradigm shift from paper-based messaging to a "pull model" needs further evaluation.

  16. Business logic for geoprocessing of distributed geodata

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian

    2006-12-01

    This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).

  17. Master Clock and Time-Signal-Distribution System

    NASA Technical Reports Server (NTRS)

    Tjoelker, Robert; Calhoun, Malcolm; Kuhnle, Paul; Sydnor, Richard; Lauf, John

    2007-01-01

    A timing system comprising an electronic master clock and a subsystem for distributing time signals from the master clock to end users is undergoing development to satisfy anticipated timing requirements of NASA s Deep Space Network (DSN) for the next 20 to 30 years. This system has a modular, flexible, expandable architecture that is easier to operate and maintain than the present frequency and timing subsystem (FTS).

  18. Prognostics and health management system for hydropower plant based on fog computing and docker container

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Zhang, Mingqiang; Tian, Haiping; Huang, Bo; Fu, Wenlong

    2018-02-01

    In this paper, a novel prognostics and health management system architecture for hydropower plant equipment is proposed based on fog computing and Docker containers. We employ fog nodes to improve the real-time processing ability of a cloud-based prognostics and health management system and to overcome problems such as long delay times and network congestion. Storm-based stream processing on the fog node is then presented, which can calculate the health index at the edge of the network. Moreover, a distributed micro-service and Docker container architecture for hydropower plant equipment prognostics and health management is also proposed. Using the micro-service architecture proposed in this paper, a hydropower unit can achieve business intercommunication and seamless integration across different equipment and different manufacturers. Finally, a real application case is given.

  19. High Performance Data Distribution for Scientific Community

    NASA Astrophysics Data System (ADS)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA must find solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the above features. HIDDRA has been addressed by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
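
    The parallel, multi-source idea behind such a download engine can be sketched as segment-per-mirror fetching, as below. The mirrors, sizes and range-request stub are hypothetical, and the sketch is ours, not HIDDRA's engine.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Splits a file into segments and fetches each from a different
        // mirror concurrently, using the aggregate bandwidth of all sources.
        public class SegmentedFetch {
            static byte[] fetchRange(String mirror, long from, long to) {
                // Placeholder for an HTTP/FTP byte-range request to one mirror.
                return new byte[(int) (to - from)];
            }

            public static void main(String[] args) throws Exception {
                List<String> mirrors = List.of("http://a.example", "http://b.example");
                long size = 4096;                      // assumed divisible
                long seg = size / mirrors.size();      // by the mirror count

                ExecutorService pool = Executors.newFixedThreadPool(mirrors.size());
                List<Future<byte[]>> parts = new ArrayList<>();
                for (int i = 0; i < mirrors.size(); i++) {
                    final long from = i * seg, to = (i + 1) * seg;
                    final String mirror = mirrors.get(i);
                    parts.add(pool.submit(() -> fetchRange(mirror, from, to)));
                }
                int total = 0;
                for (Future<byte[]> f : parts) total += f.get().length; // in order
                pool.shutdown();
                System.out.println("downloaded " + total + " bytes from "
                                   + mirrors.size() + " sources");
            }
        }

    Fault tolerance then follows naturally: a segment whose mirror fails can simply be resubmitted against another source.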

  20. Resource-aware system architecture model for implementation of quantum aided Byzantine agreement on quantum repeater networks

    NASA Astrophysics Data System (ADS)

    Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney

    2018-01-01

    Quantum aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze details of the high-level multi-party algorithm, and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup compared to the design based on the standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, space-time product (an estimate of the required fidelity) KQ ≈ 1.3 × 10^5 per node, and error threshold 1.1 × 10^-6 for the total nodes in the network. The evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for a light-weight demonstration of a distributed quantum application on QRNs.

  1. A Novel Design of an Automatic Lighting Control System for a Wireless Sensor Network with Increased Sensor Lifetime and Reduced Sensor Numbers

    PubMed Central

    Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo

    2011-01-01

    Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a Lighting Automatic Control System (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities and performs adjustments based on external lighting effects in external sensor and external sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally we suggest methods for improving uniformity of illuminance distribution on a workplane’s surface, which improves user satisfaction. Finally simulation results are presented to verify the effectiveness of our design. PMID:22164114

  2. A novel design of an automatic lighting control system for a wireless sensor network with increased sensor lifetime and reduced sensor numbers.

    PubMed

    Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo

    2011-01-01

    Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a lighting automatic control system (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities and performs adjustments based on external lighting effects in external sensor and external sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally we suggest methods for improving uniformity of illuminance distribution on a workplane's surface, which improves user satisfaction. Finally simulation results are presented to verify the effectiveness of our design.

  3. The CMIP5 archive architecture: A system for petabyte-scale distributed archival of climate model data

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan

    2010-05-01

    The Phase 5 Coupled Model Intercomparison Project (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report, scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable so that they can be replicated and tracked whilst maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community. Gateways provide a system-wide interface where users can track the version history and location of replicas to select the most appropriate location for download. In addition to meeting the immediate needs of CMIP5, this architecture provides a basis for the Earth System Modeling e-infrastructure being further developed within the EU FP7 IS-ENES project.

  4. An object-oriented software approach for a distributed human tracking motion system

    NASA Astrophysics Data System (ADS)

    Micucci, Daniela L.

    2003-06-01

    Tracking is a composite job involving the co-operation of autonomous activities which exploit a complex information model and rely on a distributed architecture. Both information and activities must be classified and related in several dimensions: abstraction levels (what is modelled and how information is processed); topology (where the modelled entities are); time (when entities exist); strategy (why something happens); and responsibilities (who is in charge of processing the information). A proper object-oriented analysis and design approach leads to a modular architecture where information about conceptual entities is modelled at each abstraction level via classes and intra-level associations, whereas inter-level associations between classes model the abstraction process. Both information and computation are partitioned according to level-specific topological models. They are also placed in a temporal framework modelled by suitable abstractions. Domain-specific strategies control the execution of the computations. Computational components perform both intra-level processing and inter-level information conversion. The paper overviews the phases of the analysis and design process, presents major concepts at each abstraction level, and shows how the resulting design turns into a modular, flexible and adaptive architecture. Finally, the paper sketches how the conceptual architecture can be deployed into a concrete distributed architecture by relying on an experimental framework.

  5. Research on Three-phase Four-wire Inverter

    NASA Astrophysics Data System (ADS)

    Xin, W. D.; Li, X. K.; Huang, G. Z.; Fan, X. C.; Gong, X. J.; Sun, L.; Wang, J.; Zhu, D. W.

    2017-05-01

    The concept of a Voltage Source Converter (VSC) based hybrid AC and DC distribution system architecture is proposed, which can solve traditional AC distribution power quality problems and respond to the needs of DC distribution development. First, a novel VSC structure combining a four-leg, three-phase four-wire topology with an LC filter is adopted, using an overall coordination control scheme of AC current tracking compensation for the grid-interfaced VSC. Finally, a 75 kW simulation and experimental system is designed and tested to verify the performance of the proposed VSC under DC distribution and distributed DC source conditions, as well as for power quality management of the AC distribution.

  6. The blackboard model - A framework for integrating multiple cooperating expert systems

    NASA Technical Reports Server (NTRS)

    Erickson, W. K.

    1985-01-01

    The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.

  7. A comparative analysis of loop heat pipe based thermal architectures for spacecraft thermal control

    NASA Technical Reports Server (NTRS)

    Pauken, Mike; Birur, Gaj

    2004-01-01

    Loop Heat Pipes (LHP) have gained acceptance as a viable means of heat transport in many spacecraft in recent years. However, applications using LHP technology tend to only remove waste heat from a single component to an external radiator. Removing heat from multiple components has been done by using multiple LHPs. This paper discusses the development and implementation of a Loop Heat Pipe based thermal architecture for spacecraft. In this architecture, a Loop Heat Pipe with multiple evaporators and condensers is described in which heat load sharing and thermal control of multiple components can be achieved. A key element in using a LHP thermal architecture is defining the need for such an architecture early in the spacecraft design process. This paper describes an example in which a LHP based thermal architecture can be used and how such a system can have advantages in weight, cost and reliability over other kinds of distributed thermal control systems. The example used in this paper focuses on a Mars Rover Thermal Architecture. However, the principles described here are applicable to Earth orbiting spacecraft as well.

  8. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration

    PubMed Central

    Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael

    2017-01-01

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, which can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381

  9. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  10. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
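
    A small sketch of the kind of performance model the abstract describes: an Amdahl-style speedup estimate with an added architecture-specific overhead term standing in for data locality costs on a ccNUMA machine. The functional form and parameter values are illustrative assumptions, not the paper's actual model.

```python
def predicted_speedup(p: int, parallel_fraction: float,
                      locality_overhead: float) -> float:
    """Amdahl-style speedup on p processors with a per-processor penalty
    standing in for remote-memory (ccNUMA) access costs."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / p + locality_overhead * p)

# With an assumed 95% parallel fraction, the locality penalty caps speedup
# well below the Amdahl limit as the processor count grows.
for p in (1, 2, 4, 8, 16, 32, 64):
    print(p, round(predicted_speedup(p, 0.95, 0.002), 2))
```

    Under these assumed values the predicted speedup flattens and then degrades as the locality penalty grows with processor count, which is the qualitative behaviour the authors attribute to architecture-specific data locality overhead.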

  11. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.

    PubMed

    Losada, Diego P; Fernández, Joaquín L; Paz, Enrique; Sanz, Rafael

    2017-05-03

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system for making hardware elements or subsystems communicate that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results show that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead.

  12. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and during 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  13. TEAM (Technologies Enabling Agile Manufacturing) shop floor control requirements guide: Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-28

    TEAM will create a shop floor control system (SFC) to link pre-production planning to shop floor execution. SFC must meet the requirements of a multi-facility corporation, where control must be maintained between co-located facilities down to individual workstations within each facility. SFC must also meet the requirements of a small corporation, where there may only be one small facility. A hierarchical architecture is required to meet these diverse needs. The hierarchy contains the following levels: Enterprise, Factory, Cell, Station, and Equipment. SFC is focused on the top three levels. Each level of the hierarchy is divided into three basic functions: Scheduler, Dispatcher, and Monitor. The requirements of each function depend on the hierarchical level in which it is to be used. For example, the scheduler at the Enterprise level must allocate production to individual factories and assign due-dates; the scheduler at the Cell level must provide detailed start and stop times of individual operations. Finally, the system shall have the following features: distributed and open-architecture. Open-architecture software is required so that the appropriate technology can be used at each level of the SFC hierarchy, and even at different instances within the same hierarchical level (for example, Factory A uses discrete-event simulation scheduling software, and Factory B uses an optimization-based scheduler). A distributed implementation is required to reduce the computational burden of the overall system and allow for localized control. A distributed, open-architecture implementation will also require standards for communication between hierarchical levels.
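
    A toy sketch of the Scheduler function at two of the levels named above: the Enterprise scheduler allocates jobs to factories, while the Cell scheduler produces detailed start and stop times for individual operations. The round-robin and first-come policies, job format, and names are illustrative stand-ins, not the TEAM specification.

```python
def enterprise_scheduler(jobs, factories):
    """Enterprise level: allocate production to factories (round-robin here);
    a fuller model would also assign due-dates at this level."""
    plan = {factory: [] for factory in factories}
    for i, job in enumerate(jobs):
        plan[factories[i % len(factories)]].append(job)
    return plan

def cell_scheduler(jobs):
    """Cell level: detailed start/stop times for individual operations."""
    clock, timetable = 0.0, []
    for job in jobs:
        timetable.append((job["id"], clock, clock + job["minutes"]))
        clock += job["minutes"]
    return timetable

jobs = [{"id": "J1", "minutes": 30}, {"id": "J2", "minutes": 45},
        {"id": "J3", "minutes": 20}]
plan = enterprise_scheduler(jobs, ["FactoryA", "FactoryB"])
for factory, factory_jobs in plan.items():
    print(factory, cell_scheduler(factory_jobs))
```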

  14. Peeling the Onion: Okapi System Architecture and Software Design Issues.

    ERIC Educational Resources Information Center

    Jones, S.; And Others

    1997-01-01

    Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)

  15. Architecture-Centric Development in Globally Distributed Projects

    NASA Astrophysics Data System (ADS)

    Sauer, Joachim

    In this chapter architecture-centric development is proposed as a means to strengthen the cohesion of distributed teams and to tackle challenges due to geographical and temporal distances and the clash of different cultures. A shared software architecture serves as blueprint for all activities in the development process and ties them together. Architecture-centric development thus provides a plan for task allocation, facilitates the cooperation of globally distributed developers, and enables continuous integration reaching across distributed teams. Advice is also provided for software architects who work with distributed teams in an agile manner.

  16. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer (IC) network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects, together with a simulation of the AIPS contention poll, demonstrates the robustness of the system.

  17. MonALISA, an agent-based monitoring and control system for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing programs of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on the Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs or services, system control and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.

  18. The SKA1 LOW telescope: system architecture and design performance

    NASA Astrophysics Data System (ADS)

    Waterson, Mark F.; Labate, Maria Grazia; Schnetler, Hermine; Wagg, Jeff; Turner, Wallace; Dewdney, Peter

    2016-07-01

    The SKA1-LOW radio telescope will be a low-frequency (50-350 MHz) aperture array located in Western Australia. Its scientific objectives will prioritize studies of the Epoch of Reionization and pulsar physics. Development of the telescope has been allocated to consortia responsible for the aperture array front end, timing distribution, signal and data transport, correlation and beamforming signal processors, infrastructure, monitor and control systems, and science data processing. This paper will describe the system architectural design and key performance parameters of the telescope and summarize the high-level sub-system designs of the consortia.

  19. Hybrid Network Defense Model Based on Fuzzy Evaluation

    PubMed Central

    2014-01-01

    With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture. PMID:24574870

  20. High rate information systems - Architectural trends in support of the interdisciplinary investigator

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Preheim, Larry E.

    1990-01-01

    Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived as a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.

  1. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background to improve the efficiency of image reading and application, and solves the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through an actual Hadoop service system.
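
    A schematic MapReduce pass of the sort the paper applies to imagery: mappers emit (tile, pixel-block) pairs and a reducer assembles each tile. Plain Python stands in for Hadoop here; the tile size and key format are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

TILE = 256  # tile edge length in pixels (assumed)

def map_blocks(image_rows):
    """Map phase: emit ((tile_row, tile_col), (y, x, pixel)) pairs."""
    for y, row in enumerate(image_rows):
        for x, pixel in enumerate(row):
            yield (y // TILE, x // TILE), (y % TILE, x % TILE, pixel)

def reduce_tile(values):
    """Reduce phase: assemble one tile from its pixel records."""
    return {(y, x): pixel for y, x, pixel in values}

image = [[y * 512 + x for x in range(512)] for y in range(512)]
grouped = defaultdict(list)          # the shuffle phase, done in memory here
for key, value in map_blocks(image):
    grouped[key].append(value)
tiles = {key: reduce_tile(values) for key, values in grouped.items()}
print(len(tiles), "tiles built")     # 4 tiles for a 512x512 image
```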

  2. The architecture of a virtual grid GIS server

    NASA Astrophysics Data System (ADS)

    Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting

    2008-10-01

    The grid computing technology provides a service-oriented architecture for distributed applications. The virtual grid GIS server is a distributed and interoperable enterprise GIS architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layers, which together compose the Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. The Microkernel GIS greatly reduces the degree of coupling between applications and GIS platforms. The enterprise applications are independent of particular GIS platforms, allowing application developers to concentrate on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.

  3. A new flight control and management system architecture and configuration

    NASA Astrophysics Data System (ADS)

    Kong, Fan-e.; Chen, Zongji

    2006-11-01

    An advanced fighter should possess capabilities such as supersonic cruise, stealth, agility, STOVL (Short Take-Off Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough to improve only the aerodynamic and propulsion systems; more importantly, it is necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle and control augmentation, but also mission management. The F-22 and JSF possess outstanding flight control systems built on the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. A control system constructed on this architecture should be highly integrated, inexpensive, fault-tolerant, safe, reliable and effective, and it will take charge of both flight control and mission management. Starting from this purpose, this paper proceeds as follows. First, based on human nervous control, a three-level hierarchical control architecture is proposed. At the top of the architecture, the decision level is in charge of decision-making; in the middle, the organization and coordination level schedules resources, monitors the states of the fighter, switches control modes, etc.; and at the bottom is the execution level, which performs the actual actuation and measurement. Then, according to their function and resources, all the tasks involving flight control and mission management are assigned to individual levels. Finally, in order to validate the three-level architecture, a physical configuration is also shown. The configuration is distributed and applies recent advances from the information technology industry, such as line-replaceable modules and cluster technology.

  4. Model-Unified Planning and Execution for Distributed Autonomous System Control

    NASA Technical Reports Server (NTRS)

    Aschwanden, Pascal; Baskaran, Vijay; Bernardini, Sara; Fry, Chuck; Moreno, Maria; Muscettola, Nicola; Plaunt, Chris; Rijsman, David; Tompkins, Paul

    2006-01-01

    The Intelligent Distributed Execution Architecture (IDEA) is a real-time architecture that exploits artificial intelligence planning as the core reasoning engine for interacting autonomous agents. Rather than enforcing separate deliberation and execution layers, IDEA unifies them under a single planning technology. Deliberative and reactive planners reason about and act according to a single representation of the past, present and future domain state. The domain state evolves according to the rules dictated by a declarative model of the subsystem to be controlled, the internal processes of the IDEA controller, and interactions with other agents. We present IDEA concepts - modeling, the IDEA core architecture, the unification of deliberation and reaction under planning - and illustrate its use in a simple example. Finally, we present several real-world applications of IDEA, and compare IDEA to other high-level control approaches.

  5. Space Power Architectures for NASA Missions: The Applicability and Benefits of Advanced Power and Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.

    2001-01-01

    The relative importance of electrical power systems as compared with other spacecraft bus systems is examined. The quantified benefits of advanced space power architectures for NASA Earth Science, Space Science, and Human Exploration and Development of Space (HEDS) missions are then presented. Advanced space power technologies highlighted include high specific power solar arrays, regenerative fuel cells, Stirling radioisotope power sources, flywheel energy storage and attitude control, lithium ion polymer energy storage and advanced power management and distribution.

  6. From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems

    DTIC Science & Technology

    2015-03-13

    A. Lee, "A Programming Model for Time-Synchronized Distributed Real-Time Systems", in: Proceedings of the Real-Time and Embedded Technology and Applications Symposium, 2007, pp. 259-268. From MetroII to Metronomy: designing a contract-based function-architecture co-simulation framework for timing verification of cyber-physical systems.

  7. A General theory of Signal Integration for Fault-Tolerant Dynamic Distributed Sensor Networks

    DTIC Science & Technology

    1993-10-01

    related to (a) the architecture and fault-tolerance of the distributed sensor network, (b) the proper synchronisation of sensor signals, (c) the ... computational complexities of the problem of distributed detection. (5) Issues related to recording of events and synchronization in distributed sensor ... "Intervals for Synchronization in Real Time Distributed Systems", submitted to Electronic Encyclopedia. 3. V. G. Hegde and S. S. Iyengar, "Efficient ...

  8. 78 FR 9951 - Excepted Service

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-12

    ...) Not to exceed 3000 positions that require unique cyber security skills and knowledge to perform cyber..., distributed control systems security, cyber incident response, cyber exercise facilitation and management, cyber vulnerability detection and assessment, network and systems engineering, enterprise architecture...

  9. Open architecture of smart sensor suites

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten

    2017-10-01

    Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.

  10. The development of the rhizosphere: simulation of root exudation for two contrasting exudates: citrate and mucilage

    NASA Astrophysics Data System (ADS)

    Sheng, Cheng; Bol, Roland; Vetterlein, Doris; Vanderborght, Jan; Schnepf, Andrea

    2017-04-01

    Different types of root exudates and their effect on soil/rhizosphere properties have received a lot of attention. Since their influence on rhizosphere properties and processes depends on their concentration in the soil, the assessment of the spatio-temporal exudate concentration distribution around roots is of key importance for understanding the functioning of the rhizosphere. Different root systems have different root architectures, and different types of root exudates diffuse in the rhizosphere with different diffusion coefficients; both are responsible for the dynamics of the exudate concentration distribution in the rhizosphere. Hence, simulations of root exudation involving four plant root systems (Vicia faba, Lupinus albus, Triticum aestivum and Zea mays) and two root exudates (citrate and mucilage) were conducted. We consider a simplified root architecture where each root is represented by a straight line. Assuming that root tips move at a constant velocity and that mucilage transport is linear, concentration distributions can be obtained from a convolution of the analytical solution of the transport equation in a stationary flow field for an instantaneous point source injection with the spatio-temporal distribution of the source strength. By coupling the analytical equation with a root growth model that delivers the spatio-temporal source term, we simulated exudate concentration distributions for citrate and mucilage in MATLAB. From the simulation results, we inferred the following information about the rhizosphere: (a) the dynamics of root architecture development are the main determinant of the exudate distribution in the root zone; (b) a steady rhizosphere with constant width is more likely to develop for individual roots when the diffusion coefficient is small. The simulations suggest that rhizosphere development depends on root and exudate properties as follows: the dynamics of the root architecture result in various development patterns of the rhizosphere. The results also improve our understanding of the impact of the spatial and temporal heterogeneity of exudate input on rhizosphere development for different root system types and substances. In future work, we will use the simulation tool to infer critical parameters that determine the spatio-temporal extent of the rhizosphere from experimental data.

  11. An OAIS-Based Hospital Information System on the Cloud: Analysis of a NoSQL Column-Oriented Approach.

    PubMed

    Celesti, Antonio; Fazio, Maria; Romano, Agata; Bramanti, Alessia; Bramanti, Placido; Villari, Massimo

    2018-05-01

    The Open Archive Information System (OAIS) is a reference model for organizing people and resources in a system, and it is already adopted in care centers and medical systems to efficiently manage clinical data, medical personnel, and patients. Archival storage systems are typically implemented using traditional relational database systems, but relation-oriented technology strongly limits efficiency in the management of huge amounts of patients' clinical data, especially in emerging cloud-based systems, which are distributed. In this paper, we present an OAIS healthcare architecture for managing a huge amount of HL7 clinical documents in a scalable way. Specifically, it is based on a NoSQL column-oriented Data Base Management System deployed in the cloud, so as to benefit from big tables and wide rows available over a virtual distributed infrastructure. We developed a prototype of the proposed architecture at the IRCCS, and we evaluated its efficiency in a real case study.

  12. The emotion system promotes diversity and evolvability

    PubMed Central

    Giske, Jarl; Eliassen, Sigrunn; Fiksen, Øyvind; Jakobsen, Per J.; Aksnes, Dag L.; Mangel, Marc; Jørgensen, Christian

    2014-01-01

    Studies on the relationship between the optimal phenotype and its environment have had limited focus on genotype-to-phenotype pathways and their evolutionary consequences. Here, we study how multi-layered trait architecture and its associated constraints prescribe diversity. Using an idealized model of the emotion system in fish, we find that trait architecture yields genetic and phenotypic diversity even in the absence of frequency-dependent selection or environmental variation. That is, for a given environment, phenotype frequency distributions are predictable while gene pools are not. The conservation of phenotypic traits among these genetically different populations is due to the multi-layered trait architecture, in which one adaptation at a higher architectural level can be achieved by several different adaptations at a lower level. Our results emphasize the role of convergent evolution and the organismal level of selection. While trait architecture makes individuals more constrained than what has been assumed in optimization theory, the resulting populations are genetically more diverse and adaptable. The emotion system in animals may thus have evolved by natural selection because it simultaneously enhances three important functions, the behavioural robustness of individuals, the evolvability of gene pools and the rate of evolutionary innovation at several architectural levels. PMID:25100697

  13. The emotion system promotes diversity and evolvability.

    PubMed

    Giske, Jarl; Eliassen, Sigrunn; Fiksen, Øyvind; Jakobsen, Per J; Aksnes, Dag L; Mangel, Marc; Jørgensen, Christian

    2014-09-22

    Studies on the relationship between the optimal phenotype and its environment have had limited focus on genotype-to-phenotype pathways and their evolutionary consequences. Here, we study how multi-layered trait architecture and its associated constraints prescribe diversity. Using an idealized model of the emotion system in fish, we find that trait architecture yields genetic and phenotypic diversity even in the absence of frequency-dependent selection or environmental variation. That is, for a given environment, phenotype frequency distributions are predictable while gene pools are not. The conservation of phenotypic traits among these genetically different populations is due to the multi-layered trait architecture, in which one adaptation at a higher architectural level can be achieved by several different adaptations at a lower level. Our results emphasize the role of convergent evolution and the organismal level of selection. While trait architecture makes individuals more constrained than what has been assumed in optimization theory, the resulting populations are genetically more diverse and adaptable. The emotion system in animals may thus have evolved by natural selection because it simultaneously enhances three important functions, the behavioural robustness of individuals, the evolvability of gene pools and the rate of evolutionary innovation at several architectural levels.

  14. TMN: Introduction and interpretation

    NASA Astrophysics Data System (ADS)

    Pras, Aiko

    An overview of Telecommunications Management Network (TMN) status is presented. Its relation with Open System Interconnection (OSI) systems management is given and the commonalities and distinctions are identified. Those aspects that distinguish TMN from OSI management are introduced; TMN's functional and physical architectures and TMN's logical layered architecture are discussed. An analysis of the concepts used by these architectures (reference point, interface, function block, and building block) is given. The use of these concepts to express geographical distribution and functional layering is investigated. This aspect is interesting to understand how OSI management protocols can be used in a TMN environment. A statement regarding applicability of TMN as a model that helps the designers of (management) networks is given.

  15. Fault tolerant and lifetime control architecture for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Chen, Yi-Liang; Sundareswaran, Venkataraman; Altshuler, Thomas

    2008-04-01

    Increased vehicle autonomy, survivability and utility can provide an unprecedented impact on mission success and are among the most desirable improvements for modern autonomous vehicles. We propose a general architecture of intelligent resource allocation, reconfigurable control and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles, and it provides improved vehicle survivability, extended service intervals, and greater operational autonomy through a lower rate of time-critical mission failures and less dependence on supplies and maintenance. The architecture enables mission distribution, adaptation and execution constrained by vehicle and payload faults and the desired lifetime. The proposed architecture will allow missions to be managed more efficiently by weighing vehicle capabilities against mission objectives and replacing the vehicle only when necessary.

  16. From Onions to Shallots: Rewarding Tor Relays with TEARS

    DTIC Science & Technology

    2014-07-18

    distributed banking using protocols from distributed digital cryptocurrency systems like Bitcoin. Shallots are publicly-verifiable, minimizing reliance on and trust in banking authorities, making them auditable while naturally distributing ... model; none of these have yet been implemented. In this paper, we draw upon the distributed Bitcoin architecture to design a transparent, efficient, and ...

  17. The contemporary Malay Cultural and architecture in Medan City

    NASA Astrophysics Data System (ADS)

    Nawawiy Loebis, M.; Nirfalini Aulia, Dwira; Asdiana; Tuah Aditya Saragih, Jhon

    2018-03-01

    Malay culture is one of the identities of the city of Medan. In particular, the Malay kingdom played an important role in the history of the city of Medan and the Deli river. Some relics of the Malay kingdom survive in the form of buildings with Malay architecture that now form a tourist area. In this modern era, many buildings are designed in a contemporary style, leaving behind their cultural background, especially Malay architecture. The research methodology used is a qualitative methodology based on observation, interviews with Malay informants still living in Medan, and the distribution of questionnaires to the Malay community. The variables tested are location and environment, language, technology, livelihood and organizational systems, the arts, and the religious system. This study aims to determine the contemporary Malay culture and architecture prevailing in today's Malay society.

  18. Data management system performance modeling

    NASA Technical Reports Server (NTRS)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture and queuing are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.

  19. The design of multiplayer online video game systems

    NASA Astrophysics Data System (ADS)

    Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review, analyze, and evaluate the current MOG system architecture. Furthermore, we propose a clustered-server architecture to provide a scalable solution together with a region-oriented allocation strategy. Two key issues, interest management and synchronization, are discussed in depth, and some preliminary ideas to deal with the identified problems are described.
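
    A small sketch of region-oriented interest management as discussed above: the world is divided into square regions, each region is allocated to a server in the cluster, and a player only receives updates from entities in its own or adjacent regions. The region size, allocation rule, and adjacency radius are illustrative assumptions, not the paper's design.

```python
REGION = 100.0  # world units per region edge (assumed)

def region_of(pos):
    return (int(pos[0] // REGION), int(pos[1] // REGION))

def server_for(region, num_servers):
    return hash(region) % num_servers  # naive region-to-server allocation

def interested(subscriber_pos, entity_pos):
    """An entity is of interest if it lies in the same or an adjacent region."""
    (rx, ry), (ex, ey) = region_of(subscriber_pos), region_of(entity_pos)
    return abs(rx - ex) <= 1 and abs(ry - ey) <= 1

players = {"alice": (10.0, 20.0), "bob": (130.0, 40.0), "carol": (900.0, 900.0)}
for name, pos in players.items():
    visible = [other for other, p in players.items()
               if other != name and interested(pos, p)]
    print(name, "on server", server_for(region_of(pos), 4), "sees", visible)
```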

  20. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution (PMAD) system. Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high-efficiency switching power converters and analog regulators. Mass and volume estimates are presented for all subsystems.

  1. A Review of Microgrid Architectures and Control Strategy

    NASA Astrophysics Data System (ADS)

    Jadav, Krishnarajsinh A.; Karkar, Hitesh M.; Trivedi, I. N.

    2017-12-01

    In this paper, microgrid architectures and various converter control strategies are reviewed. A microgrid is defined as an interconnected network of distributed energy resources, loads and energy storage systems; this emerging concept realizes the potential of distributed generators. An AC microgrid interconnects AC distributed generators, such as wind turbines, and DC distributed generators, such as PV and fuel cells, using inverters, while in a DC microgrid the output of an AC distributed generator must be converted to DC using rectifiers and a DC distributed generator can be connected directly. The hybrid microgrid is the solution that avoids the multiple reverse conversions (AC-DC-AC and DC-AC-DC) occurring in the individual AC and DC microgrids: all AC distributed generators are connected in the AC microgrid and all DC distributed generators in the DC microgrid. An interlinking converter is used for power balance between the two microgrids, transferring power from one microgrid to the other if either is overloaded. At the end, a review of interlinking converter control strategies is presented.
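
    A toy illustration of the interlinking converter's role as described above: compare the surplus on the AC and DC sides and transfer power toward the overloaded microgrid, clamped to the converter rating. The proportional policy and the numbers are assumptions for illustration, not a control strategy from the reviewed literature.

```python
def interlink_transfer(ac_gen_kw, ac_load_kw, dc_gen_kw, dc_load_kw,
                       rating_kw=50.0):
    """Positive result: power flows AC -> DC; negative: DC -> AC."""
    ac_surplus = ac_gen_kw - ac_load_kw
    dc_surplus = dc_gen_kw - dc_load_kw
    # Move half the imbalance, clamped to the converter's power rating.
    transfer = (ac_surplus - dc_surplus) / 2.0
    return max(-rating_kw, min(rating_kw, transfer))

# DC side is 30 kW short while the AC side has 20 kW spare, so the
# converter routes 25 kW toward the DC microgrid.
print(interlink_transfer(ac_gen_kw=120, ac_load_kw=100,
                         dc_gen_kw=40, dc_load_kw=70))
```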

  2. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
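
    A minimal ABC rejection sampler in the spirit of the method described above: draw a population parameter from the prior, push it through a forward model to synthesize data, and accept the draw when a summary statistic falls within a tolerance of the observed value. The toy forward model (Rayleigh-distributed mutual inclinations), summary statistic, and tolerance are illustrative assumptions, far simpler than the paper's physically motivated pipeline.

```python
import math, random, statistics

def forward_model(sigma_incl, n=200):
    """Simulate Rayleigh-distributed mutual inclinations (degrees)."""
    return [sigma_incl * math.sqrt(-2.0 * math.log(1.0 - random.random()))
            for _ in range(n)]

def summary(sample):
    return statistics.median(sample)   # one crude summary statistic

observed = forward_model(2.0)          # stand-in for the observed catalog
obs_stat, tolerance, accepted = summary(observed), 0.2, []
for _ in range(5000):
    sigma = random.uniform(0.1, 10.0)  # draw from a flat prior
    if abs(summary(forward_model(sigma)) - obs_stat) < tolerance:
        accepted.append(sigma)         # keep draws that reproduce the data
if accepted:
    print("approximate posterior mean sigma:",
          round(statistics.mean(accepted), 2))
```

    The accepted draws approximate the posterior; sequential ABC algorithms such as the ones the authors build on improve efficiency by shrinking the tolerance over successive generations.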

  3. A Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response

    PubMed Central

    2017-01-01

    The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed literature, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions, specifically for outdoor scenarios. Energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and overtaking traditional technologies. PMID:29075430

  4. A Systematic Review on Recent Advances in mHealth Systems: Deployment Architecture for Emergency Response.

    PubMed

    Gonzalez, Enrique; Peña, Raul; Avila, Alfonso; Vargas-Rosales, Cesar; Munoz-Rodriguez, David

    2017-01-01

    The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on the connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed literature, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions, specifically for outdoor scenarios. Energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and overtaking traditional technologies.

  5. Space/ground systems as cooperating agents

    NASA Technical Reports Server (NTRS)

    Grant, T. J.

    1994-01-01

    Within NASA and the European Space Agency (ESA) it is agreed that autonomy is an important goal for the design of future spacecraft and that this requires on-board artificial intelligence. NASA emphasizes deep space and planetary rover missions, while ESA considers on-board autonomy as an enabling technology for missions that must cope with imperfect communications. ESA's attention is on the space/ground system. A major issue is the optimal distribution of intelligent functions within the space/ground system. This paper describes the multi-agent architecture for space/ground systems (MAASGS) which would enable this issue to be investigated. A MAASGS agent may model a complete spacecraft, a spacecraft subsystem or payload, a ground segment, a spacecraft control system, a human operator, or an environment. The MAASGS architecture has evolved through a series of prototypes. The paper recommends that the MAASGS architecture should be implemented in the operational Dutch Utilization Center.

  6. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE PAGES

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin; ...

    2018-04-09

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, as far as we know only limited attention has been paid to the development of a suite of hybrid communication simulation models for communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP address mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar photovoltaic (PV) communications system, including a single-trip latency of 300 ms, a throughput of 9.6 Kbps, and a packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy three performance metrics that are critical for distributed energy resource communications.
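
    A small sketch of checking a simulated network trace against the three design criteria quoted above (single-trip latency of 300 ms, throughput of 9.6 Kbps, packet loss rate of 1%). The trace format and statistics are illustrative assumptions; the paper validates these criteria with full hybrid communication simulation models.

```python
def validate_link(latencies_ms, bits_delivered, seconds, packets_sent,
                  packets_lost):
    """Check one simulated trace against the three design criteria."""
    return {
        "latency_ok":    max(latencies_ms) <= 300.0,           # single trip
        "throughput_ok": bits_delivered / seconds >= 9600.0,   # 9.6 Kbps
        "loss_ok":       packets_lost / packets_sent <= 0.01,  # 1% loss
    }

print(validate_link(latencies_ms=[42.0, 120.5, 287.9],
                    bits_delivered=1_200_000, seconds=120.0,
                    packets_sent=10_000, packets_lost=73))
# -> {'latency_ok': True, 'throughput_ok': True, 'loss_ok': True}
```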

  7. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, as far as we know there is only limited attention that has been paid to the development of a suite of hybrid communication simulation models for the communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP address mechanisms in one simulation. To close this gap, we have developed amore » suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar Photovoltaic (PV) communications system, including a single trip latency of 300 ms, throughput of 9.6 Kbps, and packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy three performance metrics that are critical for distributed energy resource communications.« less

  8. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    The satellite systems Fault Management (FM) discipline is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most is system complexity due to a need to establish a multi-dimensional structure across hardware, software and operations. This structure is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. These architecture, implementation and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed these issues in terms of V&V for a representative set of architectures. NASA's IV&V program is funded by NASA's Software Assurance Research Program (SARP) in partnership with NASA's Jet Propulsion Laboratory (JPL) to extend the work performed at the Workshop session. NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set for robustness, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This work focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures, visibility, and associated V&V/IV&V techniques provides a data set that can enable higher assurance that a satellite system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook, providing dissemination across NASA, other agencies and the satellite community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research, including identification of FM architectures, visibility observations, and methods utilized for V&V/IV&V.

  9. Distributed photovoltaic architecture powering a DC bus: Impact of duty cycle and load variations on the efficiency of the generator

    NASA Astrophysics Data System (ADS)

    Allouache, Hadj; Zegaoui, Abdallah; Boutoubat, Mohamed; Bokhtache, Aicha Aissa; Kessaissia, Fatma Zohra; Charles, Jean-Pierre; Aillerie, Michel

    2018-05-01

    This paper focuses on a photovoltaic generator feeding a load via a boost converter in a distributed PV architecture. The principal target is the evaluation of the efficiency of a distributed photovoltaic architecture powering a direct current (DC) PV bus. This task is achieved by outlining an original way of tracking the Maximum Power Point (MPP) that takes into account the effects of load variations and duty cycle on the electrical quantities of the boost converter and on the apparent output impedance of the PV generator. Thereafter, in a given sized PV system, we analyze the influence of load variations on the behavior of the boost converter and deduce the limits imposed by the load on the DC PV bus. The simultaneous influences of (1) the variation of the duty cycle of the boost converter and (2) the load power on the parameters of the various components of the photovoltaic chain and on the boost converter's performance are clearly presented, as deduced by simulation.
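
    An illustrative perturb-and-observe MPPT loop for a boost converter of the kind studied above: perturb the duty cycle, observe the change in PV output power, and keep stepping in the direction that increases power. The quadratic power curve and step size are stand-in assumptions; the paper's original tracking method additionally accounts for load variations.

```python
def pv_power(duty):
    """Hypothetical PV output power vs. duty cycle, peaking near duty = 0.55."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

def perturb_and_observe(duty=0.30, step=0.01, iterations=60):
    last_power = pv_power(duty)
    for _ in range(iterations):
        duty += step
        power = pv_power(duty)
        if power < last_power:   # wrong direction: reverse the perturbation
            step = -step
        last_power = power
    return duty, last_power

duty, power = perturb_and_observe()
print(f"converged near duty={duty:.2f}, P={power:.1f} W")
```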

  10. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human-centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Networks static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  11. A New On-Line Diagnosis Protocol for the SPIDER Family of Byzantine Fault Tolerant Architectures

    NASA Technical Reports Server (NTRS)

    Geser, Alfons; Miner, Paul S.

    2004-01-01

    This paper presents the formal verification of a new protocol for online distributed diagnosis for the SPIDER family of architectures. An instance of the Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) architecture consists of a collection of processing elements communicating over a Reliable Optical Bus (ROBUS). The ROBUS is a specialized fault-tolerant device that guarantees Interactive Consistency, Distributed Diagnosis (Group Membership), and Synchronization in the presence of a bounded number of physical faults. Formal verification of the original SPIDER diagnosis protocol provided a detailed understanding that led to the discovery of a significantly more efficient protocol. The original protocol was adapted from the formally verified protocol used in the MAFT architecture. It required O(N) message exchanges per defendant to correctly diagnose failures in a system with N nodes. The new protocol achieves the same diagnostic fidelity, but only requires O(1) exchanges per defendant. This paper presents this new diagnosis protocol and a formal proof of its correctness using PVS.
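
    A toy illustration of the general idea of distributed diagnosis by voting, not the verified SPIDER protocol itself: each node broadcasts one accusation vector about the others (a constant number of exchanges per defendant), and a defendant is convicted when a strict majority of the other nodes accuse it. The node names and the majority rule are didactic assumptions; the actual protocol and its correctness proof in PVS are given in the paper.

```python
def diagnose(accusations):
    """accusations: {voter: set of node ids that this voter accuses}"""
    nodes = set(accusations)
    convicted = set()
    for defendant in nodes:
        votes = sum(1 for voter, accused in accusations.items()
                    if voter != defendant and defendant in accused)
        if votes > (len(nodes) - 1) / 2:   # strict majority of other voters
            convicted.add(defendant)
    return convicted

# Node D is faulty and accuses everyone; A, B, and C correctly accuse only D.
print(diagnose({"A": {"D"}, "B": {"D"}, "C": {"D"},
                "D": {"A", "B", "C"}}))    # -> {'D'}
```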

  12. NELS 2.0 - A general system for enterprise wide information management

    NASA Technical Reports Server (NTRS)

    Smith, Stephanie L.

    1993-01-01

    NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full-text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.

  13. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  14. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures for manufacturing systems have been proposed, aiming at realizing more flexible control structures. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. In this research, a real-time scheduling method is proposed to select suitable combinations of human operators, resources and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select, based on their preferences, the manufacturing processes they will carry out in the next time period. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
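
    A compact sketch of the first step of the proposed method: each human operator takes the remaining manufacturing process they score highest, based on their preferences. The preference scores and the greedy selection order are illustrative stand-ins; the paper's second and third steps run analogous matchings for machine tools/jobs and AGVs/jobs using the utility-value-based and dispatching-rule-based methods.

```python
def pick_by_preference(operators, processes):
    """Step 1: each operator takes the remaining process they score highest."""
    chosen, remaining = {}, set(processes)
    for operator, prefs in operators.items():
        if not remaining:
            break
        best = max(remaining, key=lambda proc: prefs.get(proc, 0.0))
        chosen[operator] = best
        remaining.discard(best)
    return chosen

operators = {"op1": {"milling": 0.9, "turning": 0.4},
             "op2": {"milling": 0.5, "turning": 0.8}}
print(pick_by_preference(operators, ["milling", "turning"]))
# -> {'op1': 'milling', 'op2': 'turning'}
```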

  15. Optical beam forming techniques for phased array antennas

    NASA Technical Reports Server (NTRS)

    Wu, Te-Kao; Chandler, C.

    1993-01-01

Conventional phased array antennas using waveguide or coax for signal distribution are impractical for large-scale implementation on satellites or spacecraft because they exhibit prohibitively large system size, heavy weight, high attenuation loss, limited bandwidth, and sensitivity to electromagnetic interference (EMI), temperature drifts, and phase instability. Optical beam forming systems, by contrast, are smaller, lighter, and more flexible. Three optical beam forming techniques are identified as applicable to large spaceborne phased array antennas: (1) optical fiber replacement of conventional RF phased array distribution and control components, (2) spatial beam forming, and (3) optical beam splitting with integrated quasi-optical components. The optical fiber replacement and spatial beam forming approaches have been pursued by many organizations. Two new optical beam forming architectures are presented. Both architectures involve monolithic integration of the antenna radiating elements with quasi-optical grid detector arrays. The advantages of the grid detector array in the optical process are higher power-handling capability and greater dynamic range. The first architecture is a modified version of the original spatial beam forming approach; the basic difference is the spatial light modulator (SLM) device used to control the aperture field distribution. The original liquid crystal light valve SLM is replaced by an optical shuffling SLM, which was demonstrated for 'smart pixel' technology. The advantages are the capability to generate the agile beams of a phased array antenna and to provide simultaneous transmit and receive functions. The second architecture is the optical beam splitting approach. It provides an alternative amplitude control for each antenna element with an optical beam power divider comprised of mirrors and beam splitters. It also implements the quasi-optical grid phase shifter for phase control and the grid amplifier for RF power. The advantages are that no SLM is required for this approach and that the complete antenna system is capable of full monolithic integration.

  16. The GOES-R Product Generation Architecture

    NASA Astrophysics Data System (ADS)

    Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.

    2011-12-01

The GOES-R system will substantially improve users' ability to succeed in their work by providing data from significantly enhanced instruments, with higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation (PG) architecture is designed to provide the computer and memory resources necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R ground segment (GS) architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Because algorithms require product data from other algorithms, a scalable and reliable messaging layer is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
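
    The Executive/Dispatcher/Strategy split lends itself to a compact sketch. The Python below is a hedged illustration of that division of labor, with a dictionary standing in for the Data Fabric; every name and interface here is an assumption, not GOES-R ground segment code.

        class Strategy:
            """Decides when an algorithm has the inputs it needs to run."""
            def __init__(self, required):
                self.required = set(required)

            def ready(self, available):
                return self.required <= set(available)

        class Dispatcher:
            """Pulls the algorithm's inputs out of the data fabric."""
            def __init__(self, data_fabric):
                self.data_fabric = data_fabric

            def inputs_for(self, names):
                return {n: self.data_fabric[n] for n in names}

        class Executive:
            """Wraps a science algorithm as a managed service."""
            def __init__(self, algorithm, strategy, dispatcher):
                self.algorithm = algorithm
                self.strategy = strategy
                self.dispatcher = dispatcher

            def on_data_event(self, available):
                if self.strategy.ready(available):
                    return self.algorithm(**self.dispatcher.inputs_for(self.strategy.required))

        fabric = {"radiances": [1.0, 2.0], "calibration": 0.5}
        service = Executive(lambda radiances, calibration: [r * calibration for r in radiances],
                            Strategy(["radiances", "calibration"]), Dispatcher(fabric))
        print(service.on_data_event(fabric))  # [0.5, 1.0]; runs only when both inputs exist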

  17. Processor tradeoffs in distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, Kang G.; Bhandari, Inderpal S.

    1987-01-01

The optimization of real-time distributed system design is examined with reference to a class of computer architectures similar to the continuously reconfigurable multiprocessor flight control system structure, CM2FCS. Particular attention is given to the impact of processor replacement and burn-in time on the probability of dynamic failure and mean cost. The solution is obtained numerically and interpreted in the context of real-time applications.

  18. The Monitoring, Detection, Isolation and Assessment of Information Warfare Attacks Through Multi-Level, Multi-Scale System Modeling and Model Based Technology

    DTIC Science & Technology

    2004-01-01

    login identity to the one under which the system call is executed, the parameters of the system call execution - file names including full path...Anomaly detection COAST-EIMDT Distributed on target hosts EMERALD Distributed on target hosts and security servers Signature recognition Anomaly...uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a

  19. LVFS: A Big Data File Storage Bridge for the HPC Community

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.

    2015-12-01

Merging Big Data capabilities into High Performance Computing (HPC) architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and to accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures, and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed by the future HPC community. First, it allows the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and a near doubling of storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts to GeoTIFF, for visualization, the HDF data on CO2 surface fluxes produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite.
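
    The plugin idea can be illustrated in a few lines of Python: a virtual file system routes each path to whichever storage backend is registered for its scheme, so a new technology is added by registering one more backend. The registry, scheme syntax, and backend names are assumptions for illustration, not the actual LVFS interfaces.

        class VirtualFileSystem:
            def __init__(self):
                self.backends = {}

            def register(self, scheme, backend):
                # Plug in a storage technology without disturbing existing ones.
                self.backends[scheme] = backend

            def read(self, path):
                scheme, _, rest = path.partition("://")
                return self.backends[scheme].read(rest)

        class LocalDisk:
            def __init__(self, files):
                self.files = files

            def read(self, name):
                return self.files[name]

        vfs = VirtualFileSystem()
        vfs.register("local", LocalDisk({"granule.hdf": b"\x89HDF..."}))
        print(vfs.read("local://granule.hdf"))  # a Kinetic or 3D XPoint backend
                                                # would register under its own scheme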

  20. A Power Hardware-in-the-Loop Platform with Remote Distribution Circuit Cosimulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Lundstrom, Blake; Chakraborty, Sudipta

    2015-04-01

This paper demonstrates the use of a novel cosimulation architecture that integrates hardware testing using Power Hardware-in-the-Loop (PHIL) with larger-scale electric grid models using off-the-shelf, non-PHIL software tools. This architecture enables utilities to study the impacts of emerging energy technologies on their system, and manufacturers to explore the interactions of new devices with existing and emerging devices on the power system, both without the need to convert existing grid models to a new platform or to conduct in-field trials. The paper describes an implementation of this architecture for testing two residential-scale advanced solar inverters at separate points of common coupling. The same hardware setup is tested with two different distribution feeders (IEEE 123 and 8500 node test systems) modeled using GridLAB-D. In addition to simplifying testing with multiple feeders, the architecture demonstrates additional flexibility with hardware testing in one location linked via the Internet to software modeling in a remote location. In testing, inverter current, real and reactive power, and PCC voltage are well captured by the cosimulation platform. Testing of the inverter's advanced control features is currently somewhat limited by the software model time step (1 sec) and the tested communication latency (24 msec). Overshoot-induced oscillations are observed with volt/VAR control delays of 0 and 1.5 sec, while 3.4 sec and 5.5 sec delays produced little or no oscillation. These limitations could be overcome using faster modeling and communication within the same cosimulation architecture.

  1. FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores

    DTIC Science & Technology

    2012-08-01

computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each...cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core

  2. Superconcurrency: A Form of Distributed Heterogeneous Supercomputing

    DTIC Science & Technology

    1991-05-01

and Nathaniel J. Davis IV, An Overview of the PASM Parallel Processing System, in Computer Architecture, edited by D. D. Gajski, V. M. Milutinovic, H...concurrency Research Team has been...in the next few months,...optimally configured suites of...the development of the Distributed...e.g., an

  3. SSP Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Lynch, Thomas H.; Roth, A. (Technical Monitor)

    2000-01-01

Space Solar Power is a NASA program sponsored by Marshall Space Flight Center. The paper presented here describes an architectural study of a large power management and distribution (PMAD) system. The PMAD system supplies power to a microwave array for power beaming to an earth rectenna (rectifying antenna). The power is at the GW level.

  4. Computer Sciences and Data Systems, volume 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: data storage; information network architecture; VHSIC technology; fiber optics; laser applications; distributed processing; spaceborne optical disk controller; massively parallel processors; and advanced digital SAR processors.

  5. The Use of Software Agents for Autonomous Control of a DC Space Power System

    NASA Technical Reports Server (NTRS)

    May, Ryan D.; Loparo, Kenneth A.

    2014-01-01

In order to enable manned deep-space missions, the spacecraft must be controlled autonomously using on-board algorithms. A control architecture is proposed to enable this autonomous operation for a spacecraft electric power system and is then implemented using a highly distributed network of software agents. These agents collaborate and compete with each other in order to implement each of the control functions. A subset of this control architecture is tested against a steady-state power system simulation and found to be able to solve a constrained optimization problem with competing objectives using only local information.

  6. A New Signaling Architecture THREP with Autonomous Radio-Link Control for Wireless Communications Systems

    NASA Astrophysics Data System (ADS)

    Hirono, Masahiko; Nojima, Toshio

This paper presents a new signaling architecture for radio-access control in wireless communications systems. Called THREP (for THREe-phase link set-up Process), it enables systems with low-cost configurations to provide tetherless access and wide-ranging mobility by using autonomous radio-link controls for fast cell searching and distributed call management. A signaling architecture generally consists of a radio-access part and a service-entity-access part. In THREP, the latter part is divided into two steps: preparing a communication channel, and sustaining it. Access control in THREP is thus composed of three separate parts, or protocol phases. The specifications of each phase are determined independently according to system requirements. In the proposed architecture, the first phase uses autonomous radio-link control because we want to construct low-power indoor wireless communications systems. Evaluation of channel usage efficiency and hand-over loss probability in the personal handy-phone system (PHS) shows that THREP makes radio-access sub-system operations highly efficient in a practical application model, and the results of a field experiment show that THREP provides sufficient protection against severe, fast CNR degradation in practical indoor propagation environments.

  7. Implementation of a Prototype Generalized Network Technology for Hospitals *

    PubMed Central

    Tolchin, S. G.; Stewart, R. L.; Kahn, S. A.; Bergan, E. S.; Gafke, G. P.; Simborg, D. W.; Whiting-O'Keefe, Q. E.; Chadwick, M. G.; McCue, G. E.

    1981-01-01

    A demonstration implementation of a distributed data processing hospital information system using an intelligent local area communications network (LACN) technology is described. This system is operational at the UCSF Medical Center and integrates four heterogeneous, stand-alone minicomputers. The applications systems are PID/Registration, Outpatient Pharmacy, Clinical Laboratory and Radiology/Medical Records. Functional autonomy of these systems has been maintained, and no operating system changes have been required. The LACN uses a fiber-optic communications medium and provides extensive communications protocol support within the network, based on the ISO/OSI Model. The architecture is reconfigurable and expandable. This paper describes system architectural issues, the applications environment and the local area network.

  8. Advances in Distributed Operations and Mission Activity Planning for Mars Surface Exploration

    NASA Technical Reports Server (NTRS)

    Fox, Jason M.; Norris, Jeffrey S.; Powell, Mark W.; Rabe, Kenneth J.; Shams, Khawaja

    2006-01-01

A centralized mission activity planning system for any long-term mission, such as the Mars Exploration Rover (MER) mission, is completely infeasible due to budget and geographic constraints. A distributed operations system is key to addressing these constraints; therefore, future system and software engineers must focus on the problem of how to provide a secure, reliable, and distributed mission activity planning system. We explain how Maestro, the next-generation mission activity planning system, with its heavy emphasis on portability and distributed operations, has been able to meet these design challenges. MER has been an excellent proving ground for Maestro's new approach to distributed operations. The backend that has been developed for Maestro could benefit many future missions by reducing the cost of centralized operations system architecture.

  9. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  10. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  11. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed

    Law, V; Goldberg, H S; Jones, P; Safran, C

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.

  12. The middleware architecture supports heterogeneous network systems for module-based personal robot system

    NASA Astrophysics Data System (ADS)

    Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun

    2005-12-01

In the personal robot system currently under development, the internal architecture consists of modules, each with a separate function, connected through heterogeneous network systems. This module-based architecture supports specialization and division of labor in both design and implementation, and as a result it reduces development time and cost for each module. Furthermore, because every module is connected to the others through the network, modules integrate easily and can cooperate to provide advanced combined functions. One of the most important technologies in this architecture is the network middleware that handles communication among the modules connected through heterogeneous networks. The network middleware acts like the nervous system of the personal robot: it relays, transmits, and translates information between modules, much as the nervous system does between organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses the middleware's mechanisms for intercommunication and routing among modules, along with its methods for real-time data communication and fault-tolerant network service. To these ends, we have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology over heterogeneous networks. The central theme is how routing information is built and maintained in the middleware; with this routing information table in place, we appended further features. We are now designing and implementing a new version of the network middleware (called 'OO M/W') that supports object-oriented operation, and are updating the program sources for an object-oriented architecture. It is lighter and faster, and it supports more operating systems and heterogeneous network systems, whereas general-purpose middlewares such as CORBA and UPnP support only one network protocol or operating system.
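
    To make the routing theme concrete, here is a hedged Python sketch of computing a next-hop table over heterogeneous links with Dijkstra's algorithm, assuming each module advertises its directly reachable neighbors and a link cost; the data layout and names are illustrative, not the middleware's actual structures.

        import heapq

        def build_routes(links, source):
            """links: {node: {neighbor: cost}}; returns a next-hop table for source."""
            dist, next_hop = {source: 0}, {}
            queue = [(0, source, None)]  # (cost, node, first hop on the path)
            while queue:
                d, node, first = heapq.heappop(queue)
                if d > dist.get(node, float("inf")):
                    continue  # stale queue entry
                if first is not None:
                    next_hop[node] = first
                for nbr, cost in links.get(node, {}).items():
                    nd = d + cost
                    if nd < dist.get(nbr, float("inf")):
                        dist[nbr] = nd
                        heapq.heappush(queue, (nd, nbr, first if first else nbr))
            return next_hop

        links = {"vision": {"hub": 1},
                 "hub": {"vision": 1, "arm": 2, "speech": 5},
                 "arm": {"hub": 2}, "speech": {"hub": 5}}
        print(build_routes(links, "vision"))  # every route from 'vision' goes via 'hub'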

  13. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM, and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  14. Autonomous docking system for space structures and satellites

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Tajudeen, Eddie; Spenser, James

    2005-05-01

Aximetric proposes a Distributed Command and Control (C2) architecture for autonomous on-orbit assembly in space, built around its vision- and sensor-driven docking mechanism. Aximetric is currently working on IP-based distributed control strategies, a docking/mating plate, alignment and latching mechanisms, umbilical structure/cord designs, and hardware/software in a closed-loop architecture for a smart autonomous demonstration utilizing proven developments in sensor and docking technology. These technologies can be effectively applied to many transferring/conveying and on-orbit servicing applications, including the capturing and coupling of space-bound vehicles and components. The autonomous system will be a "smart" system incorporating a vision system used for identifying, tracking, locating, and mating the transferring device to the receiving device. A robustly designed coupler for the transfer of fuel will be integrated. Advanced sealing technology will be utilized for isolation and purging of the cavities resulting from the mating process and/or from the incorporation of other electrical and data acquisition devices used as part of the overall smart system.

  15. CHRONOS architecture: Experiences with an open-source services-oriented architecture for geoinformatics

    USGS Publications Warehouse

    Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.

    2009-01-01

CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources available from the open-source and geoinformatics communities, like the GeoSciML schema and the simple knowledge organization system (SKOS), into the services-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe ongoing efforts to use geospatially enhanced data syndication and to informally include semantic information by embedding it directly into the XHTML Document Object Model (DOM). The XHTML DOM allows machine-discoverable descriptive data, such as licensing and citation information, to be incorporated directly into data sets retrieved by users. © 2008 Elsevier Ltd. All rights reserved.

  16. An Attack-Resilient Middleware Architecture for Grid Integration of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yifu; Mendis, Gihan J.; He, Youbiao

In recent years, the increasing penetration of Distributed Energy Resources (DERs) has made an impact on the operation of electric power systems. In the grid integration of DERs, data acquisition systems and communications infrastructure are crucial technologies for maintaining system economic efficiency and reliability. Since most of these generators are relatively small, dedicated communications investments for every generator are prohibitive in capital cost. Combining real-time attack-resilient communications middleware with Internet of Things (IoT) technologies allows for the use of existing infrastructure. In this paper, we propose an intelligent communication middleware that utilizes Quality of Experience (QoE) metrics to complement the conventional Quality of Service (QoS) evaluation. Furthermore, our middleware employs deep learning techniques to detect and defend against congestion attacks. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  17. Hardware/software codesign for embedded RISC core

    NASA Astrophysics Data System (ADS)

    Liu, Peng

    2001-12-01

This paper describes the hardware/software codesign method of the extendible embedded RISC core VIRGO, which is based on the MIPS-I instruction set architecture. VIRGO is described in the Verilog hardware description language, has a five-stage pipeline with a shared 32-bit cache/memory interface, and is controlled by a distributed control scheme. Every pipeline stage has one small controller, which controls the stage's status and the cooperation among pipeline phases. Because the description uses a high-level language and the structure is distributed, the VIRGO core is highly extensible and can meet application requirements. Taking the high-definition television MPEG2 MPHL decoder chip as a target, we constructed a hardware/software codesign virtual prototyping machine that supports research on the VIRGO core instruction set architecture, system-on-chip memory size requirements, system-on-chip software, etc. The virtual prototyping machine platform also allows evaluation of the system-on-chip design and the RISC instruction set.

  18. Privacy-Aware Location Database Service for Granular Queries

    NASA Astrophysics Data System (ADS)

    Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide

    Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
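
    The three query levels might look like the following Python sketch; the level semantics chosen here (coarse cell, cell co-occupancy, exact coordinates) are assumptions for illustration, not the paper's precise definitions.

        locations = {"u1": (35.68, 139.76), "u2": (35.70, 139.70)}  # toy data

        def query(level, user, grid=0.1):
            lat, lon = locations[user]
            if level == 1:  # coarsest: only the grid cell the user occupies
                return (round(lat / grid) * grid, round(lon / grid) * grid)
            if level == 2:  # mid: which users share that coarse cell
                cell = query(1, user)
                return [u for u in locations if query(1, u) == cell]
            if level == 3:  # finest: exact coordinates, highest privilege only
                return (lat, lon)

        print(query(1, "u1"), query(2, "u1"), query(3, "u1"))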

  19. Development of a space-systems network testbed

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Adams, Stuart; Burkhardt, Laura; Nagle, Gail; Murray, Nicholas

    1988-01-01

This paper describes a communications network testbed which has been designed to allow the development of architectures and algorithms that meet the functional requirements of future NASA communication systems. The central hardware components of the Network Testbed are programmable circuit-switching communication nodes which can be adapted by software or firmware changes to customize the testbed to particular architectures and algorithms. Fault detection, isolation, and reconfiguration have been implemented in the Network with a hybrid approach which utilizes features of both centralized and distributed techniques to provide efficient handling of faults within the Network.

  20. A technique system for the measurement, reconstruction and character extraction of rice plant architecture

    PubMed Central

    Li, Xumeng; Wang, Xiaohui; Wei, Hailin; Zhu, Xinguang; Peng, Yulin; Li, Ming; Li, Tao; Huang, Huang

    2017-01-01

This study developed a technique system for the measurement, reconstruction, and trait extraction of rice canopy architectures, which have challenged functional–structural plant modeling for decades and have become the foundation of the design of ideo-plant architectures. The system uses the location-separation-measurement method (LSMM) for the collection of data on the canopy architecture and the analytic geometry method for the reconstruction and visualization of the three-dimensional (3D) digital architecture of the rice plant. It also uses the virtual clipping method for extracting the key traits of the canopy architecture such as the leaf area, inclination, and azimuth distribution in spatial coordinates. To establish the technique system, we developed (i) simple tools to measure the spatial position of the stem axis and azimuth of the leaf midrib and to capture images of tillers and leaves; (ii) computer software programs for extracting data on stem diameter, leaf nodes, and leaf midrib curves from the tiller images and data on leaf length, width, and shape from the leaf images; (iii) a database of digital architectures that stores the measured data and facilitates the reconstruction of the 3D visual architecture and the extraction of architectural traits; and (iv) computation algorithms for virtual clipping to stratify the rice canopy, to extend the stratified surface from the horizontal plane to a general curved surface (including a cylindrical surface), and to implement these operations in silico. Each component of the technique system was quantitatively validated and visually compared to images, and the sensitivity of the virtual clipping algorithms was analyzed. This technique is inexpensive and accurate and provides high throughput for the measurement, reconstruction, and trait extraction of rice canopy architectures. The technique provides a more practical method of data collection to serve functional–structural plant models of rice and for the optimization of rice canopy types. Moreover, the technique can be easily adapted for other cereal crops such as wheat, which has numerous stems and leaves sheltering each other. PMID:28558045

  1. Design of a multisensor data fusion system for target detection

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C.; Okello, Nickens N.; Kadar, Ivan; Lovas, Louis A.

    1993-09-01

    The objective of this paper is to discuss the issues that are involved in the design of a multisensor fusion system and provide a systematic analysis and synthesis methodology for the design of the fusion system. The system under consideration consists of multifrequency (similar) radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor (similar and dissimilar, and centralized vs. distributed) data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered. Although the study of different approaches may proceed in parallel, the interplay among them is crucial in selecting a fusion configuration for a given application. The natural sequence for addressing the three different issues is to begin from the data modeling, in order to determine the information content of the data. This information will dictate the appropriate fusion approach. This, in turn, will lead to a global fusion architecture. Both distributed and centralized fusion architectures are used to illustrate the design issues along with Monte-Carlo simulation performance comparison of a single sensor versus a multisensor centrally fused system.

  2. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
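
    The associative-memory operations the authors invoke can be illustrated with one standard Vector Symbolic Architecture pairing: circular-convolution binding and its correlation-based inverse. The sketch below is generic VSA in numpy, not the authors' network; the dimensionality and random vectors are arbitrary.

        import numpy as np

        def bind(a, b):
            """Circular convolution: combines two vectors into one of the same size."""
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        def unbind(c, a):
            """Circular correlation: approximately recovers b from bind(a, b)."""
            return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

        rng = np.random.default_rng(0)
        n = 512
        role = rng.normal(0.0, 1.0 / np.sqrt(n), n)
        filler = rng.normal(0.0, 1.0 / np.sqrt(n), n)
        trace = bind(role, filler)
        recovered = unbind(trace, role)
        cos = np.dot(recovered, filler) / (np.linalg.norm(recovered) * np.linalg.norm(filler))
        print(cos)  # well above chance: the filler is recoverable from the bound trace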

  3. Architecture for an artificial immune system.

    PubMed

    Hofmeyr, S A; Forrest, S

    2000-01-01

    An artificial immune system (ARTIS) is described which incorporates many properties of natural immune systems, including diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring. ARTIS is a general framework for a distributed adaptive system and could, in principle, be applied to many domains. In this paper, ARTIS is applied to computer security in the form of a network intrusion detection system called LISYS. LISYS is described and shown to be effective at detecting intrusions, while maintaining low false positive rates. Finally, similarities and differences between ARTIS and Holland's classifier systems are discussed.

  4. Deep Phenotyping of Coarse Root Architecture in R. pseudoacacia Reveals That Tree Root System Plasticity Is Confined within Its Architectural Model

    PubMed Central

    Danjon, Frédéric; Khuder, Hayfa; Stokes, Alexia

    2013-01-01

This study aims at assessing the influence of slope angle and multi-directional flexing, and their interaction, on the root architecture of Robinia pseudoacacia seedlings, with a particular focus on architectural model and trait plasticity. 36 trees were grown from seed in containers inclined at 0° (control) or 45° (slope) in a glasshouse. The shoots of half the plants were gently flexed for 5 minutes a day. After 6 months, root systems were excavated and digitized in 3D, and biomass was measured. Over 100 root architectural traits were determined. Both slope and flexing significantly increased plant size. Non-flexed trees on 45° slopes developed shallow roots which were largely aligned perpendicular to the slope. Compared to the controls, flexed trees on 0° slopes possessed a shorter and thicker taproot held in place by regularly distributed long and thin lateral roots. Flexed trees on the 45° slope also developed a thick, vertically aligned taproot, with more volume allocated to upslope surface lateral roots, due to the greater soil volume uphill. We show that there is an inherent root system architectural model, but that a certain number of traits are highly plastic. This plasticity will permit root architectural design to be modified depending on external mechanical signals perceived by young trees. PMID:24386227

  5. Sensing and Measurement Architecture for Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taft, Jeffrey D.; De Martini, Paul

    2016-02-01

    This paper addresses architecture for grid sensor networks, with primary emphasis on distribution grids. It describes a forward-looking view of sensor network architecture for advanced distribution grids, and discusses key regulatory, financial, and planning issues.

  6. Knowledge Management System Model for Learning Organisations

    ERIC Educational Resources Information Center

    Amin, Yousif; Monamad, Roshayu

    2017-01-01

    Based on the literature of knowledge management (KM), this paper reports on the progress of developing a new knowledge management system (KMS) model with components architecture that are distributed over the widely-recognised socio-technical system (STS) aspects to guide developers for selecting the most applicable components to support their KM…

  7. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng

    2007-01-01

Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of the heterogeneous agent architecture for WSNs proposed in this paper. The proposed agent architecture views the WSN as a multi-agent system, and mobile agents are employed to reduce in-network communication. Based on this architecture, an energy-based acoustic localization algorithm is proposed. In localization, an estimate of the target location is obtained by steepest-descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by a distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN design and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
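
    The energy-based steepest-descent idea can be sketched under an idealized 1/d^2 decay model: each sensor reports received acoustic energy, and descent on the squared prediction error recovers the source position. The sensor layout, source energy, numerical gradient, and fixed step count below are illustrative assumptions (the paper's algorithm instead adapts its termination condition).

        import numpy as np

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        true_target, source_energy = np.array([6.0, 4.0]), 100.0
        readings = source_energy / np.sum((sensors - true_target) ** 2, axis=1)

        def loss(p):
            pred = source_energy / np.sum((sensors - p) ** 2, axis=1)
            return np.sum((pred - readings) ** 2)

        def grad(p, h=1e-5):
            g = np.zeros(2)
            for i in range(2):  # numerical gradient keeps the sketch short
                e = np.zeros(2)
                e[i] = h
                g[i] = (loss(p + e) - loss(p - e)) / (2 * h)
            return g

        p = np.array([5.0, 5.0])  # initial guess: center of the field
        for _ in range(2000):     # fixed iteration count stands in for an
            p -= 0.5 * grad(p)    # adaptive termination condition
        print(p)                  # converges near the true target (6, 4)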

  8. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
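
    One of the hardware effects named above, quantization in the feedback measurements, is easy to sketch. The ADC bit width and range below are assumptions, not C-MAPSS40k or IEEE 1451 values; the sketch only shows why a digitized sensor returns a slightly different number than it reads.

        def quantize(value, lo, hi, bits=12):
            """Map a reading onto a bits-wide ADC code and back to engineering units."""
            levels = (1 << bits) - 1
            code = round((min(max(value, lo), hi) - lo) / (hi - lo) * levels)
            return lo + code * (hi - lo) / levels

        true_speed = 9543.217                        # hypothetical spool speed, rpm
        measured = quantize(true_speed, 0.0, 12000.0)
        print(measured, abs(measured - true_speed))  # small quantization error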

  9. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2015-01-01

Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  10. A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia Mae; Culley, Dennis E.; Aretskin-Hariton, Eliot D.

    2014-01-01

    Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (40,000 pound force thrust) (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink (R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.

  11. Distributed virtual environment for emergency medical training

    NASA Astrophysics Data System (ADS)

    Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.

    1997-07-01

In many professions where individuals must work in a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is the field of emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers, and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. Several issues must be addressed before this vision is realized. The key issues deal with distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues. In this paper we report on our prototype VER system and its distributed system architecture for emergency medical staff training. The virtual environment enables emergency department physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform diagnostic and treatment tasks. Virtual human imagery and real-time virtual human response are used to create the virtual patient and present a scenario. Patient vital signs are available to the emergency department team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room. The architectures we describe consist of the network-level architecture as well as the software architecture for each actor within the virtual emergency room. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.

  12. Newborn screening healthcare information system based on service-oriented architecture.

    PubMed

    Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Chien, Yin-Hsiu; Weng, Yung-Ching; Hsu, Kai-Ping; Chen, Chi-Huang; Tu, Chien-Ming; Wang, Zhenyu; Lai, Feipei

    2010-08-01

In this paper, we established a newborn screening system under the HL7/Web Services frameworks. We rebuilt the NTUH Newborn Screening Laboratory's original standalone architecture, in which various heterogeneous systems operated individually, and restructured it into a Service-Oriented Architecture (SOA) distributed platform to improve the integrity of sample collection, testing, diagnosis, evaluation, treatment and follow-up services, and screening database management, as well as collaboration and communication among hospitals; decision support and improved screening accuracy across Taiwan's neonatal systems are also addressed. In addition, the new system not only integrates the newborn screening procedures among phlebotomy clinics, referral hospitals, and the newborn screening center in Taiwan, but also introduces new models of screening procedures for the associated medical practitioners. Furthermore, it reduces the burden of manual operations, especially the reporting services that were previously relied upon heavily. The new system can accelerate the whole procedure effectively and efficiently, and it improves the accuracy and reliability of screening by ensuring quality control throughout processing.

  13. MIDEX Advanced Modular and Distributed Spacecraft Avionics Architecture

    NASA Technical Reports Server (NTRS)

    Ruffa, John A.; Castell, Karen; Flatley, Thomas; Lin, Michael

    1998-01-01

MIDEX (Medium Class Explorer) is the newest line in NASA's Explorer spacecraft development program. As part of the MIDEX charter, the MIDEX spacecraft development team has developed a new modular, distributed, and scalable spacecraft architecture that pioneers new spaceflight technologies and implementation approaches, all designed to reduce overall spacecraft cost while increasing overall functional capability. The resulting "plug and play" system dramatically decreases the complexity and duration of spacecraft integration and test, providing a basic framework that supports spacecraft modularity and scalability for missions of varying size and complexity. Together, these subsystems form a modular, flexible avionics suite that can be modified and expanded to support low-end and very high-end mission requirements with a minimum of redesign, as well as allowing a smooth, continuous infusion of new technologies as they are developed, without redesigning the system. This overall approach has the net benefit of allowing a greater portion of the overall mission budget to be allocated to mission science instead of the spacecraft bus. The MIDEX scalable architecture is currently being manufactured and tested for use on the Microwave Anisotropy Probe (MAP), an in-house program at GSFC.

  14. Application of a distributed systems architecture for increased speed in image processing on an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.

    2010-01-01

This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as waypoints, with given GPS coordinates, and to avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities, including image processing, sensor interfacing and data processing, path planning and navigation algorithms, and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to the previous year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping and is equipped with a real-time processor, an FPGA, and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms while the FPGA gathers and processes sensor data. This setup leaves the laptop to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to the deterministic nature of its operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real-time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware-based FPGA to further speed up the operations of the vehicle.
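
    The laptop-to-cRIO link can be illustrated with Python's standard UDP sockets on localhost; the JSON waypoint payload and port number are assumptions for illustration, not the team's LabVIEW protocol.

        import json
        import socket

        # Real-time processor side: listen for image-processing results.
        receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        receiver.bind(("127.0.0.1", 5005))

        # Laptop side: send one processed result as a UDP datagram.
        sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        packet = json.dumps({"lane_offset_m": 0.4, "obstacle": False}).encode()
        sender.sendto(packet, ("127.0.0.1", 5005))

        data, _ = receiver.recvfrom(1024)
        print(json.loads(data))  # the path planner would consume this result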

  15. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  16. Towards an Open, Distributed Software Architecture for UxS Operations

    NASA Technical Reports Server (NTRS)

    Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette

    2015-01-01

    To address the growing need to evaluate, test, and certify an ever-expanding ecosystem of UxS platforms in preparation for cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
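    DDS is a standardized middleware with several vendor implementations; the sketch below (all names invented) only illustrates the topic-based publish/subscribe pattern at its core, in which components exchange samples by topic rather than by direct reference.

        from collections import defaultdict

        class TopicBus:
            """Toy stand-in for a DDS domain: decouples publishers from
            subscribers by topic name, so components can interoperate
            without holding references to one another."""
            def __init__(self):
                self._subs = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subs[topic].append(callback)

            def publish(self, topic, sample):
                for cb in self._subs[topic]:
                    cb(sample)

        bus = TopicBus()
        # A ground-control component subscribes to vehicle state updates...
        bus.subscribe("VehicleState", lambda s: print("track:", s))
        # ...and any onboard component can publish without knowing who listens.
        bus.publish("VehicleState", {"id": "uav-1", "lat": 37.1, "lon": -76.3})

    Real DDS adds typed topics, discovery, and per-topic Quality of Service, but the decoupling shown here is what makes the plug-and-play fleet integration described above possible.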

  17. Communication Optimizations for a Wireless Distributed Prognostic Framework

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2009-01-01

    A distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics: particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step: resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as a target application. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead are also discussed. Our proposed resampling scheme performs significantly better than other schemes by attempting to reduce both the length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system, as well as full implementation of the new schemes on Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
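    The record does not spell out the parameterized scheme itself; for orientation, the following plain NumPy sketch shows standard systematic resampling, the step whose weight exchange dominates communication in a distributed particle filter and which the proposed scheme tries to reduce.

        import numpy as np

        def systematic_resample(particles, weights, rng=np.random.default_rng()):
            """Draw N particles proportional to weight using one random offset.
            In a distributed filter this is the step that forces nodes to
            exchange weight information, hence the communication bottleneck."""
            n = len(weights)
            positions = (rng.random() + np.arange(n)) / n
            cumulative = np.cumsum(weights)
            cumulative[-1] = 1.0                      # guard against rounding error
            indices = np.searchsorted(cumulative, positions)
            return particles[indices], np.full(n, 1.0 / n)

        particles = np.random.normal(3.7, 0.5, size=1000)  # e.g. battery capacity estimates
        weights = np.exp(-(particles - 3.6) ** 2)          # likelihood of an observation
        weights /= weights.sum()
        particles, weights = systematic_resample(particles, weights)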

  18. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29


  19. Design alternatives for process group membership and multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.
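    Isis's causal multicast delivers messages in an order consistent with potential causality. As a generic illustration of the delivery rule behind such protocols (not Isis's actual implementation), a vector-clock test looks like this:

        class CausalProcess:
            """Generic causal-delivery test using vector clocks (illustrative only)."""
            def __init__(self, pid, n):
                self.pid, self.vc = pid, [0] * n
                self.pending = []

            def deliverable(self, sender, stamp):
                # Deliver iff this is the next message from sender and we have
                # already seen everything the sender had seen when it sent.
                return (stamp[sender] == self.vc[sender] + 1 and
                        all(stamp[k] <= self.vc[k]
                            for k in range(len(stamp)) if k != sender))

            def receive(self, sender, stamp, msg):
                self.pending.append((sender, stamp, msg))
                progress = True
                while progress:
                    progress = False
                    for item in list(self.pending):
                        s, st, m = item
                        if self.deliverable(s, st):
                            self.vc[s] = st[s]          # advance clock and deliver
                            print(f"p{self.pid} delivers {m}")
                            self.pending.remove(item)
                            progress = True

        p = CausalProcess(1, 2)
        p.receive(0, [2, 0], "m2")   # buffered: m1 from p0 not yet seen
        p.receive(0, [1, 0], "m1")   # delivers m1, then m2 from the buffer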

  20. Advanced algorithms for distributed fusion

    NASA Astrophysics Data System (ADS)

    Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.

    2008-03-01

    The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low-level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture, which utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.

  1. DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Annabelle; Veda, Santosh; Maitra, Arindam

    Efficient and effective management of the electrical distribution system requires an integrated system approach for Distribution Management Systems (DMS), Distributed Energy Resources (DERs), Distributed Energy Resources Management System (DERMS), and microgrids to work in harmony. This paper highlights some of the outcomes from a U.S. Department of Energy (DOE), Office of Electricity (OE) project, including 1) Architecture of these integrated systems, and 2) Expanded functions of two example DMS applications, Volt-VAR optimization (VVO) and Fault Location, Isolation and Service Restoration (FLISR), to accommodate DER. For these two example applications, the relevant DER Group Functions necessary to support communication between DMS and Microgrid Controller (MC) in grid-tied mode are identified.

  2. DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Annabelle; Veda, Santosh; Maitra, Arindam

    Efficient and effective management of the electric distribution system requires an integrated approach to allow various systems to work in harmony, including distribution management systems (DMS), distributed energy resources (DERs), distributed energy resources management systems, and microgrids. This study highlights some outcomes from a recent project sponsored by the US Department of Energy, Office of Electricity Delivery and Energy Reliability, including information about (i) the architecture of these integrated systems and (ii) expanded functions of two example DMS applications to accommodate DERs: volt-var optimisation and fault location, isolation, and service restoration. In addition, the relevant DER group functions necessary to support communications between the DMS and a microgrid controller in grid-tied mode are identified.

  3. Internet of Things: a possible change in the distributed modeling and simulation architecture paradigm

    NASA Astrophysics Data System (ADS)

    Riecken, Mark; Lessmann, Kurt; Schillero, David

    2016-05-01

    The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact and which distributed simulation domains will be most affected? DDS shares many of the same goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms, such as the Data Distribution Service (DDS), share properties in common with distributed modeling and simulation (M&S) and its protocols, such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed computing.

  4. The NASA Integrated Information Technology Architecture

    NASA Technical Reports Server (NTRS)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities, and it defines a plan, aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.

  5. Coupling root architecture and pore network modeling - an attempt towards better understanding root-soil interactions

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Bodner, Gernot; Raoof, Amir

    2013-04-01

    Understanding root-soil interactions is of high importance for environmental and agricultural management. Root uptake is an essential component in water and solute transport modeling. The amount of groundwater recharge and solute leaching significantly depends on the demand-based plant extraction via its root system. Plant uptake, however, not only responds to the potential demand, but in most situations is limited by supply from the soil. The ability of the plant to access water and solutes in the soil is governed mainly by root distribution. Particularly under conditions of heterogeneous distribution of water and solutes in the soil, it is essential to capture the interaction between soil and roots. Root architecture models allow studying plant uptake from soil by describing growth and branching of root axes in the soil. Currently root architecture models are able to respond dynamically to water and nutrient distribution in the soil by directed growth (tropism), modified branching and enhanced exudation. The porous soil medium as rooting environment in these models is generally described by classical macroscopic water retention and sorption models, averaged over the pore scale. In our opinion this simplified description of the root growth medium implies several shortcomings for better understanding root-soil interactions: (i) it is well known that roots grow preferentially in preexisting pores, particularly in more rigid/dry soil, so the pore network contributes to the architectural form of the root system; (ii) roots themselves can influence the pore network by creating preferential flow paths (biopores), which are an essential element of structural porosity with strong impact on transport processes; (iii) plant uptake depends on both the spatial location of water/solutes in the pore network and the spatial distribution of roots. We therefore consider that to advance our understanding of root-soil interactions, we need not only to extend our root models, but also to improve the description of the rooting environment. Until now there have been no attempts to couple root architecture and pore network models. In our work we present a first attempt to join both types of models, using the root architecture model of Leitner et al. (2010) and the pore network model presented by Raoof et al. (2010). The two main objectives of coupling both models are: (i) representing the effect of root-induced biopores on flow and transport processes: for this purpose a fixed root architecture created by the root model is superimposed as a secondary root-induced pore network on the primary soil network, thus influencing the final pore topology in the network generation; (ii) representing the influence of pre-existing pores on root branching: using a given network of (rigid) pores, the root architecture model allocates its root axes into these preexisting pores as preferential growth paths, which thereby shape the final root architecture. The main objective of our study is to reveal the potential of using a pore-scale description of the plant growth medium for an improved representation of interaction processes at the interface of root and soil. References: Raoof, A., Hassanizadeh, S.M., 2010. A New Method for Generating Pore-Network Models. Transp. Porous Med. 81, 391-407. Leitner, D., Klepsch, S., Bodner, G., Schnepf, S., 2010. A dynamic root system growth model based on L-Systems: tropisms and coupling to nutrient uptake from soil. Plant Soil 332, 177-192.
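    Leitner et al. (2010) model root growth with an L-system. Purely to illustrate that formalism (the axiom and rules below are invented, not theirs), a minimal string-rewriting L-system is:

        # Minimal L-system rewriter. The axiom and rules are invented for
        # illustration; Leitner et al. (2010) use richer, parameterized productions.
        RULES = {
            "A": "B[A]A",   # apex grows a segment, branches, and continues
            "B": "BB",      # existing segments elongate
        }

        def rewrite(axiom, steps):
            s = axiom
            for _ in range(steps):
                s = "".join(RULES.get(ch, ch) for ch in s)
            return s

        # Each derivation step corresponds to a growth interval of the root system.
        print(rewrite("A", 3))   # a bracketed string encoding root topology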

  6. Agent-oriented privacy-based information brokering architecture for healthcare environments.

    PubMed

    Masaud-Wahaishi, Abdulmutalib; Ghenniwa, Hamada

    2009-01-01

    The healthcare industry is facing a major reform at all levels: locally, regionally, nationally, and internationally. Healthcare services and systems have become very complex, comprising a vast number of components (software systems, doctors, patients, etc.) that are characterized by shared, distributed and heterogeneous information sources with a variety of clinical and other settings. The challenge now faced in decision making and management of care is to operate effectively in order to meet the information needs of healthcare personnel. Currently, researchers, developers, and systems engineers are working toward achieving better efficiency and quality of service in various sectors of healthcare, such as hospital management, patient care, and treatment. This paper presents a novel information brokering architecture that supports privacy-based information gathering in healthcare. Architecturally, the brokering is viewed as a layer of services where a brokering service is modeled as an agent with a specific architecture and interaction protocol that are appropriate to serve various requests. Within the context of brokering, we model privacy in terms of an entity's ability to hide or reveal information related to its identities, requests, and/or capabilities. A prototype of the proposed architecture has been implemented to support information-gathering capabilities in healthcare environments using the FIPA-compliant platform JADE.

  7. Comparison of Communication Architectures for Spacecraft Modular Avionics Systems

    NASA Technical Reports Server (NTRS)

    Gwaltney, D. A.; Briscoe, J. M.

    2006-01-01

    This document is a survey of publicly available information concerning serial communication architectures used, or proposed to be used, in aeronautic and aerospace applications. It focuses on serial communication architectures that are suitable for low-latency or real-time communication between physically distributed nodes in a system. Candidates for the study have either extensive deployment in the field, or appear to be viable for near-term deployment. Eleven different serial communication architectures are considered, and a brief description of each is given with the salient features summarized in a table in appendix A. This survey is a product of the Propulsion High Impact Avionics Technology (PHIAT) Project at NASA Marshall Space Flight Center (MSFC). PHIAT was originally funded under the Next Generation Launch Technology (NGLT) Program to develop avionics technologies for control of next generation reusable rocket engines. After the announcement of the Space Exploration Initiative, the scope of the project was expanded to include vehicle systems control for human and robotics missions. As such, a section is included presenting the rationale used for selection of a time-triggered architecture for implementation of the avionics demonstration hardware developed by the project team.

  8. A neural network approach to burst detection.

    PubMed

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.
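    The record gives no network details; as a generic sketch of the idea (classifying leak versus no-leak from windows of sensor time series), something like the following, with entirely synthetic data, captures the approach:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Synthetic stand-in for sensor data: 24-hour windows of pressure readings.
        # A leak is faked as a persistent pressure drop; real features would come
        # from the network's flow and pressure loggers.
        normal = rng.normal(50.0, 1.0, size=(200, 24))
        leaky = rng.normal(47.0, 1.0, size=(200, 24))
        X = np.vstack([normal, leaky])
        y = np.array([0] * 200 + [1] * 200)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(X, y)
        print(clf.predict(rng.normal(47.2, 1.0, size=(1, 24))))  # expect a leak flag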

  9. Evolutionary Telemetry and Command Processor (TCP) architecture

    NASA Technical Reports Server (NTRS)

    Schneider, John R.

    1992-01-01

    A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications, thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 94 SDIO SPAS 3 mission. The architecture supports rapid tailoring of functions, thereby reducing the development costs and schedules for individual spacecraft missions.

  10. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  11. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Mckendry, Martin S.

    1986-01-01

    The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.

  12. Distributed Prognostics and Health Management with a Wireless Network Architecture

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saha, Sankalita; Saha, Bhaskar

    2013-01-01

    A heterogeneous set of system components monitored by a varied suite of sensors and a particle-filtering (PF) framework, with the power and the flexibility to adapt to the different diagnostic and prognostic needs, has been developed. Both the diagnostic and prognostic tasks are formulated as a particle-filtering problem in order to explicitly represent and manage uncertainties in state estimation and remaining-life estimation. Current state-of-the-art prognostic health management (PHM) systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to a loss of functionality if the central processor or monitor crashes. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become, for a number of reasons, unwieldy to deploy successfully, and efficient distributed architectures can be more beneficial. The distributed health management architecture comprises a network of smart sensor devices. These devices monitor the health of various subsystems or modules. They perform diagnostic operations and trigger prognostic operations based on user-defined thresholds and rules. The sensor devices, called computing elements (CEs), consist of a sensor, or set of sensors, and a communication device (i.e., a wireless transceiver alongside an embedded processing element). The CE runs in either a diagnostic or prognostic operating mode. The diagnostic mode is the default mode, in which a CE monitors a given subsystem or component through a lightweight diagnostic algorithm. If a CE detects a critical condition during monitoring, it raises a flag. Depending on availability of resources, a networked local cluster of CEs is formed that then carries out prognostics and fault mitigation by efficient distribution of the tasks. It should be noted that the CEs are expected not to suspend their previous tasks in the prognostic mode. When the prognostics task is over, and after appropriate actions have been taken, all CEs return to their original default configuration. A wireless implementation ensures more flexibility in sensor placement. It also allows more sensors to be deployed because the weight overhead of wired systems is not present. Distributed architectures are furthermore generally robust with regard to recovery from node failures.
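    The CE mode switching described above is essentially a small state machine. A toy sketch follows; the threshold and health-index reading are invented placeholders, not values from the system.

        class ComputingElement:
            """Sketch of a CE's two operating modes as described above; the
            threshold and the health-index scale are invented placeholders."""
            def __init__(self, threshold=0.8):
                self.mode = "diagnostic"
                self.threshold = threshold

            def step(self, health_index):
                if self.mode == "diagnostic":
                    if health_index > self.threshold:   # critical condition: raise flag
                        self.mode = "prognostic"
                        return "flag: recruit local cluster for prognostics"
                    return "monitoring"
                # Prognostic tasks run alongside monitoring; once remaining-life
                # estimation completes, the CE reverts to its default mode.
                self.mode = "diagnostic"
                return "prognosis complete: revert to diagnostic"

        ce = ComputingElement()
        for reading in (0.2, 0.5, 0.9, 0.3):
            print(ce.step(reading))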

  13. A Multi Agent Based Approach for Prehospital Emergency Management.

    PubMed

    Safdari, Reza; Shoshtarian Malak, Jaleh; Mohammadzadeh, Niloofar; Danesh Shahraki, Azimeh

    2017-07-01

    To demonstrate an architecture to automate the prehospital emergency process and categorize specialized care according to the situation at the right time, reducing patient mortality and morbidity. Prehospital emergency processes were analyzed using existing prehospital management systems and frameworks, and the extracted processes were modeled using sequence diagrams in Rational Rose software. System main agents were identified and modeled via component diagrams, considering the main system actors and by logically dividing business functionalities; finally, the conceptual architecture for prehospital emergency management was proposed. The proposed architecture was simulated using AnyLogic simulation software. The AnyLogic Agent Model, State Chart and Process Model were used to model the system. Multi-agent systems (MAS) have had great success in distributed, complex and dynamic problem-solving environments, and utilizing autonomous agents provides intelligent decision-making capabilities. The proposed architecture presents prehospital management operations. The main identified agents are: EMS Center, Ambulance, Traffic Station, Healthcare Provider, Patient, Consultation Center, National Medical Record System and a quality-of-service monitoring agent. In a critical condition like a prehospital emergency, we are coping with sophisticated processes such as ambulance navigation, healthcare provider and service assignment, consultation, recalling a patient's past medical history through a centralized EHR system, and monitoring healthcare quality in real time. The main advantage of our work has been the multi-agent system utilization. Future work will include implementation of the proposed architecture and evaluation of its impact on patient care quality.

  14. A Multi Agent Based Approach for Prehospital Emergency Management

    PubMed Central

    Safdari, Reza; Shoshtarian Malak, Jaleh; Mohammadzadeh, Niloofar; Danesh Shahraki, Azimeh

    2017-01-01

    Objective: To demonstrate an architecture to automate the prehospital emergency process and categorize specialized care according to the situation at the right time, reducing patient mortality and morbidity. Methods: Prehospital emergency processes were analyzed using existing prehospital management systems and frameworks, and the extracted processes were modeled using sequence diagrams in Rational Rose software. System main agents were identified and modeled via component diagrams, considering the main system actors and by logically dividing business functionalities; finally, the conceptual architecture for prehospital emergency management was proposed. The proposed architecture was simulated using AnyLogic simulation software. The AnyLogic Agent Model, State Chart and Process Model were used to model the system. Results: Multi-agent systems (MAS) have had great success in distributed, complex and dynamic problem-solving environments, and utilizing autonomous agents provides intelligent decision-making capabilities. The proposed architecture presents prehospital management operations. The main identified agents are: EMS Center, Ambulance, Traffic Station, Healthcare Provider, Patient, Consultation Center, National Medical Record System and a quality-of-service monitoring agent. Conclusion: In a critical condition like a prehospital emergency, we are coping with sophisticated processes such as ambulance navigation, healthcare provider and service assignment, consultation, recalling a patient's past medical history through a centralized EHR system, and monitoring healthcare quality in real time. The main advantage of our work has been the multi-agent system utilization. Future work will include implementation of the proposed architecture and evaluation of its impact on patient care quality. PMID:28795061

  15. GOES-R GS Product Generation Infrastructure Operations

    NASA Astrophysics Data System (ADS)

    Blanton, M.; Gundy, J.

    2012-12-01

    GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular and reliable. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that lets algorithm services know when the data they need are available and retrieve them to begin processing. Together, the SBA and the DDF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
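    As an illustration of the Executive/Dispatcher/Strategy split (the role names come from the record, but the interfaces below are invented), a Strategy is essentially a predicate over the set of inputs that have arrived:

        class Strategy:
            """Decides when an encapsulated algorithm may run (invented interface)."""
            def __init__(self, required_inputs):
                self.required = set(required_inputs)
                self.available = {}

            def offer(self, name, data):          # Dispatcher pushes arriving data here
                self.available[name] = data
                return self.required <= self.available.keys()

        class Executive:
            """Wraps a science algorithm as a service and fires it when ready."""
            def __init__(self, algorithm, strategy):
                self.algorithm, self.strategy = algorithm, strategy

            def on_data(self, name, data):
                if self.strategy.offer(name, data):
                    return self.algorithm(**self.strategy.available)

        # Hypothetical product: a cloud mask needing two inputs before it can run.
        cloud_mask = Executive(lambda radiances, geolocation: f"mask({radiances},{geolocation})",
                               Strategy(["radiances", "geolocation"]))
        cloud_mask.on_data("radiances", "ABI-L1b")          # not ready yet
        print(cloud_mask.on_data("geolocation", "navdata")) # fires the algorithm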

  16. Architecture of a spatial data service system for statistical analysis and visualization of regional climate changes

    NASA Astrophysics Data System (ADS)

    Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of a virtual research environment (VRE) general architecture for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available for the node. It also contains geospatial data processing services (WPS) based on a modular computing backend realizing statistical processing functionality, thus providing analysis of large datasets with the results available for visualization and export into files of standard formats (XML, binary, etc.). Some cartographical web services have been developed in a prototype of the system to provide capabilities to work with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.

  17. FALCON: A distributed scheduler for MIMD architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimshaw, A.S.; Vivas, V.E. Jr.

    1991-01-01

    This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.
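    The record does not detail FALCON's policy parameters; the sketch below shows the classic shape of a sender-initiated load sharing policy with static task assignment, with the threshold and probe count invented for illustration:

        import random

        THRESHOLD = 5      # invented queue-length threshold
        PROBE_LIMIT = 3    # how many peers a busy node polls

        class Node:
            def __init__(self, name):
                self.name, self.queue = name, []

            def submit(self, task, peers):
                # Sender-initiated: only a node that is itself overloaded pays the
                # cost of probing, which keeps the policy stable under light load.
                if len(self.queue) >= THRESHOLD:
                    for peer in random.sample(peers, min(PROBE_LIMIT, len(peers))):
                        if len(peer.queue) < THRESHOLD:
                            peer.queue.append(task)   # static assignment: no later migration
                            return peer.name
                self.queue.append(task)
                return self.name

        nodes = [Node(f"n{i}") for i in range(4)]
        nodes[0].queue = ["t"] * 6                     # overload the first node
        print(nodes[0].submit("new-task", nodes[1:]))  # task lands on a lightly loaded peer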

  18. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567

  19. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.

  20. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    NASA Astrophysics Data System (ADS)

    Martin, Adrian

    As the applications of mobile robotics evolve it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. Even with unrealistically high rates of failure the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure, and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.

  1. First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    NASA Technical Reports Server (NTRS)

    Griffin, Sandy (Editor)

    1987-01-01

    Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

  2. Designing an architectural style for dynamic medical Cross-Organizational Workflow management system: an approach based on agents and web services.

    PubMed

    Bouzguenda, Lotfi; Turki, Manel

    2014-04-01

    This paper shows how the combined use of agent and web services technologies can help to design an architectural style for a dynamic medical Cross-Organizational Workflow (COW) management system. Medical COW aims at supporting the collaboration between several autonomous and possibly heterogeneous medical processes, distributed over different organizations (hospitals, clinics or laboratories). Dynamic medical COW refers to occasional cooperation between these health organizations, free of structural constraints, where the medical partners involved and their number are not pre-defined. More precisely, this paper proposes a new architectural style based on agent and web services technologies to deal with two key coordination issues of dynamic COW: finding medical partners and negotiating between them. It also shows how the proposed dynamic medical COW management system can connect to a multi-agent system coupling a Clinical Decision Support System (CDSS) with Computerized Prescriber Order Entry (CPOE). The idea is to assist health professionals such as doctors, nurses and pharmacists with decision-making tasks, such as determining a diagnosis or analyzing patient data, without stopping their clinical processes, in order to act in a coherent way and give care to the patient.

  3. Programming distributed memory architectures using Kali

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1990-01-01

    Programming nonshared memory systems is more difficult than programming shared memory systems, in part because of the relatively low level of current programming environments for such machines. A new programming environment, Kali, is presented, which provides a global name space and allows direct access to remote data values. In order to retain efficiency, Kali provides a system of annotations, allowing the user to control those aspects of the program critical to performance, such as data distribution and load balancing. The primitives and constructs provided by the language are described, and some of the issues raised in translating a Kali program for execution on distributed memory systems are also discussed.
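    Kali's annotations are not reproduced in the record; as a generic illustration of what they control, namely the mapping of a global index space onto distributed memories, a one-dimensional block distribution reduces to:

        # Owner-computes mapping for a 1-D block distribution (illustrative only;
        # Kali expresses this via language-level annotations, not library calls).
        def block_owner(i, n_elems, n_procs):
            block = -(-n_elems // n_procs)      # ceiling division: elements per processor
            return i // block

        def local_index(i, n_elems, n_procs):
            block = -(-n_elems // n_procs)
            return i % block

        # Global element 700 of a 1000-element array on 4 processors:
        print(block_owner(700, 1000, 4), local_index(700, 1000, 4))  # -> 2 200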

  4. The development of a post-test diagnostic system for rocket engines

    NASA Technical Reports Server (NTRS)

    Zakrajsek, June F.

    1991-01-01

    An effort was undertaken by NASA to develop an automated post-test, post-flight diagnostic system for rocket engines. The automated system is designed to be generic and to automate the rocket engine data review process. A modular, distributed architecture with a generic software core was chosen to meet the design requirements. The diagnostic system is initially being applied to the Space Shuttle Main Engine data review process. The system modules currently under development are the session/message manager, and portions of the applications section, the component analysis section, and the intelligent knowledge server. An overview is presented of a rocket engine data review process, the design requirements and guidelines, the architecture and modules, and the projected benefits of the automated diagnostic system.

  5. Surveillance and Datalink Communication Performance Analysis for Distributed Separation Assurance System Architectures

    NASA Technical Reports Server (NTRS)

    Chung, William W.; Linse, Dennis J.; Alaverdi, Omeed; Ifarraguerri, Carlos; Seifert, Scott C.; Salvano, Dan; Calender, Dale

    2012-01-01

    This study investigates the effects of two technical enablers of the Federal Aviation Administration's Next Generation Air Transportation System (NextGen), Automatic Dependent Surveillance - Broadcast (ADS-B) and digital datalink communication, on overall separation assurance performance under two separation assurance (SA) system architectures: ground-based SA and airborne SA. Datalink performance, such as successful reception probability for both surveillance and communication messages, and surveillance accuracy are examined under various operational conditions. Required SA performance is evaluated as a function of subsystem performance, using availability, continuity, and integrity metrics to establish overall required separation assurance performance under normal and off-nominal conditions.

  6. Set-Based Design: Fleet Architecture and Design 2030-2035

    DTIC Science & Technology

    2017-12-01


  7. SimWorx: An ADA Distributed Simulation Application Framework Supporting HLA and DIS

    DTIC Science & Technology

    1996-12-01

    The authors emphasize that most real systems have elements of several architectural styles; these are called heterogeneous architectures. For frameworks to be used, understood, and maintained, Adair emphasizes that they must be clearly documented.

  8. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1988-01-01

    Research directed at developing a graph-theoretical model for describing data and control flow associated with the execution of large-grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM, which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.

  9. Implementation of Single Source Based Hospital Information System for the Catholic Medical Center Affiliated Hospitals

    PubMed Central

    Choi, Inyoung; Choi, Ran; Lee, Jonghyun

    2010-01-01

    Objectives The objective of this research is to introduce the unique approach of the Catholic Medical Center (CMC) to integrating network hospitals, along with the organizational and technical methodologies adopted for seamless implementation. Methods The Catholic Medical Center has developed a new hospital information system to connect network hospitals and adopted a new information technology architecture which uses a single source for multiple distributed hospital systems. Results The hospital information system of the CMC was developed to integrate network hospitals by adopting new system development principles: one source, one route and one management. This information architecture has reduced the cost of system development and operation, and has enhanced the efficiency of the management process. Conclusions Integrating network hospitals through an information system was not simple; it was much more complicated than a single-organization implementation. We are still looking for a more efficient communication channel and decision-making process, and also believe that our new system architecture will be able to improve the CMC health care system and provide much better quality of health care service to patients and customers. PMID:21818432

  10. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our telemedicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, optical jukebox, and tape jukebox subsystems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PC) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as telemedicine, teleradiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near on-line performance, as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of the small clinic (multi-gigabyte) up to those of a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible, based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape medium. Clustering of patient data on the same tape eliminates multiple tape loading and associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drive's high-performance data streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.
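    The HIS/RIS-triggered prefetching described above amounts to simple rules over a three-tier store. A toy sketch follows; the tier names and rules are invented simplifications of the system described.

        TIERS = ["disk", "optical", "tape"]   # fast to slow, per the three-tier design

        class HSM:
            def __init__(self):
                self.location = {}            # study id -> current tier

            def archive(self, study):
                self.location[study] = "disk" # new studies land on magnetic disk

            def age(self, study):
                # Migration: step a study one tier down; clustering a patient's
                # studies onto one tape (not shown) avoids repeated tape loads.
                i = TIERS.index(self.location[study])
                if i < len(TIERS) - 1:
                    self.location[study] = TIERS[i + 1]

            def prefetch_for_admission(self, patient_studies):
                # Rule: a HIS/RIS admission event pulls prior studies back to
                # disk before the radiologist asks for them.
                for study in patient_studies:
                    self.location[study] = "disk"

        hsm = HSM()
        hsm.archive("CT-001"); hsm.age("CT-001"); hsm.age("CT-001")   # now on tape
        hsm.prefetch_for_admission(["CT-001"])                        # back on disk
        print(hsm.location)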

  11. Objects Architecture: A Comprehensive Design Approach for Real-Time, Distributed, Fault-Tolerant, Reactive Operating Systems.

    DTIC Science & Technology

    1987-09-01

    A real-time operating system should be efficient from the real-time point of view. Real-time embedded systems usually neglect protection mechanisms; however, a real-time operating system cannot. The allocation mechanism should adhere to application constraints, reflecting the strong relationship between a real-time operating system and the application.

  12. Galileo Timing Applications

    DTIC Science & Technology

    2007-11-01

    The available architecture for time and synchronization information distribution was at that time implemented with a single Master Clock. The signal of... a hierarchical approach. Moreover, analyzing this architecture, it is clear that there is signal performance degradation due to the distribution... applications. Figure 2 depicts the time distribution architecture implemented via GNSS. The main difference with respect to the previous one is that all the...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, Steven Y.; Spires, Shannon V.

    There are currently two proposed standards for agent communication languages, namely KQML (Finin, Labrou, and Mayfield 1994) and the FIPA ACL. Neither standard has yet achieved primacy, and neither has been evaluated extensively in an open environment such as the Internet. It seems prudent therefore to design a general-purpose agent communications facility for new agent architectures that is flexible yet provides an architecture that accepts many different specializations. In this paper we exhibit the salient features of an agent communications architecture based on distributed metaobjects. This architecture captures design commitments at a metaobject level, leaving the base-level design and implementation up to the agent developer. The scope of the metamodel is broad enough to accommodate many different communication protocols, interaction protocols, and knowledge sharing regimes through extensions to the metaobject framework. We conclude that with a powerful distributed object substrate that supports metaobject communications, a general framework can be developed that will effectively enable different approaches to agent communications in the same agent system. We have implemented a KQML-based communications protocol and have several special-purpose interaction protocols under development.

  14. OsPIN5b modulates rice (Oryza sativa) plant architecture and yield by changing auxin homeostasis, transport and distribution.

    PubMed

    Lu, Guangwen; Coneva, Viktoriya; Casaretto, José A; Ying, Shan; Mahmood, Kashif; Liu, Fang; Nambara, Eiji; Bi, Yong-Mei; Rothstein, Steven J

    2015-09-01

    Plant architecture attributes such as tillering, plant height and panicle size are important agronomic traits that determine rice (Oryza sativa) productivity. Here, we report that altered auxin content, transport and distribution affect these traits, and hence rice yield. Overexpression of the auxin efflux carrier-like gene OsPIN5b causes pleiotropic effects, mainly reducing plant height, leaf and tiller number, shoot and root biomass, seed-setting rate, panicle length and yield parameters. Conversely, reduced expression of OsPIN5b results in higher tiller number, a more vigorous root system, longer panicles and increased yield. We show that OsPIN5b is an endoplasmic reticulum (ER)-localized protein that participates in auxin homeostasis, transport and distribution in vivo. This work describes an example of an auxin-related gene where modulating its expression can simultaneously improve plant architecture and yield potential in rice, and reveals an important effect of hormonal signaling on these traits.

  15. The GOES-R Product Generation Architecture - Post CDR Update

    NASA Astrophysics Data System (ADS)

    Dittberner, G.; Kalluri, S.; Weiner, A.

    2012-12-01

    The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution and much shorter relook times. Considerably greater compute and memory resources are required to achieve the necessary latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so a scalable and reliable messaging layer is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
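
    As an illustration of the Executive/Dispatcher/Strategy split, the sketch below shows how a Strategy might gate an algorithm service until all of its inputs have arrived; every class name and signature here is an assumption for illustration, not the GOES-R GS interface:

```python
# Minimal, hypothetical sketch of the SBA roles described above.

class Strategy:
    """Decides when an algorithm has the inputs it needs to fire."""
    def __init__(self, required_inputs):
        self.required = set(required_inputs)
    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    """Feeds available granules to the algorithm once the strategy fires."""
    def __init__(self, strategy, algorithm):
        self.strategy, self.algorithm = strategy, algorithm
        self.pool = {}
    def on_data_event(self, name, granule):   # event, e.g. from the Data Fabric
        self.pool[name] = granule
        if self.strategy.ready(self.pool):
            return self.algorithm(**{k: self.pool[k] for k in self.strategy.required})

# Executive role: configure an algorithm as a service.
cloud_mask = lambda band2, band14: ("CloudMask", band2, band14)
svc = Dispatcher(Strategy(["band2", "band14"]), cloud_mask)
svc.on_data_event("band2", "granule-A")
print(svc.on_data_event("band14", "granule-B"))  # fires once inputs are complete
```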

  16. Perspective on intelligent avionics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H.L.

    1987-01-01

    Technical issues which could potentially limit the capability and acceptability of expert-system decision-making for avionics applications are addressed. These issues are: real-time AI, mission-critical software, conventional algorithms, pilot interface, knowledge acquisition, and distributed expert systems. Examples from on-going expert system development programs are presented to illustrate likely architectures and applications of future intelligent avionic systems. 13 references.

  17. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    hierarchical control architecture that enables a hybrid control approach, where centralized control systems will be complemented by distributed control algorithms for solar inverters and autonomous control of...involves developing a novel control scheme that provides system-wide monitoring and control using a small...

  18. Adaptive architectures for resilient control of networked multiagent systems in the presence of misbehaving agents

    NASA Astrophysics Data System (ADS)

    Torre, Gerardo De La; Yucelen, Tansel

    2018-03-01

    Control algorithms of networked multiagent systems are generally computed distributively without having a centralised entity monitoring the activity of agents; and therefore, unforeseen adverse conditions such as uncertainties or attacks to the communication network and/or failure of agent-wise components can easily result in system instability and prohibit the accomplishment of system-level objectives. In this paper, we study resilient coordination of networked multiagent systems in the presence of misbehaving agents, i.e. agents that are subject to exogenous disturbances that represent a class of adverse conditions. In particular, a distributed adaptive control architecture is presented for directed and time-varying graph topologies to retrieve a desired networked multiagent system behaviour. In contrast to the existing relevant literature, which makes specific assumptions on the graph topology and/or the fraction of misbehaving agents, we show that the considered class of adverse conditions can be mitigated by the proposed adaptive control approach that utilises a local state emulator - even if all agents are misbehaving. Illustrative numerical examples are provided to demonstrate the theoretical findings.
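
    A toy scalar simulation can illustrate the emulator idea: each agent compares its measured state against a local emulator of the nominal consensus dynamics and adapts a correction from the mismatch. The gains and the PI-style adaptation law below are illustrative choices, not the paper's control laws:

```python
# Toy illustration: consensus recovered despite constant disturbances on two
# agents, using a local state emulator per agent. Not the paper's design.
import random

random.seed(1)
n, dt, steps = 5, 0.01, 5000
x = [random.uniform(-1.0, 1.0) for _ in range(n)]   # true agent states
xhat = list(x)                                      # local state emulators
z = [0.0] * n                                       # integral adaptation states
delta = [0.0, 0.0, 0.8, 0.0, -0.5]                  # agents 2 and 4 misbehave
kp, ki = 4.0, 4.0                                   # adaptation gains (chosen ad hoc)

for _ in range(steps):
    lap = [sum(x[j] - x[i] for j in range(n) if j != i) for i in range(n)]
    for i in range(n):
        e = x[i] - xhat[i]            # emulator mismatch reveals the disturbance
        z[i] += dt * ki * e
        dhat = kp * e + z[i]          # adaptive estimate of delta[i]
        xhat[i] += dt * lap[i]        # emulator follows nominal consensus dynamics
        x[i] += dt * (lap[i] + delta[i] - dhat)

print([round(v, 3) for v in x])       # near-identical values: consensus recovered
```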

  19. Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.

    PubMed

    Blobel, B G M E; Engel, K; Pharow, P

    2006-01-01

    To meet the challenge for high quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. For enabling semantic interoperability, a unified process for defining and implementing the architecture, i.e. structure and functions of the cooperating systems' components, as well as the approach for knowledge representation, i.e. the used information and its interpretation, algorithms, etc. have to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantic interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is a close collaboration between the teams involved guaranteeing a convergence between competing approaches.

  20. Sedimentary architecture of a sub-lacustrine debris fan: Eocene Dongying Depression, Bohai Bay Basin, east China

    NASA Astrophysics Data System (ADS)

    Liu, Jianping; Xian, Benzhong; Wang, Junhui; Ji, Youliang; Lu, Zhiyong; Liu, Saijun

    2017-12-01

    The sedimentary architectures of submarine/sublacustrine fans are controlled by sedimentary processes, geomorphology and sediment composition in sediment gravity flows. To advance understanding of the sedimentary architecture of debris fans formed predominantly by debris flows in deep-water environments, a sub-lacustrine fan (Y11 fan) within a lacustrine succession has been identified and studied through the integration of core data, well logging data and 3D seismic data in the Eocene Dongying Depression, Bohai Bay Basin, east China. Six types of resedimented lithofacies can be recognized, which are further grouped into five broad lithofacies associations. Quantitative lithofacies analysis demonstrates that the fan is dominated by debris flows, while turbidity currents and sandy slumps are less important. The distribution, geometry and sedimentary architecture of the fan are documented using well data and 3D seismic data. A well-developed depositional lobe with a high aspect ratio is identified based on a sandstone isopach map. Canyons and/or channels are absent, which is probably due to the unsteady sediment supply from delta-front collapse. Distributary tongue-shaped debris flow deposits can be observed at different stages of fan growth, suggesting a lobe constructed by debrite tongue complexes. Within each stage of the tongue complexes, architectural elements are interpreted from wireline log motifs showing amalgamated debrite tongues, which constitute the primary fan elements. Based on lateral lithofacies distribution and vertical sequence analysis, it is proposed that lakefloor erosion, entrainment and dilution in the flow direction lead to an organized distribution of sandy debrites, muddy debrites and turbidites on individual debrite tongues. The plastic rheology of debris flows combined with fault-related topography is considered the major factor controlling sediment distribution and fan architecture. An important implication of this study is the proposed deep-water depositional model for debrite-dominated systems, which may be applicable to other similar deep-water environments.

  1. Development of life prediction capabilities for liquid propellant rocket engines. Post-fire diagnostic system for the SSME system architecture study

    NASA Technical Reports Server (NTRS)

    Gage, Mark; Dehoff, Ronald

    1991-01-01

    This system architecture task (1) analyzed the current process used to make an assessment of engine and component health after each test or flight firing of an SSME, (2) developed an approach and a specific set of objectives and requirements for automated diagnostics during post fire health assessment, and (3) listed and described the software applications required to implement this system. The diagnostic system described is a distributed system with a database management system to store diagnostic information and test data, a CAE package for visual data analysis and preparation of plots of hot-fire data, a set of procedural applications for routine anomaly detection, and an expert system for the advanced anomaly detection and evaluation.
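
    A "procedural application for routine anomaly detection" could be as simple as red-line limit checks over hot-fire test channels, with flagged samples handed to the expert-system stage; the channels and limits below are entirely illustrative:

```python
# Illustrative sketch only: limit-based anomaly screening of test data,
# standing in for the routine-detection applications described above.

LIMITS = {"hpotp_speed_rpm": (0, 28000), "mcc_pressure_psi": (2800, 3200)}

def routine_checks(records):
    """records: iterable of (time_s, channel, value) samples."""
    anomalies = []
    for t, channel, value in records:
        lo, hi = LIMITS.get(channel, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            anomalies.append({"t": t, "channel": channel, "value": value})
    return anomalies   # would be queued for the expert-system evaluation stage

test_data = [(1.0, "mcc_pressure_psi", 3050), (2.0, "mcc_pressure_psi", 3400)]
print(routine_checks(test_data))   # flags the 3400 psi sample
```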

  2. A RESTful Service Oriented Architecture for Science Data Processing

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Tilmes, C.; Durbin, P.; Masuoka, E.

    2012-12-01

    The Atmospheric Composition Processing System is an implementation of a RESTful Service Oriented Architecture which handles incoming data from the Ozone Monitoring Instrument and the Ozone Monitoring and Profiler Suite aboard the Aura and NPP spacecraft, respectively. The system has been built entirely from open source components, such as Postgres, Perl, and SQLite, and has leveraged the vast resources of the Comprehensive Perl Archive Network (CPAN). The modular design of the system also allows many of the components to be easily released and integrated into the CPAN ecosystem and reused independently. At minimal expense, the CPAN infrastructure and community provide peer review, feedback and continuous testing in a wide variety of environments and architectures. A well-defined set of conventions also facilitates dependency management, packaging, and distribution of code. Test-driven development also provides a way to ensure stability despite a continuously changing base of dependencies.

  3. Decentralized Formation Flying Control in a Multiple-Team Hierarchy

    NASA Technical Reports Server (NTRS)

    Mueller, Joseph; Thomas, Stephanie J.

    2005-01-01

    This paper presents the prototype of a system that addresses these objectives: a decentralized guidance and control system that is distributed across spacecraft using a multiple-team framework. The objective is to divide large clusters into teams of manageable size, so that the communication and computational demands driven by N decentralized units are related to the number of satellites in a team rather than the entire cluster. The system is designed to provide a high level of autonomy, to support clusters with large numbers of satellites, to enable the number of spacecraft in the cluster to change post-launch, and to provide for on-orbit software modification. The distributed guidance and control system will be implemented in an object-oriented style using MANTA (Messaging Architecture for Networking and Threaded Applications). In this architecture, tasks may be remotely added, removed or replaced post-launch to increase mission flexibility and robustness. This built-in adaptability will allow software modifications to be made on-orbit in a robust manner. The prototype system, which is implemented in MATLAB, emulates the object-oriented and message-passing features of the MANTA software. In this paper, the multiple-team organization of the cluster is described, and the modular software architecture is presented. The relative dynamics in eccentric reference orbits are reviewed, and families of periodic relative trajectories are identified, expressed as sets of static geometric parameters. The guidance law design is presented, and an example reconfiguration scenario is used to illustrate the distributed process of assigning geometric goals to the cluster. Next, a decentralized maneuver planning approach is presented that utilizes linear-programming methods to enact reconfiguration and coarse formation-keeping maneuvers. Finally, a method for performing online collision avoidance is discussed, and an example is provided to gauge its performance.
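
    The linear-programming maneuver planning step can be illustrated on a toy problem: minimizing total |delta-v| for a double integrator that must reach a target state in N impulsive burns. The sketch below uses scipy's linprog; the dynamics, horizon, and units are illustrative, and the mission's actual formulation is far richer:

```python
# Toy L1-optimal maneuver plan via linear programming; illustration only.
import numpy as np
from scipy.optimize import linprog

N, dt = 20, 10.0                      # number of burns and step length (s)
A = np.array([[1.0, dt], [0.0, 1.0]]) # position/velocity transition
B = np.array([0.0, 1.0])              # impulsive delta-v enters velocity
x0, xf = np.array([0.0, 0.0]), np.array([1000.0, 0.0])

# Split u = up - um with up, um >= 0 so that sum(up + um) = sum |u|.
Phi = np.column_stack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
Aeq = np.hstack([Phi, -Phi])                       # reach xf exactly
beq = xf - np.linalg.matrix_power(A, N) @ x0
res = linprog(c=np.ones(2 * N), A_eq=Aeq, b_eq=beq, bounds=[(0, None)] * (2 * N))
u = res.x[:N] - res.x[N:]
print(f"total |dv| = {abs(u).sum():.3f} m/s, burns at steps {np.nonzero(abs(u) > 1e-6)[0]}")
```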

  4. Distributed Control of Turbofan Engines

    DTIC Science & Technology

    2009-08-01

    performance of the engine. Thus the Full Authority Digital Engine Controller (FADEC) still remains the central arbiter of the engine's dynamic behavior...instance, if the control laws are not distributed the dependence on the FADEC remains high, and system reliability can only be ensured through many...if distributed computing is used at the local level and only coordinated by the FADEC. Such an architecture must be studied in the context of noisy

  5. From experiments to simulations: tracing Na+ distribution around roots under different transpiration rates and salinity levels

    NASA Astrophysics Data System (ADS)

    Perelman, Adi; Jorda, Helena; Vanderborght, Jan; Pohlmeier, Andreas; Lazarovitch, Naftali

    2017-04-01

    When salinity increases beyond a certain threshold, crop yield declines at a fixed rate, according to the Maas and Hoffman model (1976). Predicting salinization and its impact on crops is therefore of great importance. Current models do not consider the impact of environmental conditions on plants' salt tolerance, even though these conditions affect plant water uptake and therefore salt accumulation around the roots. Different factors, such as transpiration rate, can influence plant sensitivity to salinity by influencing salt concentrations around the roots. Better parametrization of a model can help improve prediction of the real effects of salinity on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to study how this distribution is affected by transpiration rate and plant water uptake. Results from tomato plants growing on Rhizoslides (a capillary paper growth system) show that Na+ concentration is higher at the root-substrate interface than in the bulk. Also, Na+ accumulation around the roots decreased under a low transpiration rate, which supports our hypothesis. Additionally, Rhizoslides make it possible to study roots' growth rate and architecture under different salinity levels. Root system architecture was retrieved from photos taken during the experiment, enabling us to incorporate real root systems into a simulation. To observe the correlation of root system architectures and Na+ distribution in three dimensions, we used magnetic resonance imaging (MRI). MRI provides fine resolution of Na+ accumulation around a single root without disturbing the root system. With time, Na+ accumulated only where roots were found in the soil, and later on around specific roots. These data are being used for model calibration, which is expected to predict root water uptake in saline soils for different climatic conditions and different soil water availabilities.

  6. Fault Management Architectures and the Challenges of Providing Software Assurance

    NASA Technical Reports Server (NTRS)

    Savarino, Shirley; Fitz, Rhonda; Fesq, Lorraine; Whitman, Gerek

    2015-01-01

    Fault Management (FM) is focused on safety, the preservation of assets, and maintaining the desired functionality of the system. How FM is implemented varies among missions. Common to most missions is system complexity due to a need to establish a multi-dimensional structure across hardware, software and spacecraft operations. FM is necessary to identify and respond to system faults, mitigate technical risks and ensure operational continuity. Generally, FM architecture, implementation, and software assurance efforts increase with mission complexity. Because FM is a systems engineering discipline with a distributed implementation, providing efficient and effective verification and validation (V&V) is challenging. A breakout session at the 2012 NASA Independent Verification & Validation (IV&V) Annual Workshop titled "V&V of Fault Management: Challenges and Successes" exposed this issue in terms of V&V for a representative set of architectures. NASA's Software Assurance Research Program (SARP) has provided funds to NASA IV&V to extend the work performed at the Workshop session in partnership with NASA's Jet Propulsion Laboratory (JPL). NASA IV&V will extract FM architectures across the IV&V portfolio and evaluate the data set, assess visibility for validation and test, and define software assurance methods that could be applied to the various architectures and designs. This SARP initiative focuses efforts on FM architectures from critical and complex projects within NASA. The identification of particular FM architectures and associated V&V/IV&V techniques provides a data set that can enable improved assurance that a system will adequately detect and respond to adverse conditions. Ultimately, results from this activity will be incorporated into the NASA Fault Management Handbook providing dissemination across NASA, other agencies and the space community. This paper discusses the approach taken to perform the evaluations and preliminary findings from the research.

  7. Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to treat that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.
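
    One ancillary feature such a lock mechanism needs is deadlock detection. Assuming each accessor waits on at most one lock holder at a time, a wait-for-graph cycle check suffices; the sketch below is illustrative, not the PIA implementation:

```python
# Hypothetical wait-for-graph deadlock detector; names are illustrative.

def find_deadlock(wait_for):
    """wait_for maps an accessor to the accessor whose lock it awaits.
    Returns a deadlocked cycle (list of accessors) if one exists, else None."""
    for start in wait_for:
        seen, node = [], start
        while node in wait_for:
            if node in seen:
                return seen[seen.index(node):]   # the cycle of mutual waiters
            seen.append(node)
            node = wait_for[node]
    return None

# A waits on B, B waits on C, C waits on A -> deadlock among all three.
print(find_deadlock({"A": "B", "B": "C", "C": "A"}))  # ['A', 'B', 'C']
print(find_deadlock({"A": "B", "B": "C"}))            # None
```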

  8. The Application of Hardware in the Loop Testing for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Thomas, George L.; Culley, Dennis E.; Brand, Alex

    2016-01-01

    The essence of a distributed control system is the modular partitioning of control function across a hardware implementation. This type of control architecture requires embedding electronics in a multitude of control element nodes for the execution of those functions, and their integration as a unified system. As the field of distributed aeropropulsion control moves toward reality, questions about building and validating these systems remain. This paper focuses on the development of hardware-in-the-loop (HIL) test techniques for distributed aero engine control, and the application of HIL testing as it pertains to potential advanced engine control applications that may now be possible due to the intelligent capability embedded in the nodes.

  9. A broadband multimedia TeleLearning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruiping; Karmouch, A.

    1996-12-31

    In this paper we discuss a broadband multimedia TeleLearning system under development in the Multimedia Information Research Laboratory at the University of Ottawa. The system aims at providing a seamless environment for TeleLearning using the latest telecommunication and multimedia information processing technology. It basically consists of a media production center, a courseware author site, a courseware database, a courseware user site, and an on-line facilitator site. All these components are distributed over an ATM network and work together to offer a multimedia interactive courseware service. An MHEG-based model is exploited in designing the system architecture to achieve real-time, interactive, and reusable information interchange across heterogeneous platforms. The system architecture, courseware processing strategies, and courseware document models are presented.

  10. Towards a distributed information architecture for avionics data

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan

    2003-01-01

    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.
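
    The dictionary-mediated query translation can be sketched as follows; the element names, source schemas, and the synchronous merging are simplifications invented for illustration (the OODT middleware returns results asynchronously), not the OODT API:

```python
# Sketch of common-dictionary query translation, standing in for the
# ISO/IEC 11179-style data model described above. All names hypothetical.

COMMON_DICTIONARY = {
    "component.mass": {"vendorA": "MASS_KG", "vendorB": "weight"},
    "component.power": {"vendorA": "PWR_W", "vendorB": "power_draw"},
}

SOURCES = {   # stand-ins for distributed manufacturer data sources
    "vendorA": [{"MASS_KG": 1.2, "PWR_W": 4.5}],
    "vendorB": [{"weight": 0.9, "power_draw": 3.1}],
}

def query(element):
    """Translate a dictionary-level element to each source's local field,
    query all sources, and merge the results."""
    results = []
    for source, local_field in COMMON_DICTIONARY[element].items():
        for record in SOURCES[source]:
            results.append((source, record[local_field]))
    return results

print(query("component.mass"))  # [('vendorA', 1.2), ('vendorB', 0.9)]
```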

  11. Facilitating the Specification Capture and Transformation Process in the Development of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Filho, Aluzio Haendehen; Caminada, Numo; Haeusler, Edward Hermann; vonStaa, Arndt

    2004-01-01

    To support the development of flexible and reusable multi-agent systems (MAS), we have built a framework designated MAS-CF. MAS-CF is a component framework that implements a layered architecture based on contextual composition. Interaction rules, controlled by architecture mechanisms, ensure very low coupling, making possible the sharing of distributed services in a transparent, dynamic and independent way. These properties facilitate large-scale reuse, since organizational abstractions can be reused and propagated to all instances created from a framework. The objective is to reduce the complexity and development time of multi-agent systems through the reuse of generic organizational abstractions.

  12. Controlling multiple security robots in a warehouse environment

    NASA Technical Reports Server (NTRS)

    Everett, H. R.; Gilbreath, G. A.; Heath-Pastore, T. A.; Laird, R. T.

    1994-01-01

    The Naval Command Control and Ocean Surveillance Center (NCCOSC) has developed an architecture to provide coordinated control of multiple autonomous vehicles from a single host console. The multiple robot host architecture (MRHA) is a distributed multiprocessing system that can be expanded to accommodate as many as 32 robots. The initial application will employ eight Cybermotion K2A Navmaster robots configured as remote security platforms in support of the Mobile Detection Assessment and Response System (MDARS) Program. This paper discusses developmental testing of the MRHA in an operational warehouse environment, with two actual and four simulated robotic platforms.

  13. Final Report for File System Support for Burst Buffers on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, W.; Mohror, K.

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations is of great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
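
    A key-value metadata store for burst buffers might shard file metadata across node-local stores by key hash, avoiding a central metadata server; the sketch below is only a guess at the general idea, not the MetaKV code:

```python
# Hypothetical sharded metadata store for distributed burst buffers.
import hashlib

class ShardedMetaStore:
    def __init__(self, n_nodes):
        self.shards = [{} for _ in range(n_nodes)]   # one dict per BB node

    def _shard(self, key):
        # Hash the path so any client can locate the owning node directly.
        h = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
        return self.shards[h % len(self.shards)]

    def put(self, path, meta):
        self._shard(path)[path] = meta

    def get(self, path):
        return self._shard(path).get(path)

store = ShardedMetaStore(n_nodes=4)
store.put("/ckpt/step100/rank0", {"size": 1 << 20, "node": 2, "offset": 0})
print(store.get("/ckpt/step100/rank0"))
```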

  14. Medical Data Architecture (MDA) Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Gurram, M.; Wolfe, S.; Marker, N.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2018-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) risk to minimize or reduce the risk of adverse health outcomes and decrements in performance due to in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that the current in-flight medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm. The medical system requirements are being developed in parallel with the exploration mission architecture and vehicle design. ExMC has recognized that in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight to medical conditions. This medical data architecture will provide the necessary functionality to address the challenges of executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. Hence, the products supported by current prototype development will directly inform exploration medical system requirements.

  16. New Generation Power System for Space Applications

    NASA Technical Reports Server (NTRS)

    Jones, Loren; Carr, Greg; Deligiannis, Frank; Lam, Barbara; Nelson, Ron; Pantaleon, Jose; Ruiz, Ian; Treicler, John; Wester, Gene; Sauers, Jim

    2004-01-01

    The Deep Space Avionics (DSA) Project is developing a new generation of power system building blocks. Using application specific integrated circuits (ASICs) and power switching modules, a scalable power system can be constructed for use on multiple deep space missions including future missions to Mars, comets, Jupiter and its moons. The key developments of the DSA power system effort are five power ASICs and a module for power switching. These components enable a modular and scalable design approach, which can result in a wide variety of power system architectures to meet diverse mission requirements and environments. Each component is radiation hardened to one megarad total dose. The power switching module can be used for power distribution to regular spacecraft loads, to propulsion valves and actuation of pyrotechnic devices. The number of switching elements per load, pyrotechnic firings and valve drivers can be scaled depending on mission needs. Telemetry data is available from the switch module via an I2C data bus. The DSA power system components enable power management and distribution for a variety of power buses and power system architectures employing different types of energy storage and power sources. This paper will describe each power ASIC's key performance characteristics as well as recent prototype test results. The power switching module test results will be discussed and will demonstrate its versatility as a multipurpose switch. Finally, the combination of these components will illustrate some of the possible power system architectures achievable, from small single-string systems to large fully redundant systems.

  17. A DRM based on renewable broadcast encryption

    NASA Astrophysics Data System (ADS)

    Ramkumar, Mahalingam; Memon, Nasir

    2005-07-01

    We propose an architecture for digital rights management based on a renewable, random key pre-distribution (KPD) scheme, HARPS (hashed random preloaded subsets). The proposed architecture caters for broadcast encryption by a trusted authority (TA) and by "parent" devices (devices used by vendors who manufacture compliant devices) for periodic revocation of devices. The KPD also facilitates broadcast encryption by peer devices, which permits peers to distribute content, and efficiently control access to the content encryption secret using subscription secrets. The underlying KPD also caters for broadcast authentication and mutual authentication of any two devices, irrespective of the vendors manufacturing the device, and thus provides a comprehensive solution for securing interactions between devices taking part in a DRM system.
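
    A simplified HARPS-like scheme can be sketched in a few lines: each device is preloaded with a random subset of master keys, each hashed a random number of times, and two devices derive a shared secret by hashing the shallower copies of their common keys forward to the deeper depth. All parameters and names below are illustrative, not the paper's construction:

```python
# Toy hashed-random-preloaded-subsets (HARPS-like) key predistribution.
import hashlib, random

def h(x, times=1):
    for _ in range(times):
        x = hashlib.sha256(x).digest()
    return x

P, k, L = 64, 16, 8                        # pool size, ring size, max hash depth
master = [h(bytes([i])) for i in range(P)] # TA's master key pool

def preload(device_id):
    rng = random.Random(device_id)         # TA derives the ring from a device ID
    ring = {}
    for i in rng.sample(range(P), k):
        depth = rng.randint(1, L)
        ring[i] = (depth, h(master[i], depth))
    return ring

def shared_secret(ring_a, ring_b):
    parts = []
    for i in sorted(set(ring_a) & set(ring_b)):
        (da, ka), (db, kb) = ring_a[i], ring_b[i]
        # The shallower copy is hashed forward to match the deeper depth.
        parts.append(h(ka, db - da) if da < db else h(kb, da - db))
    return hashlib.sha256(b"".join(parts)).hexdigest() if parts else None

a, b = preload("device-A"), preload("device-B")
print(shared_secret(a, b) == shared_secret(b, a))   # True: both derive the same key
```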

  18. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations that lie at the basis of many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor per equation. In order to test the present idea, we chose to implement DCMARK on a single FPGA, designing the single processor to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we designed a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
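
    From each cell's perspective the computation is local: one grid point's ODE is integrated using only neighbor values. The toy explicit scheme below mirrors that one-processor-per-equation layout for the KdV equation; the discretization and parameters are illustrative choices, not the DCMARK design:

```python
# Toy per-cell view of a locally coupled KdV solver (illustration only).
import math

N, dx, dt = 200, 0.5, 1e-4
u = [math.cos(2 * math.pi * i / N) for i in range(N)]   # initial wave

def cell_rhs(u, i):
    # Each cell sees only its 2-neighborhood, as in a locally coupled CNN.
    m2, m1, p1, p2 = u[i - 2], u[i - 1], u[(i + 1) % N], u[(i + 2) % N]
    ux = (p1 - m1) / (2 * dx)
    uxxx = (p2 - 2 * p1 + 2 * m1 - m2) / (2 * dx ** 3)
    return -6 * u[i] * ux - uxxx          # KdV: u_t = -6 u u_x - u_xxx

for _ in range(1000):                     # all cells step in parallel conceptually
    u = [u[i] + dt * cell_rhs(u, i) for i in range(N)]

print(f"mass drift after 1000 steps: {abs(sum(u)):.2e}")   # crude sanity check
```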

  19. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
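
    The three-process split can be mimicked in a single program with queues standing in for the message-passing scheme; the channel names and the out-of-range rule below are invented for illustration, and the real system ran distributed processes rather than sequential functions:

```python
# Schematic of the Data Management / Inference / I/O pipeline described above.
from queue import Queue

telemetry, findings = Queue(), Queue()

def data_management(samples):
    for s in samples:                    # acquire/compress/route sensor data
        telemetry.put(s)
    telemetry.put(None)                  # end-of-stream marker

def inference():
    while (s := telemetry.get()) is not None:   # toy rule: flag out-of-range values
        if not 0 <= s["value"] <= 100:
            findings.put(f"ALERT {s['name']}={s['value']}")
    findings.put(None)

def io_process():
    while (msg := findings.get()) is not None:  # would update operator displays
        print(msg)

data_management([{"name": "battery_temp", "value": 130}])
inference()
io_process()   # prints: ALERT battery_temp=130
```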

  20. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  1. First 3 years of operation of RIACS (Research Institute for Advanced Computer Science) (1983-1985)

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    The focus of the Research Institute for Advanced Computer Science (RIACS) is to explore matches between advanced computing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems were initiated. Research projects for 1984 and 1985 are summarized.

  2. Parallel, Distributed Scripting with Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
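
    The password-checker example parallelizes naturally by scattering dictionary chunks across workers. The sketch below uses Python's standard multiprocessing module (which postdates this report; the MPI-based version it describes would distribute chunks similarly), and the hash scheme and word list are illustrative:

```python
# Illustrative parallel dictionary check, not the report's pyMPI code.
import hashlib
from multiprocessing import Pool

TARGET = hashlib.sha256(b"hunter2").hexdigest()   # stand-in "encrypted" password

def check_chunk(words):
    # Each worker grinds through its slice of the dictionary independently.
    return [w for w in words if hashlib.sha256(w.encode()).hexdigest() == TARGET]

if __name__ == "__main__":
    dictionary = ["password", "letmein", "hunter2", "qwerty"] * 6250  # 25,000 words
    chunks = [dictionary[i::4] for i in range(4)]     # scatter across 4 workers
    with Pool(4) as pool:
        hits = [w for found in pool.map(check_chunk, chunks) for w in found]
    print(sorted(set(hits)))   # ['hunter2']
```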

  3. Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bawej, Tomasz; et al.

    2014-01-01

    TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running in a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (Interrupt Request), CPU and memory affinities. During the first long shutdown of LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved and the performance measurements of the software in the context of the CMS distributed event building.

  4. Distributed Social Bookmarking Web Service Architecture. SOAP vs. iCamp FeedBack

    ERIC Educational Resources Information Center

    Afonin, Andrej

    2011-01-01

    Social bookmarking services became very popular recently. Ease of use and the possibility to share and discover, in addition to accessibility through the Internet, turn social bookmarking systems into a powerful repository of shared knowledge. Obviously this attracts the attention of educational institutions, and recently such systems started to appear under…

  5. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak

    Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

  6. Advanced software integration: The case for ITV facilities

    NASA Technical Reports Server (NTRS)

    Garman, John R.

    1990-01-01

    The array of technologies and methodologies involved in the development and integration of avionics software has moved almost as rapidly as computer technology itself. Future avionics systems involve major advances and risks in the following areas: (1) Complexity; (2) Connectivity; (3) Security; (4) Duration; and (5) Software engineering. From an architectural standpoint, the systems will be much more distributed, involve session-based user interfaces, and have the layered architectures typified in the layers-of-abstraction concepts popular in networking. Typified in the NASA Space Station Freedom will be the highly distributed nature of software development itself. Systems composed of independent components developed in parallel must be bound by rigid standards and interfaces, and by clean requirements and specifications. Avionics software provides a challenge in that it cannot be flight-tested until the first time it literally flies. It is the binding of requirements for such an integration environment into the advances and risks of future avionics systems that forms the basis of the presented concept and the basic Integration, Test, and Verification concept within the development and integration life cycle of Space Station Mission and Avionics systems.

  7. Evaluating Cloud Computing in the Proposed NASA DESDynI Ground Data System

    NASA Technical Reports Server (NTRS)

    Tran, John J.; Cinquini, Luca; Mattmann, Chris A.; Zimdars, Paul A.; Cuddy, David T.; Leung, Kon S.; Kwoun, Oh-Ig; Crichton, Dan; Freeborn, Dana

    2011-01-01

    The proposed NASA Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) mission would be a first-of-breed endeavor that would fundamentally change the paradigm by which Earth Science data systems at NASA are built. DESDynI is evaluating a distributed architecture where expert science nodes around the country all engage in some form of mission processing and data archiving. This is compared to the traditional NASA Earth Science missions where the science processing is typically centralized. What's more, DESDynI is poised to profoundly increase the amount of data collection and processing, well into the 5 terabyte/day and tens-of-thousands-of-jobs range, both of which pose a tremendous challenge to DESDynI's proposed distributed data system architecture. In this paper, we report on a set of architectural trade studies and benchmarks meant to inform the DESDynI mission and the broader community of the impacts of these unprecedented requirements. In particular, we evaluate the benefits of cloud computing and its integration with our existing NASA ground data system software called Apache Object Oriented Data Technology (OODT). The preliminary conclusions of our study suggest that the use of the cloud and OODT together synergistically form an effective, efficient and extensible combination that could meet the challenges of NASA science missions requiring DESDynI-like data collection and processing volumes at reduced costs.

  8. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e. closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require a high-density array of real-time information to be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e. low latency). Advances in display and sensor technologies are providing never before seen opportunities to supply large amounts of high fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing it simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will enable both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  9. Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott

    2011-01-01

    This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
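
    The multicast-tree fan-out can be sketched schematically: a RAMS-like gateway delivers to its local subscribers and forwards one copy per subtree rather than one per subscriber. The topology and all names below are invented for illustration, not taken from the CCSDS specification:

```python
# Schematic fan-out over a multicast distribution tree (illustration only).

TREE = {  # gateway -> child gateways
    "earth": ["relay-mars", "relay-moon"],
    "relay-mars": ["lander-1", "lander-2"],
    "relay-moon": [],
}
SUBSCRIBERS = {"lander-1": ["app-a"], "lander-2": ["app-b"], "relay-moon": ["app-c"]}

def publish(node, subject, payload, hops=0):
    # Deliver to local subscribers, then store-and-forward down the tree:
    # each hop carries one bundle per subtree, not one per subscriber.
    for app in SUBSCRIBERS.get(node, []):
        print(f"{app} <- [{subject}] {payload} ({hops} hops)")
    for child in TREE.get(node, []):
        publish(child, subject, payload, hops + 1)

publish("earth", "telemetry", "hello")
```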

  10. Enterprise Management Network Architecture Distributed Knowledge Base Support

    DTIC Science & Technology

    1990-11-01

    Advantages: Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: first, it can be more reliable...does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition...problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse

  11. GBU-X bounding requirements for highly flexible munitions

    NASA Astrophysics Data System (ADS)

    Bagby, Patrick T.; Shaver, Jonathan; White, Reed; Cafarelli, Sergio; Hébert, Anthony J.

    2017-04-01

    This paper presents the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The underlying requirement of such a communication architecture is to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The latency requirements of the open-architecture software and hardware were derived through the use of control system and stability margin analyses. Studies were performed on the throughput and latency of different existing communication transport methods. The two architectures tested in this study are Data Distribution Service (DDS) and Modular Open Network Architecture (MONARCH). This paper defines what levels of latency can be achieved with current technology and how this capability may translate to future weapons. Requirements for communication solutions moving forward are also discussed.

  12. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1987-01-01

    The results of ongoing research directed at developing a graph-theoretical model for describing the data and control flow associated with the execution of large-grained algorithms in a spatially distributed computing environment are presented. This model is identified by the acronym ATAMM (Algorithm/Architecture Mapping Model). The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM-based architecture is to optimize computational concurrency in the multiprocessor environment and to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
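
    A toy token-based reading of such a graph model shows how concurrency falls out of the data dependencies: a node may fire as soon as tokens exist on all of its input edges. The graph and firing loop below are illustrative, not an ATAMM specification:

```python
# Toy dataflow firing rule over a large-grained algorithm graph.

GRAPH = {  # node -> (input edges, output edges)
    "read":   ((), ("a",)),
    "filter": (("a",), ("b",)),
    "fft":    (("a",), ("c",)),
    "merge":  (("b", "c"), ("d",)),
}

def run(initial_tokens):
    tokens, fired, progress = set(initial_tokens), [], True
    while progress:
        progress = False
        for node, (ins, outs) in GRAPH.items():
            if node not in fired and all(e in tokens for e in ins):
                fired.append(node)      # 'filter' and 'fft' become ready together
                tokens |= set(outs)     # once edge 'a' carries a token: concurrency
                progress = True
    return fired

print(run(initial_tokens=[]))  # ['read', 'filter', 'fft', 'merge']
```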

  13. Use of Open Architecture Middleware for Autonomous Platforms

    NASA Astrophysics Data System (ADS)

    Naranjo, Hector; Diez, Sergio; Ferrero, Francisco

    2011-08-01

    Network Enabled Capabilities (NEC) is the vision for next-generation systems in the defence domain formulated by governments, the European Defence Agency (EDA) and the North Atlantic Treaty Organization (NATO). It involves the federation of military information systems, rather than just a simple interconnection, to provide each user with the "right information, right place, right time - and not too much". It defines openness, standardization and flexibility principles for military systems that are likewise applicable to civilian space applications. This paper provides the conclusions drawn from the "Architecture for Embarked Middleware" (EMWARE) study, funded by the European Defence Agency (EDA). The aim of the EMWARE project was to provide the information and understanding needed to facilitate informed decisions regarding the specification and implementation of Open Architecture Middleware in future distributed systems, linking it with the NEC goal. The EMWARE project included the definition of four business cases, each devoted to a different field of application (Unmanned Aerial Vehicles, Helicopters, Unmanned Ground Vehicles and the Satellite Ground Segment).

  14. Architecture for reactive planning of robot actions

    NASA Astrophysics Data System (ADS)

    Riekki, Jukka P.; Roening, Juha

    1995-01-01

    In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.

  15. Automated monitoring of medical protocols: a secure and distributed architecture.

    PubMed

    Alsinet, T; Ansótegui, C; Béjar, R; Fernández, C; Manyà, F

    2003-03-01

    The control of the correct application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully or semi-autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, and so specialized domain agents are independent of negotiation processes and autonomous system agents perform monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the process of exchanging information between agents.

  16. micROS: a morphable, intelligent and collective robot operating system.

    PubMed

    Yang, Xuejun; Dai, Huadong; Yi, Xiaodong; Wang, Yanzhen; Yang, Shaowu; Zhang, Bo; Wang, Zhiyuan; Zhou, Yun; Peng, Xuefeng

    2016-01-01

    Robots are developing in much the same way that personal computers did 40 years ago, and the robot operating system is their critical basis. Current robot software is mainly designed for individual robots. We present in this paper the design of micROS, a morphable, intelligent and collective robot operating system for future collective and collaborative robots. We first present the architecture of micROS, including the distributed architecture for the collective robot system as a whole and the layered architecture for every single node. We then present the design of autonomous behavior management based on the observe-orient-decide-act cognitive behavior model and the design of collective intelligence including collective perception, collective cognition, collective game and collective dynamics. We also give the design of morphable resource management, which first categorizes robot resources into physical, information, cognitive and social domains, and then achieves morphability based on self-adaptive software technology. We finally deploy micROS on NuBot football robots and achieve significant improvement in real-time performance.
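
    As a rough illustration of the observe-orient-decide-act cycle named above, here is a toy Python sketch of one behaviour-management loop; the sensor, model and policy names are invented for the example and are not part of micROS.

```python
# Toy observe-orient-decide-act (OODA) cycle; all names are illustrative.

def observe(sensors):
    return {name: read() for name, read in sensors.items()}

def orient(observation, model):
    model.update(observation)        # fold new readings into the world model
    return model

def decide(model):
    # Trivial policy: advance if the path is clear, otherwise turn.
    return "advance" if model.get("range", 0.0) > 0.5 else "turn"

def act(action):
    print(f"executing: {action}")

world_model = {}
sensors = {"range": lambda: 0.8}     # stand-in for a real range sensor
for _ in range(3):                   # a few iterations of the cycle
    act(decide(orient(observe(sensors), world_model)))
```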

  17. Fiber-Optic Network Architectures for Onboard Avionics Applications Investigated

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung D.; Ngo, Duc H.

    2003-01-01

    This project is part of a study within the Advanced Air Transportation Technologies program undertaken at the NASA Glenn Research Center. The main focus of the program is the improvement of air transportation, with particular emphasis on air transportation safety. Current and future advances in digital data communications between an aircraft and the outside world will require high-bandwidth onboard communication networks. Radiofrequency (RF) systems, with their interconnection network based on coaxial cables and waveguides, increase the complexity of communication systems onboard modern civil and military aircraft with respect to weight, power consumption, and safety. In addition, electromagnetic interference between the RF components embedded in these communication systems raises safety and reliability concerns. A simple, reliable, and lightweight network that is free from the effects of electromagnetic interference and capable of supporting the broadband communications needs of future onboard digital avionics systems cannot be easily implemented using existing coaxial cable-based systems. Fiber-optic communication systems can meet all of these challenges of modern avionics applications in an efficient, cost-effective manner. The objective of this project is to present a number of optical network architectures for onboard RF signal distribution. Because of the emergence of a number of digital avionics devices requiring high-bandwidth connectivity, fiber-optic RF networks onboard modern aircraft will play a vital role in ensuring a low-noise, highly reliable RF communication system. Two approaches are being used for network architectures for aircraft onboard fiber-optic distribution systems: a hybrid RF-optical network and an all-optical wavelength division multiplexing (WDM) network.

  18. A Distributed Data Architecture for 2001 Mars Odyssey Data Distribution

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel J.; Hughes, J. Steven; Kelly, Sean

    2003-01-01

    Newer instruments and communications techniques have given scientists unprecedented amounts of data, more than can be feasibly distributed through traditional methods such as mailed CD-ROMs. Leveraging the web makes sense since it enables scientists to request specific data and retrieve products as soon as they're available. Yet defining the middleware system to support such an application had remained just out of reach, until Odyssey. For the first time ever, data from all Odyssey mission instruments were made available through a single system immediately upon delivery to the Planetary Data System (PDS). The Object Oriented Data Technology (OODT) software made such an application possible.

  19. Distributed architecture and distributed processing mode in urban sewage treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

    Decentralized rural sewage treatment facilities are spread over a broad area, which makes their operation and management difficult. Based on an analysis of rural sewage treatment models, and in response to these challenges, we describe the principle, structure and function of a distributed remote monitoring system built around networking and network communications technology, and use case analysis to explore the system's features in the daily operation and management of decentralized rural sewage treatment facilities. Practice shows that the remote monitoring system provides technical support for the long-term operation and effective supervision of these facilities while reducing operating, maintenance and supervision costs.

  20. Live Virtual Constructive Distributed Test Environment Characterization Report

    NASA Technical Reports Server (NTRS)

    Murphy, Jim; Kim, Sam K.

    2013-01-01

    This report documents message latencies observed over various Live, Virtual, Constructive (LVC) simulation environment configurations designed to emulate possible system architectures for the Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project integrated tests. For each configuration, four scenarios with progressively increasing air traffic loads were used to determine system throughput and bandwidth impacts on message latency.

  1. An overview of the NASA electronic components information management system

    NASA Technical Reports Server (NTRS)

    Kramer, G.; Waterbury, S.

    1991-01-01

    The NASA Parts Project Office (NPPO) comprehensive data system to support all NASA Electric, Electronic, and Electromechanical (EEE) parts management and technical data requirements is described. A phased delivery approach is adopted, comprising four principal phases. Phases 1 and 2 support Space Station Freedom (SSF) and use a centralized architecture with all data and processing kept on a mainframe computer. Phases 3 and 4 support all NASA centers and projects and implement a distributed system architecture, in which data and processing are shared among networked database servers. The Phase 1 system, which became operational in February 1990, implements a core set of functions. Phase 2, scheduled for release in 1991, adds functions to the Phase 1 system. Phase 3, to be prototyped beginning in 1991 and delivered in 1992, introduces a distributed system, separate from the Phase 1 and 2 system, with a refined semantic data model. Phase 4 extends the data model and functionality of the Phase 3 system to provide support for the NASA design community, including integration with Computer Aided Design (CAD) environments. Phase 4 is scheduled for prototyping in 1992-93 and delivery in 1994.

  2. Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.

    ERIC Educational Resources Information Center

    Beltrametti, Monica; English, Will

    1994-01-01

    Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…

  3. Decentralized and self-centered estimation architecture for formation flying of spacecraft

    NASA Technical Reports Server (NTRS)

    Kang, B. H.; Hadaegh, F. Y.; Scharf, D. P.; Ke, N. -P.

    2001-01-01

    Formation estimation methodologies for distributed spacecraft systems are formulated and analyzed. A generic form of the formation estimation problem is described by defining a common hardware configuration, observation graph, and feasible estimation topologies.

  4. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, S.; Craft, R.L.; Parks, R.C.

    1999-04-07

    Telemedicine technology is rapidly evolving. Whereas early telemedicine consultations relied primarily on video conferencing, consultations today may utilize video conferencing, medical peripherals, store-and-forward capabilities, electronic patient record management software, and/or a host of other emerging technologies. These remote care systems rely increasingly on distributed, collaborative information technology during the care delivery process, in its many forms. While these leading-edge systems are bellwethers for highly advanced telemedicine, the remote care market today is still immature. Most telemedicine systems are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. We propose a secure, object-oriented information architecture for telemedicine systems that promotes plug-and-play interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a lego-like fashion to achieve the desired device or system functionality. The architecture will support various ongoing standards work in the medical device arena.

  5. Mitigation of Remedial Action Schemes by Decentralized Robust Governor Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizondo, Marcelo A.; Marinovici, Laurentiu D.; Lian, Jianming

    This paper presents transient stability improvement by a new distributed hierarchical control architecture (DHC). The integration of remedial action schemes (RAS) into the distributed hierarchical control architecture is studied. RAS in power systems are designed to maintain stability and avoid undesired system conditions by rapidly switching equipment and/or changing operating points according to predetermined rules. The acceleration trend relay currently in use in the US western interconnection is an example of RAS that trips generators to maintain transient stability. The link between RAS and DHC is through fast-acting robust turbine/governor control that can also improve transient stability. In this paper, the influence of the decentralized robust turbine/governor control on the design of RAS is studied. Benefits of combining these two schemes include increased power transfer capability and mitigation of RAS generator tripping actions; the latter benefit is shown through simulations.

  6. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
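
    As a hedged illustration of the kind of policy the abstract describes, the sketch below scores candidate sites by combining queue wait, compute speed, and the time to move a job's input/output data over the inter-site link; the field names and the simple additive cost are assumptions for the example, not the paper's actual algorithms.

```python
# Toy job-migration scoring: lower total estimated completion time wins.
# All field names and weights are illustrative assumptions.

def migration_score(site, job):
    transfer_time = 8 * job["data_volume_gb"] / site["bandwidth_gbps"]  # GB -> Gb
    run_time = job["work_units"] / site["performance"]                  # units/s
    return site["queue_wait_s"] + transfer_time + run_time

sites = [
    {"name": "siteA", "bandwidth_gbps": 1.0, "performance": 50.0, "queue_wait_s": 600},
    {"name": "siteB", "bandwidth_gbps": 0.1, "performance": 80.0, "queue_wait_s": 60},
]
job = {"data_volume_gb": 100.0, "work_units": 4000.0}

best = min(sites, key=lambda s: migration_score(s, job))
print("migrate to", best["name"])   # siteA: cheap transfer outweighs queue wait
```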

  7. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. To realize man machine interfaces for complex applications, not only must the Virtual Reality part be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man machine interfaces. The architecture does not just provide a well suited framework for the real-time control of a multi robot system but also supports Virtual Reality metaphors and augmentations which make it easier for the user to command and supervise a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate, in real time, information from sensors at different levels of abstraction helps make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  8. Proposal of a Methodology for Implementing a Service-Oriented Architecture in Distributed Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Medina, I.; Garcia-Dominguez, A.; Aguayo, F.; Sevilla, L.; Marcos, M.

    2009-11-01

    As envisioned by Intelligent Manufacturing Systems (IMS), Next Generation Manufacturing Systems (NGMS) will satisfy the needs of an increasingly fast-paced and demanding market by dynamically integrating systems from inside and outside the manufacturing firm itself into a so-called extended enterprise. However, organizing these systems to ensure the maximum flexibility and interoperability with those from other organizations is difficult. Additionally, a defect in the system would have a great impact: it would affect not only its owner, but also its partners. For these reasons, we argue that a service-oriented architecture (SOA) would be a good candidate. It should be designed following a methodology where services play a central role, instead of being an implementation detail. In order for the architecture to be reliable enough as a whole, the methodology will need to help find errors before they arise in a production environment. In this paper we propose using SOA-specific testing techniques, compare some of the existing methodologies and outline several extensions upon one of them to integrate testing techniques.

  9. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  10. Centralized vs decentralized lunar power system study

    NASA Astrophysics Data System (ADS)

    Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.

    1991-09-01

    Three power-system options are considered with respect to utilization on a lunar base: the fully centralized option, the fully decentralized option, and a hybrid comprising features of the first two options. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high- and low-voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission, transmission voltages of 2000-7000 V, buried high-voltage lines, and suspended low-voltage lines. Assessments of the total cost associated with the installations are required to determine the most suitable power system.

  11. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.

  12. The ALMA software architecture

    NASA Astrophysics Data System (ADS)

    Schwarz, Joseph; Farris, Allen; Sommer, Heiko

    2004-09-01

    The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns: application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
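
    To illustrate the separation of functional from technical concerns that the Container/Component model prescribes, here is a minimal Python sketch under assumed names (Container, AntennaComponent, get_logger); it mimics the pattern only and is not the actual ACS API.

```python
# Sketch of the Container/Component pattern: the component holds only
# application logic and obtains technical services from its container.
# All class and method names are invented for this illustration.

import logging

class Container:
    """Provides technical services so components need not implement them."""
    def __init__(self, name):
        self.name = name
        self.components = {}

    def get_logger(self, component_name):
        logging.basicConfig(level=logging.INFO)
        return logging.getLogger(f"{self.name}.{component_name}")

    def activate(self, component):
        component.initialize(self)   # hand the component its lifecycle
        self.components[component.name] = component

class AntennaComponent:
    """Functional code only; technical concerns come from the container."""
    name = "Antenna01"

    def initialize(self, container):
        self.log = container.get_logger(self.name)

    def point(self, az, el):
        self.log.info("pointing to az=%.1f el=%.1f", az, el)

container = Container("acs-node-1")
antenna = AntennaComponent()
container.activate(antenna)
antenna.point(120.0, 45.0)
```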

  13. A High Power Density Power System Electronics for NASA's Lunar Reconnaissance Orbiter

    NASA Technical Reports Server (NTRS)

    Hernandez-Pellerano, A.; Stone, R.; Travis, J.; Kercheval, B.; Alkire, G.; Ter-Minassian, V.

    2009-01-01

    A high power density, modular and state-of-the-art Power System Electronics (PSE) unit has been developed for the Lunar Reconnaissance Orbiter (LRO) mission. This paper addresses the hardware architecture and performance, the power handling capabilities, and the fabrication technology. The PSE was developed by NASA's Goddard Space Flight Center (GSFC) and is the central location for power handling and distribution on the LRO spacecraft. The PSE packaging design manages and distributes 2200 W of solar array input power in a volume of less than a cubic foot. The PSE architecture incorporates reliable standard internal and external communication buses, solid state circuit breakers and Li-Ion battery charge management. Although a single-string design, the PSE achieves high reliability by elegantly implementing functional redundancy and internal fault detection and correction. The PSE has been environmentally tested and delivered to the LRO spacecraft for flight Integration and Test. This modular design is scheduled to fly in early 2009 on board the LRO and Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft and is the baseline architecture for future NASA missions such as Global Precipitation Measurement (GPM) and Magnetospheric MultiScale (MMS).

  14. Description of the PMAD systems test bed facility and data system

    NASA Technical Reports Server (NTRS)

    Trase, Larry; Fong, Don; Adkins, Vicki; Birchenough, Arthur

    1992-01-01

    The power management and distribution (PMAD) systems test bed facility, including the power sources and loads available, is discussed, and the PMAD data system (PDS) is described. The PDS controls the test-bed facility hardware, and monitors and records the electric power system control data bus and external data. The PDS architecture is discussed, and each of the subsystems is described.

  15. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All of this monitoring information gathered for the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs and automated management of remote services among a large set of grid facilities.
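
    As a rough sketch of the decision-support pattern described here, the snippet below has a higher-level service consume near-real-time monitoring data and pick a facility for a new job; the metrics and thresholds are invented for the example and do not reflect MonALISA's actual services.

```python
# Toy decision-support service: consume monitoring data and pick the
# least-loaded facility that can hold the job's data. The metric names
# and values are invented for illustration.

monitoring = {
    "facilityA": {"cpu_load": 0.92, "free_storage_tb": 12, "link_up": True},
    "facilityB": {"cpu_load": 0.35, "free_storage_tb": 40, "link_up": True},
    "facilityC": {"cpu_load": 0.10, "free_storage_tb": 2,  "link_up": False},
}

def schedule(job_storage_tb):
    candidates = [
        (metrics["cpu_load"], name)
        for name, metrics in monitoring.items()
        if metrics["link_up"] and metrics["free_storage_tb"] >= job_storage_tb
    ]
    return min(candidates)[1] if candidates else None

print(schedule(job_storage_tb=5))   # -> facilityB
```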

  16. A Framework System for Intelligent Support in Open Distributed Learning Environments--A Look Back from 16 Years Later

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    2016-01-01

    The 1998 paper by Martin Mühlenbrock, Frank Tewissen, and myself introduced a multi-agent architecture and a component engineering approach for building open distributed learning environments to support group learning in different types of classroom settings. It took up prior work on "multiple student modeling" as a method to configure…

  17. Alternative Future Fleet Platform Architecture Study

    DTIC Science & Technology

    2016-10-27

    establishing sea control - projecting power - winning decisively. To accomplish these missions, the Navy Project Team derived a "Distributed ... allies and partners, and deter potential aggressors. The Distributed Fleet was further conceived to deliver decisive combat power, as part of a joint ... global information system - the information that rides on the servers, undersea cables, satellites, and wireless networks that increasingly envelop ...

  18. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  19. A failure management prototype: DR/Rx

    NASA Technical Reports Server (NTRS)

    Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.

    1991-01-01

    This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.

  20. A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1990-01-01

    We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
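
    Since the abstract leans on rate monotonic theory for its schedulability guarantee, a worked instance of that theory's classic sufficient test (the Liu-Layland utilization bound) may help; the task set below is invented for illustration.

```python
# Liu-Layland sufficient schedulability test for rate monotonic scheduling:
# a task set is schedulable if total utilization <= n * (2^(1/n) - 1).

def rm_schedulable(tasks):
    """tasks: list of (compute_time, period) pairs. Sufficient, not necessary."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three periodic tasks: (worst-case execution time, period), same time unit.
tasks = [(1, 8), (2, 16), (4, 32)]
print(rm_schedulable(tasks))   # utilization 0.375 <= bound ~0.7798 -> True
```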

  1. On-Board Fiber-Optic Network Architectures for Radar and Avionics Signal Distribution

    NASA Technical Reports Server (NTRS)

    Alam, Mohammad F.; Atiquzzaman, Mohammed; Duncan, Bradley B.; Nguyen, Hung; Kunath, Richard

    2000-01-01

    Continued progress in both civil and military avionics applications is overstressing the capabilities of existing radio-frequency (RF) communication networks based on coaxial cables on board modern aircraft. Future avionics systems will require high-bandwidth on-board communication links that are lightweight, immune to electromagnetic interference, and highly reliable. Fiber optic communication technology can meet all these challenges in a cost-effective manner. Recently, digital fiber-optic communication systems, where a fiber-optic network acts like a local area network (LAN) for digital data communications, have become a topic of extensive research and development. Although a fiber-optic system can be designed to transport radio-frequency (RF) signals, the digital fiber-optic systems under development today are not capable of transporting the microwave and millimeter-wave RF signals used in radar and avionics systems on board an aircraft. Recent advances in fiber optic technology, especially wavelength division multiplexing (WDM), have opened a number of possibilities for designing on-board fiber optic networks, including all-optical networks for radar and avionics RF signal distribution. In this paper, we investigate a number of different novel approaches for fiber-optic transmission of on-board VHF and UHF RF signals using commercial off-the-shelf (COTS) components. The relative merits and demerits of each architecture are discussed, and the suitability of each architecture for particular applications is pointed out. All-optical approaches show better performance than other traditional approaches in terms of signal-to-noise ratio, power consumption, and weight requirements.

  2. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. We present a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
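
    As a toy illustration of the distributed data-structure concept at the center of this abstraction, the sketch below partitions an array across nodes and memory kinds; the class and field names are invented and this is not the SharP library's API.

```python
# Toy distributed data structure: an array whose shards can be placed in
# different memory kinds on different nodes. Conceptual illustration only.

class ShardedArray:
    def __init__(self, placements):
        # placements: list of (node, memory_kind, capacity) per shard
        self.shards = [
            {"node": n, "memory": m, "data": [0] * c} for n, m, c in placements
        ]
        self.shard_size = placements[0][2]

    def __setitem__(self, i, value):
        shard = self.shards[i // self.shard_size]
        shard["data"][i % self.shard_size] = value

    def __getitem__(self, i):
        return self.shards[i // self.shard_size]["data"][i % self.shard_size]

arr = ShardedArray([("node0", "DRAM", 4), ("node1", "HBM", 4)])
arr[5] = 42                              # lands in node1's HBM-backed shard
print(arr[5], arr.shards[1]["memory"])   # -> 42 HBM
```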

  3. A Distributed Laboratory for Event-Driven Coastal Prediction and Hazard Planning

    NASA Astrophysics Data System (ADS)

    Bogden, P.; Allen, G.; MacLaren, J.; Creager, G. J.; Flournoy, L.; Sheng, Y. P.; Graber, H.; Graves, S.; Conover, H.; Luettich, R.; Perrie, W.; Ramakrishnan, L.; Reed, D. A.; Wang, H. V.

    2006-12-01

    The 2005 Atlantic hurricane season was the most active in recorded history. Collectively, 2005 hurricanes caused more than 2,280 deaths and record damages of over 100 billion dollars. Of the storms that made landfall, Dennis, Emily, Katrina, Rita, and Wilma caused most of the destruction. Accurate predictions of storm-driven surge, wave height, and inundation can save lives and help keep recovery costs down, provided the information gets to emergency response managers in time. The information must be available well in advance of landfall so that responders can weigh the costs of unnecessary evacuation against the costs of inadequate preparation. The SURA Coastal Ocean Observing and Prediction (SCOOP) Program is a multi-institution collaboration implementing a modular, distributed service-oriented architecture for real time prediction and visualization of the impacts of extreme atmospheric events. The modular infrastructure enables real-time prediction of multi- scale, multi-model, dynamic, data-driven applications. SURA institutions are working together to create a virtual and distributed laboratory integrating coastal models, simulation data, and observations with computational resources and high speed networks. The loosely coupled architecture allows teams of computer and coastal scientists at multiple institutions to innovate complex system components that are interconnected with relatively stable interfaces. The operational system standardizes at the interface level to enable substantial innovation by complementary communities of coastal and computer scientists. This architectural philosophy solves a long-standing problem associated with the transition from research to operations. The SCOOP Program thereby implements a prototype laboratory consistent with the vision of a national, multi-agency initiative called the Integrated Ocean Observing System (IOOS). Several service-oriented components of the SCOOP enterprise architecture have already been designed and implemented, including data archive and transport services, metadata registry and retrieval (catalog), resource management, and portal interfaces. SCOOP partners are integrating these at the service level and implementing reconfigurable workflows for several kinds of user scenarios, and are working with resource providers to prototype new policies and technologies for on-demand computing.

  4. Implementation of a Fully-Balanced Periodic Tridiagonal Solver on a Parallel Distributed Memory Architecture

    DTIC Science & Technology

    1994-05-01

    T. M. Eidson (High Technology Corporation, Hampton, VA 23665) and G. Erlebacher (Institute for Computer Applications in Science and Engineering), Contract NAS1-19480, May 1994. [Garbled OCR excerpt; recoverable text:] ... developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. ...

  5. Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization

    NASA Technical Reports Server (NTRS)

    Baker, Robert L.

    1993-01-01

    The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried out under NASA Langley Research Contract NAS1-17964, Task Assignment 28.

  6. A Comparative Study of the Traditional Houses Kaili and Bugis-Makassar in Indonesia

    NASA Astrophysics Data System (ADS)

    Suharto, M. F.; Kawet, R. S. S. I.; Tumanduk, M. S. S. S.

    2018-02-01

    In this study, I compared the physical elements of two Indonesian traditional houses, one of the Kaili tribe (Central Sulawesi) and one of the Bugis-Makassar tribe (South Sulawesi). Viewed in terms of name, meaning and function, the two traditional houses show similarities, both being known as the Souraja/Saoraja house (House of the King); observed in more detail, however, the physical elements of their architecture also show differences. The spatial, physical and stylistic systems (N. John Habraken's theory) were applied to analyze the differences and similarities of the physical elements of architecture in these two traditional houses. The analysis identified that physical elements such as the orientation, the function and distribution of rooms (the spatial system), the constructions and materials of floor, wall and roof (the physical system), and the opening types of the door and window, as well as the ornaments used, show similarities. Meanwhile, physical elements such as the arrangement of columns, form and spatial pattern as well as the placement of the stairs (the spatial system), the constructions and materials of foundation, column and beam (the physical system), and the form of the roof and façade, show differences between the two traditional houses.

  7. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  8. Decentralized operating procedures for orchestrating data and behavior across distributed military systems and assets

    NASA Astrophysics Data System (ADS)

    Peach, Nicholas

    2011-06-01

    In this paper, we present a method for a highly decentralized yet structured and flexible approach to achieve systems interoperability by orchestrating data and behavior across distributed military systems and assets with security considerations addressed from the beginning. We describe an architecture of a tool-based design of business processes called Decentralized Operating Procedures (DOP) and the deployment of DOPs onto run-time nodes, supporting the parallel execution of each DOP at multiple implementation nodes (fixed locations, vehicles, sensors and soldiers) throughout a battlefield to achieve flexible and reliable interoperability. The described method allows the architecture to: a) provide fine-grain control of the collection and delivery of data between systems; b) allow the definition of a DOP at a strategic (or doctrine) level by defining required system behavior through process syntax at an abstract level, agnostic of implementation details; c) deploy a DOP into heterogeneous environments by the nomination of actual system interfaces and roles at a tactical level; d) rapidly deploy new DOPs in support of new tactics and systems; e) support multiple instances of a DOP in support of multiple missions; f) dynamically add or remove run-time nodes from a specific DOP instance as mission requirements change; g) model the passage of, and business reasons for, the transmission of each data message to a specific DOP instance to support accreditation; h) run on low-powered computers with lightweight tactical messaging. This approach is designed to extend the capabilities of existing standards, such as the Generic Vehicle Architecture (GVA).

  9. Can diversity in root architecture explain plant water use efficiency? A modeling study

    PubMed Central

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-01-01

    Drought stress is a dominant constraint to crop production. Breeding crops with adapted root systems for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to complex interaction between root traits and high diversity of hydrological conditions, modeling provides important information for trait based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought or water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during the root system development. We therefore concluded that trait based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment. PMID:26412932

  10. Can diversity in root architecture explain plant water use efficiency? A modeling study.

    PubMed

    Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel

    2015-09-24

    Drought stress is a dominant constraint to crop production. Breeding crops with adapted root systems for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to complex interaction between root traits and high diversity of hydrological conditions, modeling provides important information for trait based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought or water uptake efficiency of root systems is a function of specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during the root system development. We therefore concluded that trait based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment.
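
    The experimental design, 48 root architectures crossed with 16 hydrological scenarios, can be pictured with the small sketch below; simulate_transpiration stands in for the coupled root-architecture/soil-hydrology model, which is not published as code here, so everything in the snippet is illustrative.

```python
# Illustrative scenario grid: rank root architectures per hydrological
# scenario. The simulator is a deterministic stand-in, not the real model.

import random

def simulate_transpiration(architecture, scenario):
    random.seed(architecture * 100 + scenario)   # reproducible stand-in
    return random.uniform(0.0, 1.0)              # cumulative transpiration proxy

architectures, scenarios = range(48), range(16)
best_per_scenario = {
    s: max(architectures, key=lambda a: simulate_transpiration(a, s))
    for s in scenarios
}
# If uptake efficiency depended on the root system alone, one architecture
# would win in every scenario; scenario-dependent winners match the paper's
# conclusion that adaptation is hydrology-specific.
print(best_per_scenario)
```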

  11. Multi-agent systems and their applications

    DOE PAGES

    Xie, Jing; Liu, Chen-Ching

    2017-07-14

    The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on the system architecture, consensus algorithm, and multi-agent platform, framework, and simulator. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  12. Multi-agent systems and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Jing; Liu, Chen-Ching

    The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on the system architecture, consensus algorithm, and multi-agent platform, framework, and simulator. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  13. Using heterogeneous wireless sensor networks in a telemonitoring system for healthcare.

    PubMed

    Corchado, Juan M; Bajo, Javier; Tapia, Dante I; Abraham, Ajith

    2010-03-01

    Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system aimed at improving healthcare and assistance to dependent people at their homes. The system implements a platform based on a service-oriented architecture, which allows heterogeneous wireless sensor networks to communicate in a distributed way, independent of time and location restrictions. This approach gives the system a greater ability to recover from errors and more flexibility to change its behavior at execution time. Preliminary results are presented in this paper.

  14. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1.) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2.) conducting research experiments using a larger database of organophosphate nerve agents, and (3.) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, EP-derived support vector machines designed to operate on distributed systems have provided accurate preliminary classification results. In addition, distributed training architectures are 50 times faster when compared to standard iterative training methods.

  15. Power Management and Distribution (PMAD) Model Development: Final Report

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    2011-01-01

    Power management and distribution (PMAD) models were developed in the early 1990's to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early 90's component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post 2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.

  16. Web-Based Distributed Simulation of Aeronautical Propulsion System

    NASA Technical Reports Server (NTRS)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view the Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed by JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  17. Executing CLIPS expert systems in a distributed environment

    NASA Technical Reports Server (NTRS)

    Taylor, James; Myers, Leonard

    1990-01-01

    This paper describes a framework for running cooperating agents in a distributed environment to support the Intelligent Computer Aided Design System (ICADS), a project in progress at the CAD Research Unit of the Design Institute at the California Polytechnic State University. Currently, the system aids an architectural designer in creating a floor plan that satisfies some general architectural constraints and project-specific requirements. At the core of ICADS is the Blackboard Control System. Connected to the blackboard are any number of domain experts called Intelligent Design Tools (IDT). The Blackboard Control System monitors the evolving design as it is being drawn and helps resolve conflicts among the domain experts. The user serves as a partner in this system by manipulating the floor plan in the CAD system and validating recommendations made by the domain experts. The primary components of the Blackboard Control System are two expert systems executed by a modified CLIPS shell. The first is the Message Handler. The second is the Conflict Resolver. The Conflict Resolver synthesizes the suggestions made by the domain experts, which can be either CLIPS expert systems or compiled C programs. In DEMO1, the current ICADS prototype, the CLIPS domain expert systems are Acoustics, Lighting, Structural, and Thermal; the compiled C domain experts are the CAD system and the User Interface.

  18. Archive Storage Media Alternatives.

    ERIC Educational Resources Information Center

    Ranade, Sanjay

    1990-01-01

    Reviews requirements for a data archive system and describes storage media alternatives that are currently available. Topics discussed include data storage; data distribution; hierarchical storage architecture, including inline storage, online storage, nearline storage, and offline storage; magnetic disks; optical disks; conventional magnetic…

  19. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.
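
    A minimal sketch of the attribute-based decision at the heart of a SAML/XACML-style system may help here: a policy decision point evaluates request attributes against policy rules. The rule structure and attribute names below are invented; Cardea's real policies are expressed in XACML, not Python.

```python
# Toy attribute-based access decision: first matching rule wins,
# default-deny otherwise. All names are illustrative.

def evaluate(request, policies):
    for rule in policies:
        if all(request.get(attr) == value for attr, value in rule["match"].items()):
            return rule["effect"]
    return "Deny"   # default-deny when no rule applies

policies = [
    {"match": {"role": "operator", "resource": "job-queue", "action": "submit"},
     "effect": "Permit"},
]
request = {"role": "operator", "resource": "job-queue", "action": "submit",
           "issuer": "example-idp"}   # attributes would arrive as SAML assertions
print(evaluate(request, policies))    # -> Permit
```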

  20. Cybersecurity Technology R&D | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Research and development (R&D) in cybersecurity is focused on distributed energy resources and control equipment. The team is focusing on integrity for command and control messages in transit to and from systems and control architectures. Moving Target Defense: in collaboration with Kansas State University ...

  1. Space station data system analysis/architecture study. Task 5: Program plan

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Cost estimates for both the on-board and ground segments of the Space Station Data System (SSDS) are presented along with summary program schedules. Advanced technology development recommendations are provided in the areas of distributed data base management, end-to-end protocols, command/resource management, and flight qualified artificial intelligence machines.

  2. Mobile-IT Education (MIT.EDU): M-Learning Applications for Classroom Settings

    ERIC Educational Resources Information Center

    Sung, M.; Gips, J.; Eagle, N.; Madan, A.; Caneel, R.; DeVaul, R.; Bonsen, J.; Pentland, A.

    2005-01-01

    In this paper, we describe the Mobile-IT Education (MIT.EDU) system, which demonstrates the potential of using a distributed mobile device architecture for rapid prototyping of wireless mobile multi-user applications for use in classroom settings. MIT.EDU is a stable, accessible system that combines inexpensive, commodity hardware, a flexible…

  3. A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.

    PubMed

    DeSouza, Guilherme N; Kak, Avinash C

    2004-10-01

    We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. At the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control arbitrator ranks the results of each loop according to confidence indices, which are derived solely from the sensory information. This architecture has clear advantages for overall system performance, which is not limited by the "slowest link," and for fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
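
    The arbitration step is simple to illustrate. In the following minimal Python sketch (hypothetical interfaces, not the authors' code), each control loop publishes a command with a confidence index derived from its sensing, and the arbitrator forwards the most confident command, so a slow loop never blocks a fast one.

      # Stand-in control arbitrator: rank loop outputs by confidence index.
      reports = [
          # (loop name, command, confidence in [0, 1])
          ("coarse_color_tracker", {"pan_deg": 14.0}, 0.55),
          ("stereo_servoing", {"pan_deg": 12.3, "tilt_deg": -2.1}, 0.91),
      ]

      def arbitrate(reports):
          # each loop runs at its own rate; the arbitrator only compares the
          # most recent report from every loop and keeps the most confident
          name, command, _confidence = max(reports, key=lambda r: r[2])
          return name, command

      print(arbitrate(reports))  # ('stereo_servoing', {'pan_deg': 12.3, ...})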

  4. Architecture for a 1-GHz Digital RADAR

    NASA Technical Reports Server (NTRS)

    Mallik, Udayan

    2011-01-01

    An architecture for a Direct RF-digitization Type Digital Mode RADAR was developed at GSFC in 2008. Two variations of a basic architecture were developed for use on RADAR imaging missions using aircraft and spacecraft. Both systems can operate with a pulse repetition rate up to 10 MHz with 8 received RF samples per pulse repetition interval, or at up to 19 kHz with 4K received RF samples per pulse repetition interval. The first design describes a computer architecture for a Continuous Mode RADAR transceiver with a real-time signal processing and display architecture; it can operate at a high pulse repetition rate indefinitely, without interruption. The second design describes a smaller and less costly burst-mode RADAR that can transceive high pulse repetition rate RF signals without interruption for up to 37 seconds, and was designed around an off-line signal processing paradigm. The temporal distribution of RF samples acquired and reported to the RADAR processor remains uniform and free of distortion in both proposed architectures. The majority of the RADAR's electronics are implemented in digital CMOS (complementary metal oxide semiconductor); analog circuits are restricted to signal amplification and analog-to-digital conversion. An implementation of the proposed systems will create a 1-GHz, Direct RF-digitization Type, L-Band Digital RADAR--the highest band achievable for Nyquist Rate, Direct RF-digitization Systems that do not implement an electronic IF downsample stage (after the receiver signal amplification stage), using commercially available off-the-shelf integrated circuits.

  5. Design and evaluation of cellular power converter architectures

    NASA Astrophysics Data System (ADS)

    Perreault, David John

    Power electronic technology plays an important role in many energy conversion and storage applications, including machine drives, power supplies, frequency changers, and UPS systems. Increases in performance and reductions in cost have been achieved through the development of higher performance power semiconductor devices and integrated control devices with increased functionality. Manufacturing techniques, however, have changed little. High power is typically achieved by paralleling multiple die in a single package, producing the physical equivalent of a single large device. Consequently, both the device package and the converter in which the device is used continue to require large, complex mechanical structures and relatively sophisticated heat transfer systems. An alternative to this approach is a cellular power converter architecture, based on the parallel connection of a large number of quasi-autonomous converters, called cells, each designed for a fraction of the system rating. The cell rating is chosen such that single-die devices in inexpensive packages can be used and the cell can be fabricated with an automated assembly process. The use of quasi-autonomous cells means that system performance is not compromised by the failure of a cell. This thesis explores the design of cellular converter architectures with the objective of achieving improvements in performance, reliability, and cost over conventional converter designs. New approaches are developed and experimentally verified for highly distributed control of cellular converters, including methods for ripple cancellation and current-sharing control. The performance of these techniques is quantified, and their dynamics are analyzed. Cell topologies suited to the cellular architecture are investigated, and their use for systems in the 5-500 kVA range is explored. The design, construction, and experimental evaluation of a 6 kW cellular switched-mode rectifier is also addressed. This cellular system implements entirely distributed control and achieves performance levels unattainable with an equivalent single converter. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
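
    The ripple-cancellation benefit of interleaving cells can be seen numerically. The numpy sketch below is a generic illustration (not the thesis code): N identical sawtooth ripple currents, phase-shifted by 1/N of the switching period, sum to roughly one cell's ripple, whereas in-phase cells multiply the ripple by N.

      import numpy as np

      t = np.linspace(0.0, 1.0, 10_000, endpoint=False)  # one switching period
      saw = lambda phase: (t + phase) % 1.0 - 0.5         # unit sawtooth ripple

      for n_cells in (1, 4, 16):
          in_phase = sum(saw(0.0) for _ in range(n_cells))
          interleaved = sum(saw(k / n_cells) for k in range(n_cells))
          print(n_cells, np.ptp(in_phase).round(2), np.ptp(interleaved).round(2))
      # in-phase ripple grows as N, while the interleaved ripple stays near one
      # cell's worth and its frequency is multiplied by N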

  6. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR onto distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
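
    The systems in question have a familiar structure; as a standard illustration (not specific to this paper's experiments), a separable elliptic discretization on a grid of m lines yields

      \begin{pmatrix}
        T & B      &        &   \\
        B & T      & \ddots &   \\
          & \ddots & \ddots & B \\
          &        & B      & T
      \end{pmatrix}
      \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{pmatrix}
      =
      \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_m \end{pmatrix},

    where each u_j collects the unknowns of one grid line, T is tridiagonal, and B is diagonal (B = -I for the Poisson equation). BCR recursively eliminates the odd-numbered block rows, halving the number of block equations at each of roughly log2(m) steps.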

  8. From Experiments to Simulations: Downscaling Measurements of Na+ Distribution at the Root-Soil Interface

    NASA Astrophysics Data System (ADS)

    Perelman, A.; Guerra, H. J.; Pohlmeier, A. J.; Vanderborght, J.; Lazarovitch, N.

    2017-12-01

    When salinity increases beyond a certain threshold, crop yield will decrease at a fixed rate, according to the Maas and Hoffman model (1976). Thus, it is highly important to predict salinization and its impact on crops. Current models do not consider the impact of the transpiration rate on plant salt tolerance, although it affects plant water uptake and thus salt accumulation around the roots, consequently influencing the plant's sensitivity to salinity. Better model parametrization can improve the prediction of real salinity effects on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to examine how this distribution is affected by the transpiration rate and plant water uptake. Results from tomato plants that were grown on rhizoslides (a capillary paper growth system) showed that the Na+ concentration was higher at the root-substrate interface than in the bulk. Also, Na+ accumulation around the roots decreased under a low transpiration rate, supporting our hypothesis. The rhizoslides enabled the root growth rate and architecture to be studied under different salinity levels. The root system architecture was retrieved from photos taken during the experiment, enabling us to incorporate real root systems into a simulation. Magnetic resonance imaging (MRI) was used to observe correlations between root system architectures and Na+ distribution. The MRI provided fine resolution of the Na+ accumulation around a single root without disturbing the root system. With time, Na+ accumulated only where roots were found in the soil and later around specific roots. Rhizoslides allow the root systems of larger plants to be investigated, but this method is limited by the medium (paper) and the dimension (2D). The MRI can create a 3D image of Na+ accumulation in soil on a microscopic scale. These data are being used for model calibration, which is expected to enable the prediction of root water uptake in saline soils for different climatic conditions and different soil water availabilities.

  9. The SysMan monitoring service and its management environment

    NASA Astrophysics Data System (ADS)

    Debski, Andrzej; Janas, Ekkehard

    1996-06-01

    Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is equally evident that effective management should integrate network management, system management and application management in a uniform way. The object-oriented OSI management architecture, with its four basic modelling concepts (information, organization, communication and functional models), together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the concepts and implementation results of an object-oriented management toolset developed within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services, including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service, which incorporates the key architectural entity responsible for event management. Its architecture and building components, especially filters, are presented in detail.
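
    The filter mechanism at the heart of such a monitoring service can be sketched compactly. The Python below is a hypothetical illustration of predicate-based event filtering, not SysMan's actual interfaces.

      # Stand-in monitoring service: filters subscribe with predicates, and
      # only matching events are forwarded to management components.
      class MonitoringService:
          def __init__(self):
              self.filters = []  # (predicate, handler) pairs

          def subscribe(self, predicate, handler):
              self.filters.append((predicate, handler))

          def emit(self, event):
              # condense the raw event stream: deliver only matching events
              for predicate, handler in self.filters:
                  if predicate(event):
                      handler(event)

      svc = MonitoringService()
      svc.subscribe(lambda e: e["severity"] >= 3,
                    lambda e: print("ALARM:", e["source"], e["msg"]))
      svc.emit({"severity": 1, "source": "node7", "msg": "heartbeat"})  # dropped
      svc.emit({"severity": 4, "source": "node7", "msg": "link down"})  # delivered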

  10. Utilization of Internet Protocol-Based Voice Systems in Remote Payload Operations

    NASA Technical Reports Server (NTRS)

    Best, Susan; Nichols, Kelvin; Bradford, Robert

    2003-01-01

    This viewgraph presentation provides an overview of a proposed voice communication system for use in remote payload operations performed on the International Space Station. The system, the Internet Voice Distribution System (IVoDS), would make use of existing Internet protocols and offer a number of advantages over the system currently in use. Topics covered include: system description and operation, system software and hardware, system architecture, project status, and technology transfer applications.

  11. Decentralized and Modular Electrical Architecture

    NASA Astrophysics Data System (ADS)

    Elisabelar, Christian; Lebaratoux, Laurence

    2014-08-01

    This paper presents the studies made on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC), and standard input-output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can instead be dispatched throughout the satellite by using MicroRTUs. CNES proposes a similar approach to the MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture is built from several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very small volume, low consumption and low cost. The standard BTCU models are developed and qualified in a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for PYRO actuations and analogue acquisitions. The BTCU is designed to be easily adaptable to all types of electrical interface needs. Extension of this concept is envisaged for the power conditioning and distribution unit, and a modular PCDU based on the BRIO concept is proposed.

  12. Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Magee, Jeff; Moffett, Jonathan

    1996-06-01

    Special Issue on Management. This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, SysMan and IDSM, and more generally to bring together work in the area of distributed systems management. It focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions, and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed were of considerable interest to both industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop.

    A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems through automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue.

    To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy-directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system, in terms of its constituent components and their interconnections, can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule-based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform', reports on the experience of using a policy-directed approach in the industrial-strength TeMIP management platform.

    Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must condense and summarize the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture.

    The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, is coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.
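
    To give a concrete flavor of the event-triggered policy rules several of these papers describe, here is a minimal, hypothetical event-condition-action sketch in Python; none of the cited systems use this code.

      # Hypothetical event-condition-action (ECA) rule for policy enforcement.
      rules = [
          {   # on a load report, if a node is overloaded, migrate work away
              "event": "load_report",
              "condition": lambda e: e["cpu"] > 0.9,
              "action": lambda e: print(f"migrate jobs off {e['node']}"),
          },
      ]

      def on_event(event):
          for rule in rules:
              if event["type"] == rule["event"] and rule["condition"](event):
                  rule["action"](event)

      on_event({"type": "load_report", "node": "n3", "cpu": 0.95})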

  13. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that delivers high speed with zero static power dissipation. RQL uses an AC power supply as input rather than a DC supply, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: distributing the AC power supply requires splitters, which occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the words are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
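
    To make the distributed-arithmetic idea concrete, here is a minimal Python sketch (illustrative only, restricted to unsigned inputs; the hardware described above operates on signed words): the constant coefficients are folded into a lookup table, and the variable inputs are consumed one bit-slice at a time with shift-and-accumulate.

      def da_dot(coeffs, xs, bits=8):
          # Distributed arithmetic: precompute a LUT over all subsets of the
          # constant coefficients, then scan the variable inputs bit-plane by
          # bit-plane, replacing multipliers with table lookups and shifts.
          k_terms = len(coeffs)
          lut = [sum(c for k, c in enumerate(coeffs) if (p >> k) & 1)
                 for p in range(1 << k_terms)]
          acc = 0
          for j in range(bits):                  # one bit position per "cycle"
              pattern = 0
              for k, x in enumerate(xs):         # gather bit j of every input
                  pattern |= ((x >> j) & 1) << k
              acc += lut[pattern] << j           # shift-accumulate
          return acc

      # sanity check against a direct multiply-accumulate
      coeffs, xs = [3, 5, 7, 2], [12, 200, 33, 91]
      assert da_dot(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))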

  14. Authenticating cache

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler Barratt; Urrea, Jorge Mario

    2012-06-01

    The aim of the Authenticating Cache architecture is to ensure that machine instructions in a Read Only Memory (ROM) are legitimate from the time the ROM image is signed (immediately after compilation) to the time they are placed in the cache for the processor to consume. The proposed architecture allows the detection of ROM image modifications during distribution or when the image is loaded into memory. It also ensures that modified instructions will not execute in the processor, as the cache will not be loaded with a page that fails an integrity check. The authenticity of the instruction stream can also be verified in this architecture. The combination of integrity and authenticity assurance greatly improves the security profile of a system.
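
    The admit-only-verified-pages behavior can be sketched briefly. The Python below uses an HMAC as a stand-in for the paper's signing scheme; the key, page format and interfaces are hypothetical.

      import hashlib
      import hmac

      KEY = b"device-provisioned-secret"  # assumed shared at signing time

      def sign_page(page: bytes) -> bytes:
          return hmac.new(KEY, page, hashlib.sha256).digest()

      class AuthenticatingCache:
          def __init__(self):
              self.lines = {}

          def load(self, addr: int, page: bytes, tag: bytes) -> bool:
              # refuse to cache any page that fails the integrity check
              if not hmac.compare_digest(sign_page(page), tag):
                  return False
              self.lines[addr] = page
              return True

      cache = AuthenticatingCache()
      page = b"\x90\x90\xc3"  # stand-in bytes for machine instructions
      assert cache.load(0x1000, page, sign_page(page))   # verified page admitted
      assert not cache.load(0x2000, page, b"\x00" * 32)  # tampered tag rejected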

  15. Design challenges and methodology for developing new integrated circuits for the robotics exploration of the solar system

    NASA Technical Reports Server (NTRS)

    Mojarradi, Mohammad M.; Kolawa, Elizabeth; Blalock, Benjamin; Johnson, R. Wayne

    2005-01-01

    Next generation space-based robotics systems will be constructed using distributed architectures in which electronics capable of working in the extreme environments of the planets of the solar system are integrated with the sensors and actuators in plug-and-play modules and are connected through common, multiple redundant data and power buses.

  16. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality, and the network provides the communication through which the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both autonomy and cooperation between nodes, are developed. The requirements for time-critical performance, reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered in order to arrive at a feasible, maximum-performance design. Reliability of the network system is also considered; a parallel multipath bus structure is proposed to control delivery time for time-critical messages. The architecture also supports immediate recovery of the time-critical message system after a communication failure.

  17. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers that manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure that partners can work together effectively and to provide coherent services to users.

  18. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both a single-user and a collaborative distributed environment. The system architecture employs a client-server model, and visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns, and all are based on the functionality of a small set of interfaces, which allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering), and a middle tier manages communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization, including a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager, which keeps track of all registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically: if a new component is added that supports the IMaterial interface, any instance of it can be used in the GUI components that work with this interface. One of the main features is an interactive shader designer, which allows rapid prototyping of new shader-based visualization renderings and greatly accelerates the development and debug cycle.
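
    The interface-driven registry described above can be illustrated with a short sketch. The names IMaterial and AssetManager appear in the abstract, but the Python interfaces below are hypothetical stand-ins for the framework's components.

      from abc import ABC, abstractmethod

      class IMaterial(ABC):
          @abstractmethod
          def shade(self) -> str: ...

      class AssetManager:
          # keeps track of registered proxies and answers interface queries
          def __init__(self):
              self._proxies = []

          def register(self, proxy):
              self._proxies.append(proxy)

          def query(self, interface):
              # GUI components populate themselves from queries like this one
              return [p for p in self._proxies if isinstance(p, interface)]

      class PhongMaterial(IMaterial):  # assumed concrete implementation
          def shade(self) -> str:
              return "phong"

      manager = AssetManager()
      manager.register(PhongMaterial())
      print([type(p).__name__ for p in manager.query(IMaterial)])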

  19. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved in MATLAB. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
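
    As an illustrative sketch (not this paper's exact conditions), the per-node design reduces to a standard LMI feasibility problem after the usual change of variables X_i = P_i^{-1}, Y_i = K_i X_i:

      A_i X_i + X_i A_i^{\top} + B_i Y_i + Y_i^{\top} B_i^{\top} \prec -\alpha I,
      \qquad X_i = X_i^{\top} \succ 0,

    where (A_i, B_i) are node i's linear dynamics, the local gain is recovered as K_i = Y_i X_i^{-1}, and the margin \alpha > 0 is sized to dominate a bound on the uncertain nonlinear couplings. Because each node solves only its own LMI, the workload can be shared among processors at the networked nodes.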

  20. Optimal causal inference: estimating stored information and approximating causal architecture.

    PubMed

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
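
    As an illustrative sketch consistent with this line of work, the trade-off can be written as an information-bottleneck-style objective over assignments of pasts to model states:

      \min_{p(s \mid \overleftarrow{x})} \; \Big( I[\overleftarrow{X}; S] - \lambda\, I[S; \overrightarrow{X}] \Big),

    where \overleftarrow{X} is the observed past, \overrightarrow{X} the future, S the model states, and \lambda controls the model-complexity constraint; as the constraint is relaxed, the optimal partition approaches the causal states, and I[\overleftarrow{X}; S] then estimates the stored historical information.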
