Shift changes, updates, and the on-call architecture in space shuttle mission control
NASA Technical Reports Server (NTRS)
Patterson, E. S.; Woods, D. D.
2001-01-01
In domains such as nuclear power, industrial process control, and space shuttle mission control, there is increased interest in reducing personnel during nominal operations. An essential element in maintaining safe operations in high risk environments with this 'on-call' organizational architecture is to understand how to bring called-in practitioners up to speed quickly during escalating situations. Targeted field observations were conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, at the NASA Johnson Space Center were observed during the STS-76 Space Shuttle mission. The findings from this observational study highlight the importance of prior knowledge in the updates and demonstrate how missing updates can leave flight controllers vulnerable to being unprepared. Implications for mitigating risk in the transition to 'on-call' architectures are discussed.
Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés
2015-02-25
This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the benefit of jointly using QoS and QoC parameters in distributed control systems.
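For illustration only, a minimal Python sketch (not the authors' FSACtrl code; the decision rule, thresholds, and sensor model are assumptions) of how an event-based controller might combine a QoS measure (message latency) with a QoC measure (tracking error) to decide whether to publish a new command, using a Braitenberg vehicle 2b as the plant:

    import random

    def braitenberg_2b(left_light, right_light):
        """Crossed excitatory connections: the left sensor drives the right wheel."""
        return 0.2 + 0.8 * right_light, 0.2 + 0.8 * left_light  # (v_left, v_right)

    def should_fire_event(qoc_error, qos_latency, err_thresh=0.05, lat_budget=0.020):
        """Event-based control: only send an actuation message when the control
        error is significant AND the network is healthy enough to deliver it."""
        return qoc_error > err_thresh and qos_latency < lat_budget

    for step in range(5):
        left, right = random.random(), random.random()    # simulated light sensors
        v_l, v_r = braitenberg_2b(left, right)
        qoc_error = abs(left - right)                      # crude quality-of-control proxy
        qos_latency = random.uniform(0.001, 0.030)         # measured message latency (s)
        if should_fire_event(qoc_error, qos_latency):
            print(f"step {step}: publish wheel speeds ({v_l:.2f}, {v_r:.2f})")
        else:
            print(f"step {step}: hold last command (error {qoc_error:.2f}, latency {qos_latency*1000:.1f} ms)")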
Comparing architectural solutions of IPT application SDKs utilizing H.323 and SIP
NASA Astrophysics Data System (ADS)
Keskinarkaus, Anja; Korhonen, Jani; Ohtonen, Timo; Kilpelanaho, Vesa; Koskinen, Esa; Sauvola, Jaakko J.
2001-07-01
This paper presents two approaches to efficient service development for Internet Telephony. The first approach considers services ranging from core call-signaling features and media control, as specified in ITU-T's H.323, to end-user services that support user interaction. The second approach supports the IETF's SIP protocol. We compare the two from differing architectural perspectives and from the standpoint of economical network and terminal development, and propose efficient architecture models for both protocols. The main design criteria were component independence, lightweight operation, and portability in heterogeneous end-to-end environments. In the proposed architecture, the vertical division of call-signaling and streaming-media control logic allows the components to be used either individually or combined, depending on the level of functionality required by an application.
Leveraging Executable Architectures in a Joint Environment
2009-01-01
Excerpt (capability table): support of Type 2/3 terminal attack control; call the Wing Operations Center (WOC) to task on-call aircraft; call the Air Command and Control Agency (ACCA); MIDS; legend: X = existing capability, P1 = partial (requires voice acknowledgement), P2 = partial (only some F/A-18s), P3 = remarks only; target location.
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play" environment for system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment that scales the control needs by providing a self-contained computing, data-logging, and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.
Architecture of conference control functions
NASA Astrophysics Data System (ADS)
Kausar, Nadia; Crowcroft, Jon
1999-11-01
Conference control is an integral part of many-to-many communications that is used to manage and coordinate multiple users in conferences. Different types of conferences require different types of control. Some features of conference control may be user-invoked, while others serve the internal management of a conference. In recent years, the ITU (International Telecommunication Union) and the IETF (Internet Engineering Task Force) have standardized two main models of conferencing, each providing a set of conference control functionalities that are not easily provided in the other. This paper analyzes the main activities appropriate for different types of conferences and presents an architecture for conference control called GCCP (Generic Conference Control Protocol). GCCP interworks different types of conferencing and provides a set of conference control functions that can be invoked by users directly. As an example of interworking, the interoperation of IETF's SIP and ITU's H.323 call control functions is examined here. This paper shows that a careful analysis of a conferencing architecture can yield a set of control functions essential for any group communication model, extensible if needed.
NASA Astrophysics Data System (ADS)
Hirono, Masahiko; Nojima, Toshio
This paper presents a new signaling architecture for radio-access control in wireless communications systems. Called THREP (for THREe-phase link set-up Process), it enables systems with low-cost configurations to provide tetherless access and wide-ranging mobility by using autonomous radio-link controls for fast cell searching and distributed call management. A signaling architecture generally consists of a radio-access part and a service-entity-access part. In THREP, the latter part is divided into two steps: preparing a communication channel, and sustaining it. Access control in THREP is thus composed of three separated parts, or protocol phases. The specifications of each phase are determined independently according to system requirements. In the proposed architecture, the first phase uses autonomous radio-link control because we want to construct low-power indoor wireless communications systems. Evaluation of channel usage efficiency and hand-over loss probability in the personal handy-phone system (PHS) shows that THREP makes the radio-access sub-system operations in a practical application model highly efficient, and the results of a field experiment show that THREP provides sufficient protection against severe fast CNR degradation in practical indoor propagation environments.
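As a purely illustrative aid (the phase names, events, and transitions below are assumptions, not the THREP specification), the three-phase separation described above can be pictured as a small Python state machine in which each phase is specified independently:

    from enum import Enum, auto

    class Phase(Enum):
        RADIO_LINK = auto()       # autonomous cell search and radio-link set-up
        CHANNEL_PREP = auto()     # prepare a communication channel with the service entity
        CHANNEL_SUSTAIN = auto()  # sustain the channel (hand-over, CNR monitoring)

    def next_phase(phase, event):
        """Advance the three-phase link set-up; each phase's rules are independent."""
        transitions = {
            (Phase.RADIO_LINK, "cell_acquired"): Phase.CHANNEL_PREP,
            (Phase.CHANNEL_PREP, "channel_ready"): Phase.CHANNEL_SUSTAIN,
            (Phase.CHANNEL_SUSTAIN, "link_lost"): Phase.RADIO_LINK,  # fast re-acquisition
        }
        return transitions.get((phase, event), phase)

    phase = Phase.RADIO_LINK
    for event in ["cell_acquired", "channel_ready", "link_lost"]:
        phase = next_phase(phase, event)
        print(event, "->", phase.name)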
Multi-Agent Architecture with Support to Quality of Service and Quality of Control
NASA Astrophysics Data System (ADS)
Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique
Multi-Agent Systems (MAS) are one of the most suitable frameworks for implementing intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity inherent in cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly used to evaluate the efficiency of the communications and of the control loop. Agents can use these quality measures to take a wide range of decisions, such as selecting a suitable placement on a control node or changing the workload to save energy. This article describes the architecture of a multi-agent system that provides support for QoS and QoC parameters to optimize the system. The architecture uses a Publish-Subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the Publish-Subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.
Reliability Engineering for Service Oriented Architectures
2013-02-01
Glossary excerpt: Common Object Request Broker Architecture; Ecosystem (in software, an ecosystem is a set of applications and/or services that gradually build up over time); Enterprise Service Bus; Foreign (in an SOA context: any SOA, service, or software which the owners of the calling software do not have control of); SOA (Service Oriented Architecture); SRE (Software Reliability Engineering); System Mode (many systems exhibit different modes of operation, e.g. the cockpit ...).
Bio-inspired adaptive feedback error learning architecture for motor control.
Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo
2012-10-01
This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this representation to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term makes it possible to use the proposed architecture even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller, and we evaluate how the scheme scales to simulated plants with many degrees of freedom (7 DOF).
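A toy Python sketch of the feedback-error-learning idea behind this scheme (the scalar plant, gains, and learning rule here are invented stand-ins for the LWPR/cerebellar modules): a low-gain feedback term corrects the tracking error, and that same feedback command trains a feedforward model so the corrective term shrinks over time.

    # Plant: y = a * u with unknown gain a; goal: track y_ref each step.
    a_true = 2.0
    w = 0.0            # learned inverse-model weight (stand-in for the learned kernel)
    kp = 0.3           # low-gain feedback (the "adaptive error feedback term")
    eta = 0.1          # learning rate

    for k in range(30):
        y_ref = 1.0
        u_ff = w * y_ref                 # feedforward command from the learned model
        y = a_true * u_ff                # plant response to feedforward alone
        u_fb = kp * (y_ref - y)          # corrective feedback term
        y = a_true * (u_ff + u_fb)       # actual response with correction
        # Feedback-error learning: the feedback command itself is the teaching signal.
        w += eta * u_fb * y_ref
        if k % 10 == 0:
            print(f"k={k:2d}  u_fb={u_fb:+.3f}  tracking error={y_ref - y:+.3f}")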
2015-09-01
Contents excerpt: Gateway; Voice Packet Flow: SIP, Session Description Protocol (SDP), and RTP; Voice Data Analysis; Call Analysis; Call Metrics. Excerpt: the analysis processing is designed for a general VoIP system architecture based on the Session Initiation Protocol (SIP) for negotiating call sessions; the phone employs the Skinny Client Control Protocol for network communication with the local CallManager (e.g., for each dialed digit) ...
Metrics of a Paradigm for Intelligent Control
NASA Technical Reports Server (NTRS)
Hexmoor, Henry
1999-01-01
We present metrics for quantifying organizational structures of complex control systems intended for controlling long-lived robotic or other autonomous systems commonly found in space applications. Such advanced control systems are often called integration platforms or agent architectures. The reported metrics span concerns about time, resources, software engineering, and complexities in the world.
NASA Technical Reports Server (NTRS)
Klarer, Paul
1993-01-01
An approach for a robotic control system which implements so-called 'behavioral' control within a realtime multitasking architecture is proposed. The proposed system would attempt to ameliorate some of the problems noted by some researchers when implementing subsumptive or behavioral control systems, particularly with regard to multiple-processor systems and realtime operations. The architecture is designed to allow synchronous operations between various behavior modules by taking advantage of a realtime multitasking system's intertask communication channels, and by implementing each behavior module and each interconnection node as a stand-alone task. The potential advantages of this approach over those previously described in the field are discussed. An implementation of the architecture is planned for a prototype Robotic All Terrain Lunar Exploration Rover (RATLER) currently under development and is briefly described.
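A minimal Python sketch of the idea (threads and queues standing in for a realtime kernel's tasks and intertask channels; the behavior names and suppression rule are illustrative assumptions): behavior modules and an interconnection node each run as their own task and communicate only through messages.

    import queue, threading, time

    def avoid_obstacles(out_q):
        for _ in range(3):
            out_q.put(("avoid", "turn_left"))
            time.sleep(0.01)

    def seek_goal(out_q):
        for _ in range(3):
            out_q.put(("seek", "go_forward"))
            time.sleep(0.01)

    def arbiter(in_q, stop):
        """Interconnection node: a stand-alone task that merges behavior outputs,
        letting 'avoid' suppress 'seek' (subsumption-style priority)."""
        last = {}
        while not stop.is_set() or not in_q.empty():
            try:
                name, cmd = in_q.get(timeout=0.05)
            except queue.Empty:
                continue
            last[name] = cmd
            print("actuator command:", last.get("avoid", last.get("seek")))

    bus = queue.Queue()
    stop = threading.Event()
    arb = threading.Thread(target=arbiter, args=(bus, stop))
    arb.start()
    workers = [threading.Thread(target=f, args=(bus,)) for f in (avoid_obstacles, seek_goal)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    stop.set()
    arb.join()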
Knowledge-based processing for aircraft flight control
NASA Technical Reports Server (NTRS)
Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul
1994-01-01
This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought that employ the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common across a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.
Artificial Intelligence for Controlling Robotic Aircraft
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje
2005-01-01
A document consisting mostly of lecture slides presents overviews of artificial-intelligence-based control methods now under development for application to robotic aircraft [called Unmanned Aerial Vehicles (UAVs) in the paper] and spacecraft and to the next generation of flight controllers for piloted aircraft. Following brief introductory remarks, the paper presents background information on intelligent control, including basic characteristics defining intelligent systems and intelligent control and the concept of levels of intelligent control. Next, the paper addresses several concepts in intelligent flight control. The document ends with some concluding remarks, including statements to the effect that (1) intelligent control architectures can guarantee stability of inner control loops and (2) for UAVs, intelligent control provides a robust way to accommodate an outer-loop control architecture for planning and/or related purposes.
A context management system for a cost-efficient smart home platform
NASA Astrophysics Data System (ADS)
Schneider, J.; Klein, A.; Mannweiler, C.; Schotten, H. D.
2012-09-01
This paper presents an overview of state-of-the-art architectures for integrating wireless sensor and actuator networks into the Future Internet. Furthermore, we address the advantages and disadvantages of the different architectures. With respect to these criteria, we develop a new architecture overcoming these weaknesses. Our system, called the Smart Home Context Management System, will be used for intelligent home utilities, appliances, and electronics, and includes physical, logical, as well as network context sources within one concept. It considers important aspects and requirements of modern context management systems for smart X applications: plug-and-play as well as plug-and-trust capabilities, scalability, extensibility, security, and adaptability. As such, it is able to control roller blinds and heating systems as well as learn, for example, the user's taste with respect to home entertainment (music, videos, etc.). Moreover, Smart Grid applications and Ambient Assisted Living (AAL) functions are applicable. With respect to AAL, we included an Emergency Handling function. It assures that emergency calls (police, ambulance, or fire department) are processed appropriately. Our concept is based on a centralized Context Broker architecture, enhanced by a distributed Context Broker system. The goal of this concept is to develop a simple, low-priced, multi-functional, and safe architecture affordable for everybody. Individual components of the architecture are well tested. Implementation and testing of the architecture as a whole is in progress.
Acquisition of Autonomous Behaviors by Robotic Assistants
NASA Technical Reports Server (NTRS)
Peters, R. A., II; Sarkar, N.; Bodenheimer, R. E.; Brown, E.; Campbell, C.; Hambuchen, K.; Johnson, C.; Koku, A. B.; Nilas, P.; Peng, J.
2005-01-01
Our research achievements under the NASA-JSC grant contributed significantly in the following areas. Multi-agent-based robot control architecture, called the Intelligent Machine Architecture (IMA): the Vanderbilt team received a Space Act Award for this research from NASA JSC in October 2004. Cognitive Control and the Self Agent: cognitive control in humans is the ability to consciously manipulate thoughts and behaviors using attention to deal with conflicting goals and demands. We have been updating the IMA Self Agent toward this goal. If the opportunity arises, we would like to work with NASA to empower Robonaut to perform cognitive control. Applications: (1) SES for Robonaut, (2) Robonaut Fault Diagnostic System, (3) ISAC Behavior Generation and Learning, (4) Segway Research.
Quantum error correction in crossbar architectures
NASA Astrophysics Data System (ADS)
Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie
2018-07-01
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
An Autonomous Autopilot Control System Design for Small-Scale UAVs
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Pai, Ganeshmadhav J.; Denney, Ewen W.
2012-01-01
This paper describes the design and implementation of a fully autonomous and programmable autopilot system for small scale autonomous unmanned aerial vehicle (UAV) aircraft. This system was implemented in Reflection and has flown on the Exploration Aerial Vehicle (EAV) platform at NASA Ames Research Center, currently only as a safety backup for an experimental autopilot. The EAV and ground station are built on a component-based architecture called the Reflection Architecture. The Reflection Architecture is a prototype for a real-time embedded plug-and-play avionics system architecture which provides a transport layer for real-time communications between hardware and software components, allowing each component to focus solely on its implementation. The autopilot module described here, although developed in Reflection, contains no design elements dependent on this architecture.
Actuated Hybrid Mirrors for Space Telescopes
NASA Technical Reports Server (NTRS)
Hickey, Gregory; Ealey, Mark; Redding, David
2010-01-01
This paper describes new, large, ultra-lightweight, replicated, actively controlled mirrors, for use in space telescopes. These mirrors utilize SiC substrates, with embedded solid-state actuators, bonded to Nanolaminate metal foil reflective surfaces. Called Actuated Hybrid Mirrors (AHMs), they use replication techniques for high optical quality as well as rapid, low cost manufacturing. They enable an Active Optics space telescope architecture that uses periodic image-based wavefront sensing and control to assure diffraction-limited performance, while relaxing optical system fabrication, integration and test requirements. The proposed International Space Station Observatory seeks to demonstrate this architecture in space.
Manipulator control and mechanization: A telerobot subsystem
NASA Technical Reports Server (NTRS)
Hayati, S.; Wilcox, B.
1987-01-01
The short- and long-term autonomous robot control activities in the Robotics and Teleoperators Research Group at the Jet Propulsion Laboratory (JPL) are described. This group is one of several involved in robotics and is an integral part of a new NASA robotics initiative called the Telerobot program. A description of the architecture, hardware and software, and the research direction in manipulator control is given.
2008-09-01
Excerpt: communication is by telephone, conference calls, emails, alert notifications, and BlackBerry. The RDTSF holds conference calls with its stakeholders to provide routine ... tunnels are monitored by CCTV cameras with live feeds to WMATA's Operations Control Center (OCC) to detect unauthorized entry into areas not intended for ... messages are sent by email, BlackBerry, and phone to the Security Coordinators. Dissemination of classified information, however, is generally handled through the ...
Decentralized and Modular Electrical Architecture
NASA Astrophysics Data System (ADS)
Elisabelar, Christian; Lebaratoux, Laurence
2014-08-01
This paper presents studies on the definition and design of a decentralized and modular electrical architecture that can be used for power distribution, active thermal control (ATC), and standard input-output electrical interfaces. Traditionally implemented inside a central unit such as an OBC or RTU, these interfaces can be dispatched around the satellite by using MicroRTUs. CNES proposes a similar approach to the MicroRTU. The system is based on a bus called BRIO (Bus Réparti des IO), which is composed of a power bus and an RS485 digital bus. The BRIO architecture is made up of several miniature terminals called BTCUs (BRIO Terminal Control Units) distributed in the spacecraft. The challenge was to design and develop the BTCU with very little volume, low consumption, and low cost. The standard BTCU models are developed and qualified with a configuration dedicated to ATC, while the first flight model will fly on MICROSCOPE for PYRO actuations and analogue acquisitions. The design of the BTCU is made to be easily adaptable to all types of electrical interface needs. Extension of this concept is envisaged for power conditioning and distribution units, and a Modular PCDU based on the BRIO concept is proposed.
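For a sense of what a compact command exchange over such an RS485 digital bus might look like, here is a rough Python sketch; the frame layout, addresses, command codes, and checksum are assumptions for illustration, not the BRIO specification.

    import struct

    def build_frame(addr, cmd, value):
        """addr: terminal address (0-255); cmd: assumed codes, e.g. 0 = read analogue,
        1 = set heater line; value: 16-bit payload. Last byte is a simple XOR checksum."""
        body = struct.pack(">BBH", addr, cmd, value)
        checksum = 0
        for b in body:
            checksum ^= b
        return body + bytes([checksum])

    def parse_frame(frame):
        addr, cmd, value = struct.unpack(">BBH", frame[:4])
        checksum = 0
        for b in frame[:4]:
            checksum ^= b
        if checksum != frame[4]:
            raise ValueError("checksum mismatch")
        return addr, cmd, value

    frame = build_frame(addr=0x12, cmd=1, value=1)   # e.g. switch heater line 1 on
    print(frame.hex(), "->", parse_frame(frame))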
A safety-based decision making architecture for autonomous systems
NASA Technical Reports Server (NTRS)
Musto, Joseph C.; Lauderbaugh, L. K.
1991-01-01
Engineering systems designed specifically for space applications often exhibit a high level of autonomy in the control and decision-making architecture. As the level of autonomy increases, more emphasis must be placed on assimilating the safety functions normally executed at the hardware level or by human supervisors into the control architecture of the system. The development of a decision-making structure which utilizes information on system safety is detailed. A quantitative measure of system safety, called the safety self-information, is defined. This measure is analogous to the reliability self-information defined by McInroy and Saridis, but includes weighting of task constraints to provide a measure of both reliability and cost. An example is presented in which the safety self-information is used as a decision criterion in a mobile robot controller. The safety self-information is shown to be consistent with the entropy-based Theory of Intelligent Machines defined by Saridis.
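A small illustrative computation of a self-information-style safety measure, by analogy with reliability self-information: the less probable safe execution is, the more "safety information" is at stake. The exact weighting and the numbers below are assumptions; the abstract only states that task constraints weight the measure.

    import math

    def safety_self_information(p_safe, constraint_weights):
        """-log of the probability of safe execution, weighted per task constraint.
        p_safe: dict constraint -> probability the constraint is satisfied.
        constraint_weights: dict constraint -> relative cost of violating it."""
        return sum(-w * math.log(p_safe[c]) for c, w in constraint_weights.items())

    # Candidate actions for a mobile robot controller: pick the one with the
    # smallest weighted safety self-information (i.e. the "safest" plan).
    actions = {
        "corridor_route": {"collision": 0.995, "tip_over": 0.999},
        "shortcut_route": {"collision": 0.950, "tip_over": 0.990},
    }
    weights = {"collision": 5.0, "tip_over": 2.0}
    for name, p in actions.items():
        print(name, round(safety_self_information(p, weights), 4))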
How architecture wins technology wars.
Morris, C R; Ferguson, C H
1993-01-01
Signs of revolutionary transformation in the global computer industry are everywhere. A roll call of the major industry players reads like a waiting list in the emergency room. The usual explanations for the industry's turmoil are at best inadequate. Scale, friendly government policies, manufacturing capabilities, a strong position in desktop markets, excellent software, top design skills--none of these is sufficient, either by itself or in combination, to ensure competitive success in information technology. A new paradigm is required to explain patterns of success and failure. Simply stated, success flows to the company that manages to establish proprietary architectural control over a broad, fast-moving, competitive space. Architectural strategies have become crucial to information technology because of the astonishing rate of improvement in microprocessors and other semiconductor components. Since no single vendor can keep pace with the outpouring of cheap, powerful, mass-produced components, customers insist on stitching together their own local systems solutions. Architectures impose order on the system and make the interconnections possible. The architectural controller is the company that controls the standard by which the entire information package is assembled. Microsoft's Windows is an excellent example of this. Because of the popularity of Windows, companies like Lotus must conform their software to its parameters in order to compete for market share. In the 1990s, proprietary architectural control is not only possible but indispensable to competitive success. What's more, it has broader implications for organizational structure: architectural competition is giving rise to a new form of business organization.
Pax permanent Martian base: Space architecture for the first human habitation on Mars, volume 5
NASA Technical Reports Server (NTRS)
Huebner-Moths, Janis; Fieber, Joseph P.; Rebholz, Patrick J.; Paruleski, Kerry L.; Moore, Gary T. (Editor)
1992-01-01
America at the Threshold: Report of the Synthesis Group on America's Space Exploration Initiative (the 'Synthesis Report,' sometimes called the Stafford Report after its astronaut chair, published in 1991) recommended that NASA explore what it called four 'architectures,' i.e., four different scenarios for habitation on Mars. The Advanced Design Program in Space Architecture at the University of Wisconsin-Milwaukee supported this report and two of its scenarios--'Architecture 1' and 'Architecture 4'--during the spring of 1992. This report investigates the implications of different mission scenarios, the Martian environment, supporting technologies, and especially human factors and environment-behavior considerations for the design of the first permanent Martian base. The report is comprised of sections on mission analysis, implications of the Martian atmosphere and geologic environment, development of habitability design requirements based on environment-behavior and human factors research, and a full design proposed (concept design and design development) for the first permanent Martian base and habitat. The design is presented in terms of a base site plan, master plan based on a Mars direct scenario phased through IOC, and design development details of a complete Martian habitat for 18 crew members including all laboratory, mission control, and crew support spaces.
Duff, Armin; Fibla, Marti Sanchez; Verschure, Paul F M J
2011-06-30
Intelligence depends on the ability of the brain to acquire and apply rules and representations. At the neuronal level these properties have been shown to critically depend on the prefrontal cortex. Here we present, in the context of the Distributed Adaptive Control architecture (DAC), a biologically based model for flexible control and planning based on key physiological properties of the prefrontal cortex, i.e. reward modulated sustained activity and plasticity of lateral connectivity. We test the model in a series of pertinent tasks, including multiple T-mazes and the Tower of London that are standard experimental tasks to assess flexible control and planning. We show that the model is both able to acquire and express rules that capture the properties of the task and to quickly adapt to changes. Further, we demonstrate that this biomimetic self-contained cognitive architecture generalizes to planning. In addition, we analyze the extended DAC architecture, called DAC 6, as a model that can be applied for the creation of intelligent and psychologically believable synthetic agents. Copyright © 2010 Elsevier Inc. All rights reserved.
Telerobot local-remote control architecture for space flight program applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John
1993-01-01
The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.
ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
1995-02-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
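A highly simplified Python sketch in the spirit of such behavior-based adaptive action selection (the motivation update rule, constants, and task names are illustrative assumptions, not the ALLIANCE algorithm): each robot's motivation for an unfinished task grows over time and is suppressed when another robot is already performing it.

    def select_tasks(robots, tasks, steps=5, rate=1.0, threshold=3.0):
        """robots: list of names; tasks: list of unfinished task names.
        A robot activates a task once its motivation crosses a threshold; other
        robots' motivations for that task are then reset (one task per robot here)."""
        motivation = {(r, t): 0.0 for r in robots for t in tasks}
        active = {}
        for _ in range(steps):
            for r in robots:
                for t in tasks:
                    if t in active:            # someone else is performing it
                        motivation[(r, t)] = 0.0
                        continue
                    motivation[(r, t)] += rate
                    if motivation[(r, t)] >= threshold and r not in active.values():
                        active[t] = r          # simplification: one task per robot
        return active

    print(select_tasks(["r1", "r2"], ["push_box", "watch_door"]))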
NASA Technical Reports Server (NTRS)
Subrahmanian, V. S.
1994-01-01
An architecture called the hybrid knowledge system (HKS) is described that can be used to interoperate between a specification of the control laws describing a physical system; a collection of databases, knowledge bases, and/or other data structures reflecting information about the world in which the controlled physical system resides; observations (e.g. sensor information) from the external world; and actions that must be taken in response to external observations.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
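A compact Python sketch of the kind of behaviour a marked-graph model formalizes for a decision-free algorithm (node names, token placement, and the two-resource limit are invented for illustration): a node fires only when all of its input edges hold tokens and a computing element is free.

    # Edges of a small decision-free algorithm graph: (src, dst) -> tokens on that edge.
    edges = {("in", "A"): 1, ("in", "B"): 1, ("A", "C"): 0, ("B", "C"): 0, ("C", "out"): 0}
    nodes = ["A", "B", "C"]
    resources = 2                      # number of computing elements

    for step in range(4):
        fired = []
        busy = 0
        for n in nodes:
            inputs = [e for e in edges if e[1] == n]
            if inputs and all(edges[e] > 0 for e in inputs) and busy < resources:
                for e in inputs:
                    edges[e] -= 1      # consume input tokens
                for e in edges:
                    if e[0] == n:
                        edges[e] += 1  # produce output tokens
                fired.append(n)
                busy += 1
        print(f"step {step}: fired {fired or 'nothing'}")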
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
2004-08-01
Excerpt: ... special node in the SOS architecture that is easily reached, called the beacon. 3. The beacon forwards the packet to a "secret" node, called the secret servlet, whose identity is known to only a small subset of participants in the SOS architecture. 4. The secret servlet forwards the packet to ... address is the secret servlet. In the following discussion, we motivate why the SOS architecture requires the series of steps described above.
Distributed dynamic simulations of networked control and building performance applications.
Yahiaoui, Azzedine
2018-02-01
The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; such systems are generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.
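A stripped-down Python illustration of run-time coupling (done in-process here, whereas the environment described above couples separate tools over a network; the models and constants are invented): at each time step a controller model and a building model exchange their latest outputs.

    def controller(temperature, setpoint=21.0):
        """Simple on/off room-temperature control, standing in for the BACS side."""
        return 1.0 if temperature < setpoint else 0.0   # heating power command

    def building(temperature, heating, dt=60.0):
        """First-order room model, standing in for the building-performance tool."""
        heat_gain = 0.01 * heating
        loss = 0.0005 * (temperature - 5.0)             # losses to a 5 degC exterior
        return temperature + dt * (heat_gain - loss)

    temp = 18.0
    for step in range(5):                                # run-time coupling loop
        command = controller(temp)                       # data exchanged each step
        temp = building(temp, command)
        print(f"t={step} min: heating={command}, temperature={temp:.2f} degC")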
An intelligent robotic aid system for human services
NASA Technical Reports Server (NTRS)
Kawamura, K.; Bagchi, S.; Iskarous, M.; Pack, R. T.; Saad, A.
1994-01-01
The long term goal of our research at the Intelligent Robotic Laboratory at Vanderbilt University is to develop advanced intelligent robotic aid systems for human services. As a first step toward our goal, the current thrusts of our R&D are centered on the development of an intelligent robotic aid called the ISAC (Intelligent Soft Arm Control). In this paper, we describe the overall system architecture and current activities in intelligent control, adaptive/interactive control and task learning.
Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N
2006-12-01
Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)" is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, lesser computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
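A bare-bones Python/numpy sketch of the SNAC idea on a scalar linear-quadratic problem (the plant, cost weights, and training scheme are illustrative assumptions): a single "critic" mapping the state x_k to the costate lambda_{k+1} is trained from the costate equation, and the control follows directly from the stationary optimality condition u_k = -(b/r) * lambda_{k+1}.

    import numpy as np

    a, b, q, r = 0.9, 0.5, 1.0, 1.0     # scalar plant x+ = a x + b u, cost q x^2 + r u^2
    w = 0.0                              # "single network": lambda_{k+1} ~ w * x_k
    rng = np.random.default_rng(0)

    for it in range(200):                # SNAC-style training sweep
        x = rng.uniform(-1.0, 1.0)       # sample a state
        lam_next = w * x                 # critic output: costate at k+1
        u = -(b / r) * lam_next          # stationary optimality condition
        x_next = a * x + b * u           # propagate the plant
        lam_next2 = w * x_next           # critic again, evaluated at x_{k+1}
        target = q * x_next + a * lam_next2   # costate equation gives the training target
        w += 0.1 * (target - lam_next) * x    # simple LMS update of the critic weight

    # Compare with the analytic solution of the discrete Riccati equation.
    p = 1.0
    for _ in range(1000):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    print("learned w:", round(w, 4), " Riccati-based w = a*p/(1+(b^2/r)*p):",
          round(a * p / (1 + (b * b / r) * p), 4))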
Summary of Research 1997, Department of Mechanical Engineering.
1999-01-01
Contents excerpt: Maintenance for Diesel Engines; Control Architectures and Non-Linear Controllers for Unmanned Underwater Vehicles; Creep of Fiber Reinforced Metal ...; Technology Demonstration (ATD); Development of Delphi Visual Performance Model; Diffraction Methods for the Accurate Measurement of Structure Factors. Excerpt: ... literature. If this could be done, a U.S. version of ORACLE (to be called DELPHI) could be developed and used. The result has been the development of a ...
Imamizu, Hiroshi; Kuroda, Tomoe; Yoshioka, Toshinori; Kawato, Mitsuo
2004-02-04
An internal model is a neural mechanism that can mimic the input-output properties of a controlled object such as a tool. Recent research interests have moved on to how multiple internal models are learned and switched under a given context of behavior. Two representative computational models for task switching propose distinct neural mechanisms, thus predicting different brain activity patterns in the switching of internal models. In one model, called the mixture-of-experts architecture, switching is commanded by a single executive called a "gating network," which is different from the internal models. In the other model, called the MOSAIC (MOdular Selection And Identification for Control), the internal models themselves play crucial roles in switching. Consequently, the mixture-of-experts model predicts that neural activities related to switching and internal models can be temporally and spatially segregated, whereas the MOSAIC model predicts that they are closely intermingled. Here, we directly examined the two predictions by analyzing functional magnetic resonance imaging activities during the switching of one common tool (an ordinary computer mouse) and two novel tools: a rotated mouse, the cursor of which appears in a rotated position, and a velocity mouse, the cursor velocity of which is proportional to the mouse position. The switching and internal model activities temporally and spatially overlapped each other in the cerebellum and in the parietal cortex, whereas the overlap was very small in the frontal cortex. These results suggest that switching mechanisms in the frontal cortex can be explained by the mixture-of-experts architecture, whereas those in the cerebellum and the parietal cortex are explained by the MOSAIC model.
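A toy numpy sketch contrasting the two switching schemes described above (the internal models, signals, and gating weights are invented for illustration): in a mixture-of-experts a separate gating function assigns weights from a context cue, whereas in a MOSAIC-like scheme each internal model's own prediction error determines its responsibility.

    import numpy as np

    def forward_models(mouse_pos, mouse_vel):
        """Predicted cursor velocity under three internal models of the tool."""
        return np.array([
            mouse_vel,              # ordinary mouse: cursor velocity = mouse velocity
            -mouse_vel,             # rotated mouse (180 degrees, for simplicity)
            2.0 * mouse_pos,        # velocity mouse: cursor velocity ~ mouse position
        ])

    def mosaic_responsibilities(predictions, observed, sigma=0.2):
        """Responsibilities from each model's own prediction error (softmax of -error^2)."""
        err = (predictions - observed) ** 2
        w = np.exp(-err / (2 * sigma ** 2))
        return w / w.sum()

    def gating_network(context):
        """Mixture-of-experts: a separate executive maps a context cue to weights."""
        logits = np.array([2.0, 0.0, -2.0]) if context == "ordinary" else np.array([-2.0, 0.0, 2.0])
        return np.exp(logits) / np.exp(logits).sum()

    pos, vel = 0.3, 0.5
    observed = 2.0 * pos                    # the tool currently behaves like the velocity mouse
    preds = forward_models(pos, vel)
    print("MOSAIC responsibilities :", np.round(mosaic_responsibilities(preds, observed), 3))
    print("Gating-network weights  :", np.round(gating_network("velocity"), 3))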
Activity-Centric Approach to Distributed Programming
NASA Technical Reports Server (NTRS)
Levy, Renato; Satapathy, Goutam; Lang, Jun
2004-01-01
The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity-centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support from other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular, service-based approach to coupling the programming and service layers of the software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of those services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable to any configuration of Cybele.
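A rough Python sketch of the activity-centric idea (the class and method names are invented, not the Cybele API): the application programmer only writes event handlers on an "activity", while a small kernel owns the worker thread, the event queue, and the concurrency control.

    import queue, threading

    class Activity:
        """Application code only defines handlers; it never touches threads or locks."""
        def on_event(self, event):
            print(f"{self.__class__.__name__} handling {event}")

    class Kernel:
        """Minimal stand-in for the services layer: one worker thread and an event queue."""
        def __init__(self, activity):
            self.activity, self.events = activity, queue.Queue()
            self.worker = threading.Thread(target=self._run, daemon=True)
            self.worker.start()

        def _run(self):
            while True:
                event = self.events.get()
                if event is None:
                    break
                self.activity.on_event(event)   # serialized: no locks needed in user code

        def post(self, event):
            self.events.put(event)

        def shutdown(self):
            self.events.put(None)
            self.worker.join()

    class TrafficMonitor(Activity):
        pass

    kernel = Kernel(TrafficMonitor())
    for e in ["radar_track", "timer_tick"]:
        kernel.post(e)
    kernel.shutdown()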
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in "An Architecture for Controlling Multiple Robots" (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
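A small Python sketch of multi-objective consensus among behaviors (the behavior names, weights, and candidate actions are invented; this is not the MODT formulation itself): each behavior scores every candidate control action, and the action maximizing the weighted sum is the consensus, so all behaviors contribute cooperatively rather than one winner suppressing the rest.

    candidate_actions = ["hard_left", "slight_left", "straight", "slight_right"]

    # Each behavior returns a preference in [0, 1] for every candidate action.
    behavior_votes = {
        "avoid_obstacle": {"hard_left": 0.9, "slight_left": 0.7, "straight": 0.1, "slight_right": 0.0},
        "follow_waypoint": {"hard_left": 0.1, "slight_left": 0.5, "straight": 0.9, "slight_right": 0.7},
        "conserve_power": {"hard_left": 0.2, "slight_left": 0.6, "straight": 1.0, "slight_right": 0.6},
    }
    behavior_weights = {"avoid_obstacle": 2.0, "follow_waypoint": 1.0, "conserve_power": 0.5}

    def consensus(actions, votes, weights):
        """Multi-objective fusion: every behavior contributes to the chosen action."""
        score = lambda a: sum(weights[b] * votes[b][a] for b in votes)
        return max(actions, key=score), {a: round(score(a), 2) for a in actions}

    print(consensus(candidate_actions, behavior_votes, behavior_weights))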
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
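A simple Python illustration of N-version majority voting with a confidence-weighted tie-break for coincident errors (the voting policy and numbers are assumptions; the report's confidence voter is more elaborate):

    def confidence_vote(outputs, tolerance=1e-3):
        """outputs: list of (value, confidence) from N versions of the control law.
        Values within `tolerance` are grouped; the largest group wins, and summed
        confidences break ties between equally sized groups (coincident errors)."""
        groups = []
        for value, conf in outputs:
            for g in groups:
                if abs(g["value"] - value) <= tolerance:
                    g["members"] += 1
                    g["conf"] += conf
                    break
            else:
                groups.append({"value": value, "members": 1, "conf": conf})
        return max(groups, key=lambda g: (g["members"], g["conf"]))["value"]

    # Four versions of a yaw-damper command; two disagree and happen to coincide.
    versions = [(0.42, 0.9), (0.42, 0.8), (0.55, 0.6), (0.55, 0.4)]
    print("voted command:", confidence_vote(versions))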
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
The purpose is to document research to develop strategies for concurrent processing of complex algorithms in data driven architectures. The problem domain consists of decision-free algorithms having large-grained, computationally complex primitive operations. Such are often found in signal processing and control applications. The anticipated multiprocessor environment is a data flow architecture containing between two and twenty computing elements. Each computing element is a processor having local program memory, and which communicates with a common global data memory. A new graph theoretic model called ATAMM which establishes rules for relating a decomposed algorithm to its execution in a data flow architecture is presented. The ATAMM model is used to determine strategies to achieve optimum time performance and to develop a system diagnostic software tool. In addition, preliminary work on a new multiprocessor operating system based on the ATAMM specifications is described.
Low Temperature Performance of High-Speed Neural Network Circuits
NASA Technical Reports Server (NTRS)
Duong, T.; Tran, M.; Daud, T.; Thakoor, A.
1995-01-01
Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for such tasks as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer an orders-of-magnitude speed advantage. The basic building blocks of the new architecture are the processing elements, called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections, called synapses, implemented using circuitry for weight storage and multiply functions in an analog, digital, or hybrid scheme.
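The building blocks described above map onto a few lines of Python (an idealized software analogue of the analog hardware; the weights and gain are arbitrary): synapses are stored weights, and each neuron is a weighted sum pushed through a sigmoidal transfer function.

    import math

    def neuron(inputs, weights, bias=0.0, gain=1.0):
        """Sigmoidal 'operational amplifier': weighted sum of synapse outputs."""
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-gain * activation))

    # A 3-input neuron with synaptic weights learned or programmed elsewhere.
    print(neuron(inputs=[0.2, 0.7, -0.1], weights=[1.5, -0.8, 0.3], bias=0.1))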
NASA Astrophysics Data System (ADS)
Dewell, Larry D.; Tajdaran, Kiarash; Bell, Raymond M.; Liu, Kuo-Chia; Bolcar, Matthew R.; Sacks, Lia W.; Crooke, Julie A.; Blaurock, Carl
2017-09-01
The need for high payload dynamic stability and ultra-stable mechanical systems is an overarching technology need for large space telescopes such as the Large Ultraviolet/Optical/Infrared (LUVOIR) Surveyor. Wavefront error stability of less than 10 picometers RMS of uncorrected system WFE per wavefront control step represents a drastic performance improvement over currently fielded space-based telescopes. Previous studies of similar telescope architectures have shown that passive telescope isolation approaches are hard-pressed to meet dynamic stability requirements and usually involve complex actively controlled elements and sophisticated metrology. To meet these challenging dynamic stability requirements, an isolation architecture that involves no mechanical contact between the telescope and the host spacecraft structure has the potential of delivering the needed performance improvement. One such architecture, previously developed by Lockheed Martin and called Disturbance Free Payload (DFP), is applied to and analyzed for LUVOIR. In a non-contact DFP architecture, the payload and spacecraft fly in close proximity and interact via non-contact actuators to allow precision payload pointing and isolation from spacecraft vibration. Because disturbance isolation is achieved through non-contact, vibration isolation down to zero frequency is possible, and the high-frequency structural dynamics of passive isolators are not introduced into the system. In this paper, the system-level analysis of a non-contact architecture is presented for LUVOIR, based on requirements that are directly traceable to its science objectives, including astrophysics and the direct imaging of habitable exoplanets. Aspects of the architecture and how they contribute to system performance are examined and tailored to the LUVOIR architecture and concept of operation.
Reinforcement learning for a biped robot based on a CPG-actor-critic method.
Nakamura, Yutaka; Mori, Takeshi; Sato, Masa-aki; Ishii, Shin
2007-08-01
Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs), which generate oscillatory signals. Motivated by this biological mechanism, studies have been conducted on the rhythmic movements controlled by CPG. As an autonomous learning framework for a CPG controller, we propose in this article a reinforcement learning method we call the "CPG-actor-critic" method. This method introduces a new architecture to the actor, and its training is roughly based on a stochastic policy gradient algorithm presented recently. We apply this method to an automatic acquisition problem of control for a biped robot. Computer simulations show that training of the CPG can be successfully performed by our method, thus allowing the biped robot to not only walk stably but also adapt to environmental changes.
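A minimal numpy sketch of a CPG as two mutually coupled phase oscillators producing anti-phase rhythmic commands for the two legs (the coupling form, frequency, and gains are illustrative assumptions; the paper's CPG and actor are more sophisticated):

    import numpy as np

    omega = 2 * np.pi * 1.0         # intrinsic stepping frequency, 1 Hz
    coupling = 2.0                  # strength pulling the two legs toward anti-phase
    dt = 0.01
    phase = np.array([0.0, 0.3])    # phases of the left/right leg oscillators

    for step in range(300):
        # Kuramoto-style coupling with a pi offset keeps the legs in anti-phase.
        phase[0] += dt * (omega + coupling * np.sin(phase[1] - phase[0] - np.pi))
        phase[1] += dt * (omega + coupling * np.sin(phase[0] - phase[1] - np.pi))
        if step % 100 == 0:
            torque = np.sin(phase)  # oscillatory signals sent to the hip joints
            print(f"t={step*dt:.1f}s  phase diff={np.mod(phase[1]-phase[0], 2*np.pi):.2f} rad  torques={np.round(torque, 2)}")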
NASA Technical Reports Server (NTRS)
Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Higinbotham, John; Moisan, John R.; Moisan, Tiffany A.;
2008-01-01
Earth science research must bridge the gap between the atmosphere and the ocean to foster understanding of Earth's climate and ecology. Ocean sensing is typically done with satellites, buoys, and crewed research ships. The limitations of these systems include the fact that satellites are often blocked by cloud cover, and buoys and ships have spatial coverage limitations. This paper describes a multi-robot science exploration software architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF supervises and coordinates a group of robotic boats, the OASIS platforms, to enable in-situ study of phenomena in the ocean/atmosphere interface, as well as on the ocean surface and sub-surface. The OASIS platforms are extended deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). TAOSF allows a human operator to effectively supervise and coordinate multiple robotic assets using a sliding autonomy control architecture, where the operating mode of the vessels ranges from autonomous control to teleoperated human control. TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for robotic asset tasking, control, and monitoring. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). We discuss the overall TAOSF architecture, describe field tests conducted under controlled conditions using rhodamine dye as a HAB simulant, present initial results from these tests, and outline the next steps in the development of TAOSF.
Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo
2011-01-01
Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a Lighting Automatic Control System (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities and performs adjustments based on external lighting effects in external sensor and external sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally we suggest methods for improving uniformity of illuminance distribution on a workplane’s surface, which improves user satisfaction. Finally simulation results are presented to verify the effectiveness of our design. PMID:22164114
Mohamaddoust, Reza; Haghighat, Abolfazl Toroghi; Sharif, Mohamad Javad Motahari; Capanni, Niccolo
2011-01-01
Wireless sensor networks (WSN) are currently being applied to energy conservation applications such as light control. We propose a design for such a system called a lighting automatic control system (LACS). The LACS system contains a centralized or distributed architecture determined by application requirements and space usage. The system optimizes the calculations and communications for lighting intensity, incorporates user illumination requirements according to their activities and performs adjustments based on external lighting effects in external sensor and external sensor-less architectures. Methods are proposed for reducing the number of sensors required and increasing the lifetime of those used, for considerably reduced energy consumption. Additionally we suggest methods for improving uniformity of illuminance distribution on a workplane's surface, which improves user satisfaction. Finally simulation results are presented to verify the effectiveness of our design.
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1993-01-01
This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
Optical And Environmental Properties Of NCAP Glazing Products
NASA Astrophysics Data System (ADS)
van Konynenburg, Peter; Wipfler, Richard T.; Smith, Jerry L.
1989-07-01
The first large-area, commercially available, electrically controllable glazing products, sold under the tradename VARILITE™, are based on a new liquid crystal film technology called NCAP. The glazing products can be switched in milliseconds between a highly translucent state (for privacy and glare control) and a transparent state (for high visibility) with the application of an AC voltage. The optical and environmental properties are demonstrated to meet the general requirements for architectural glazing use. The first qualified indoor product is described in detail.
Construction of integrated case environments.
Losavio, Francisca; Matteo, Alfredo; Pérez, María
2003-01-01
The main goal of Computer-Aided Software Engineering (CASE) technology is to improve the entire software system development process. The CASE approach is not merely a technology; it involves a fundamental change in the process of software development. Technically speaking, the tendency of the CASE approach is the integration of tools that assist in the application of specific methods. In this sense, the environment architecture, which includes the platform and the system's hardware and software, constitutes the base of the CASE environment. The problem of tool integration has been posed for two decades. Current integration efforts emphasize the interoperability of tools, especially in distributed environments. In this work we use the Brown approach. The environment resulting from the application of this model is called a federative environment, reflecting the fact that this architecture pays special attention to the connections among the components of the environment. This approach is now being used in component-based design. This paper describes a concrete experience in the civil engineering and architecture fields in constructing an integrated CASE environment. A generic architectural framework based on an intermediary architectural pattern is applied to achieve the integration of the different tools. This intermediary represents the control perspective of the PAC (Presentation-Abstraction-Control) style, which has been implemented as a Mediator pattern and has been used in the interactive systems domain. In addition, a process is given for constructing the integrated CASE environment.
Performance and stability of telemanipulators using bilateral impedance control. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moore, Christopher Lane
1991-01-01
A new method of control for telemanipulators called bilateral impedance control is investigated. This new method differs from previous approaches in that interaction forces are used as the communication signals between the master and slave robots. The new control architecture has several advantages: (1) It allows the master robot and the slave robot to be stabilized independently without becoming involved in the overall system dynamics; (2) It permits the system designers to arbitrarily specify desired performance characteristics such as the force and position ratios between the master and slave; (3) The impedance at both ends of the telerobotic system can be modulated to suit the requirements of the task. The main goals of the research are to characterize the performance and stability of the new control architecture. The dynamics of the telerobotic system are described by a bond graph model that illustrates how energy is transformed, stored, and dissipated. Performance can be completely described by a set of three independent parameters. These parameters are fundamentally related to the structure of the H matrix that regulates the communication of force signals within the system. Stability is analyzed with two mathematical techniques: the Small Gain Theorem and the Multivariable Nyquist Criterion. The theoretical predictions for performance and stability are experimentally verified by implementing the new control architecture on a multidegree of freedom telemanipulator.
SANDS: an architecture for clinical decision support in a National Health Information Network.
Wright, Adam; Sittig, Dean F
2007-10-11
A new architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support) is introduced and its performance evaluated. The architecture provides a method for performing clinical decision support across a network, as in a health information exchange. Using the prototype we demonstrated that, first, a number of useful types of decision support can be carried out using our architecture; and, second, that the architecture exhibits desirable reliability and performance characteristics.
The telesupervised adaptive ocean sensor fleet
NASA Astrophysics Data System (ADS)
Elfes, Alberto; Podnar, Gregg W.; Dolan, John M.; Stancliff, Stephen; Lin, Ellie; Hosler, Jeffrey C.; Ames, Troy J.; Moisan, John; Moisan, Tiffany A.; Higinbotham, John; Kulczycki, Eric A.
2007-09-01
We are developing a multi-robot science exploration architecture and system called the Telesupervised Adaptive Ocean Sensor Fleet (TAOSF). TAOSF uses a group of robotic boats (the OASIS platforms) to enable in-situ study of ocean surface and sub-surface phenomena. The OASIS boats are extended-deployment autonomous ocean surface vehicles, whose development is funded separately by the National Oceanic and Atmospheric Administration (NOAA). The TAOSF architecture provides an integrated approach to multi-vehicle coordination and sliding human-vehicle autonomy. It allows multiple mobile sensing assets to function in a cooperative fashion, and the operating mode of the vessels to range from autonomous control to teleoperated control. In this manner, TAOSF increases data-gathering effectiveness and science return while reducing demands on scientists for tasking, control, and monitoring. It combines and extends prior related work done by the authors and their institutions. The TAOSF architecture is applicable to other areas where multiple sensing assets are needed, including ecological forecasting, water management, carbon management, disaster management, coastal management, homeland security, and planetary exploration. The first field application chosen for TAOSF is the characterization of Harmful Algal Blooms (HABs). Several components of the TAOSF system have been tested, including the OASIS boats, the communications and control interfaces between the various hardware and software subsystems, and an airborne sensor validation system. Field tests in support of future HAB characterization were performed under controlled conditions, using rhodamine dye as a HAB simulant that was dispersed in a pond. In this paper, we describe the overall TAOSF architecture and its components, discuss the initial tests conducted and outline the next steps.
PIMS: Memristor-Based Processing-in-Memory-and-Storage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Jeanine
Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about an O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.
System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures
NASA Technical Reports Server (NTRS)
Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger
2007-01-01
This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and as a realistic test bed for Exploration automation technology research and development.
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition.
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
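The queued state machine pattern named above is a general one; the sketch below (in Python rather than LabVIEW, and with invented command names) shows its core idea: producers enqueue states or commands, and a single control loop consumes them in order, so configuration, acquisition, and shutdown steps execute in a well-defined sequence.

```python
from queue import Queue

def daq_controller(commands: Queue) -> None:
    """Consume queued states/commands in order, one control-loop iteration per entry."""
    handlers = {
        "CONFIGURE": lambda: print("configuring sensor electronics"),
        "START":     lambda: print("starting synchronized acquisition"),
        "READ":      lambda: print("reading one block from all channels"),
        "STOP":      lambda: print("stopping acquisition"),
    }
    while True:
        cmd = commands.get()                 # blocks until the next queued state arrives
        if cmd == "SHUTDOWN":
            break
        handlers.get(cmd, lambda: print(f"ignoring unknown command: {cmd}"))()

# Producers (e.g. a GUI or an analysis client) enqueue commands; the loop drains them in order.
q = Queue()
for c in ["CONFIGURE", "START", "READ", "READ", "STOP", "SHUTDOWN"]:
    q.put(c)
daq_controller(q)
```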
Versatile synchronized real-time MEG hardware controller for large-scale fast data acquisition
NASA Astrophysics Data System (ADS)
Sun, Limin; Han, Menglai; Pratt, Kevin; Paulson, Douglas; Dinh, Christoph; Esch, Lorenz; Okada, Yoshio; Hämäläinen, Matti
2017-05-01
Versatile controllers for accurate, fast, and real-time synchronized acquisition of large-scale data are useful in many areas of science, engineering, and technology. Here, we describe the development of a controller software based on a technique called queued state machine for controlling the data acquisition (DAQ) hardware, continuously acquiring a large amount of data synchronized across a large number of channels (>400) at a fast rate (up to 20 kHz/channel) in real time, and interfacing with applications for real-time data analysis and display of electrophysiological data. This DAQ controller was developed specifically for a 384-channel pediatric whole-head magnetoencephalography (MEG) system, but its architecture is useful for wide applications. This controller running in a LabVIEW environment interfaces with microprocessors in the MEG sensor electronics to control their real-time operation. It also interfaces with a real-time MEG analysis software via transmission control protocol/internet protocol, to control the synchronous acquisition and transfer of the data in real time from >400 channels to acquisition and analysis workstations. The successful implementation of this controller for an MEG system with a large number of channels demonstrates the feasibility of employing the present architecture in several other applications.
Controlled molecular self-assembly of complex three-dimensional structures in soft materials
Huang, Changjin; Quinn, David; Suresh, Subra
2018-01-01
Many applications in tissue engineering, flexible electronics, and soft robotics call for approaches that are capable of producing complex 3D architectures in soft materials. Here we present a method using molecular self-assembly to generate hydrogel-based 3D architectures that resemble the appealing features of the bottom-up process in morphogenesis of living tissues. Our strategy effectively utilizes the three essential components dictating living tissue morphogenesis to produce complex 3D architectures: modulation of local chemistry, material transport, and mechanics, which can be engineered by controlling the local distribution of polymerization inhibitor (i.e., oxygen), diffusion of monomers/cross-linkers through the porous structures of the cross-linked polymer network, and mechanical constraints, respectively. We show that oxygen plays a role in hydrogel polymerization that is mechanistically similar to the role of growth factors in tissue growth, and that the continued growth of the hydrogel enabled by diffusion of monomers/cross-linkers into the porous hydrogel is similar to the mechanisms of tissue growth enabled by material transport. The capability and versatility of our strategy are demonstrated through biomimetics of tissue morphogenesis for both plants and animals, and its application to generate other complex 3D architectures. Our technique opens avenues to studying many growth phenomena found in nature and generating complex 3D structures to benefit diverse applications. PMID:29255037
Voice over internet protocol with prepaid calling card solutions
NASA Astrophysics Data System (ADS)
Gunadi, Tri
2001-07-01
VoIP technology is growing rapidly, and it has a big network impact on PT Telkom Indonesia, the largest telecommunications operator in Indonesia. Telkom has adopted VoIP together with one other technology, the Intelligent Network (IN). We develop these technologies together in one service product, called the Internet Prepaid Calling Card (IPCC). IPCC is becoming a new breakthrough for Indonesian telecommunication services, especially for VoIP and prepaid calling card solutions. The network architecture of Indonesian telecommunication consists of three layers: the Local, Tandem, and Trunk Exchange layers. Network development research for the IPCC architecture focuses on an overlay hierarchy of the Internet and the PSTN. With this design hierarchy, the goal of interworking the PSTN, VoIP, and the IN calling card becomes a reality. The overlay design for IPCC is not placed on the Trunk Exchange; in this new architecture, the overlay is on the Tandem and Local Exchanges, to make call processing faster. Two nodes are added: a Gateway (GW) and a Card Management Center (CMC). The GW interfaces between the PSTN and the Internet network using ISDN-PRA and Ethernet. Its other functions are bridging circuit-based (PSTN) and packet-based (VoIP) traffic and real-time billing processing. The CMC is used for data storage, PIN validation, report activation, the tariff system, directory numbers, and all administrative transactions. With these two nodes added, the IPCC service is offered to the market.
Flipped Learning as a Paradigm Shift in Architectural Education
ERIC Educational Resources Information Center
Elrayies, Ghada Mohammad
2017-01-01
The target of Education for Sustainable Development is to make people creative and lifelong learners. Over the past years, architectural education has faced challenges of embedding innovation and creativity into its programs. That calls for graduates to be more skilled in the human dimensions of professional practice. So, architectural education…
ERIC Educational Resources Information Center
Tambouris, Efthimios; Zotou, Maria; Kalampokis, Evangelos; Tarabanis, Konstantinos
2012-01-01
Enterprise architecture (EA) implementation refers to a set of activities ultimately aiming to align business objectives with information technology infrastructure in an organization. EA implementation is a multidisciplinary, complicated and endless process, hence, calls for adequate education and training programs that will build highly skilled…
Architectural design of heterogeneous metallic nanocrystals--principles and processes.
Yu, Yue; Zhang, Qingbo; Yao, Qiaofeng; Xie, Jianping; Lee, Jim Yang
2014-12-16
CONSPECTUS: Heterogeneous metal nanocrystals (HMNCs) are a natural extension of simple metal nanocrystals (NCs), but as a research topic, they have been much less explored until recently. HMNCs are formed by integrating metal NCs of different compositions into a common entity, similar to the way atoms are bonded to form molecules. HMNCs can be built to exhibit an unprecedented architectural diversity and complexity by programming the arrangement of the NC building blocks ("unit NCs"). The architectural engineering of HMNCs involves the design and fabrication of the architecture-determining elements (ADEs), i.e., unit NCs with precise control of shape and size, and their relative positions in the design. Similar to molecular engineering, where structural diversity is used to create more property variations for application explorations, the architectural engineering of HMNCs can similarly increase the utility of metal NCs by offering a suite of properties to support multifunctionality in applications. The architectural engineering of HMNCs calls for processes and operations that can execute the design. Some enabling technologies already exist in the form of classical micro- and macroscale fabrication techniques, such as masking and etching. These processes, when used singly or in combination, are fully capable of fabricating nanoscopic objects. What is needed is a detailed understanding of the engineering control of ADEs and the translation of these principles into actual processes. For simplicity of execution, these processes should be integrated into a common reaction system and yet retain independence of control. The key to architectural diversity is therefore the independent controllability of each ADE in the design blueprint. The right chemical tools must be applied under the right circumstances in order to achieve the desired outcome. In this Account, after a short illustration of the infinite possibility of combining different ADEs to create HMNC design variations, we introduce the fabrication processes for each ADE, which enable shape, size, and location control of the unit NCs in a particular HMNC design. The principles of these processes are discussed and illustrated with examples. We then discuss how these processes may be integrated into a common reaction system while retaining the independence of individual processes. The principles for the independent control of each ADE are discussed in detail to lay the foundation for the selection of the chemical reaction system and its operating space.
Bouallaga, I; Massicard, S; Yaniv, M; Thierry, F
2000-11-01
Recent studies have reported new mechanisms that mediate the transcriptional synergy of strong tissue-specific enhancers, involving the cooperative assembly of higher-order nucleoprotein complexes called enhanceosomes. Here we show that the HPV18 enhancer, which controls the epithelial-specific transcription of the E6 and E7 transforming genes, exhibits characteristic features of these structures. We used deletion experiments to show that a core enhancer element cooperates, in a specific helical phasing, with distant essential factors binding to the ends of the enhancer. This core sequence, binding a Jun B/Fra-2 heterodimer, cooperatively recruits the architectural protein HMG-I(Y) in a nucleoprotein complex, where they interact with each other. Therefore, in HeLa cells, HPV18 transcription seems to depend upon the assembly of an enhanceosome containing multiple cellular factors recruited by a core sequence interacting with AP1 and HMG-I(Y).
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P.; Gerstein, Mark
2010-01-01
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems. PMID:20439753
Yan, Koon-Kiu; Fang, Gang; Bhardwaj, Nitin; Alexander, Roger P; Gerstein, Mark
2010-05-18
The genome has often been called the operating system (OS) for a living organism. A computer OS is described by a regulatory control network termed the call graph, which is analogous to the transcriptional regulatory network in a cell. To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution. We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions. This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network. We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution. The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs. The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers' continual fine-tuning. These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.
Architecture and robustness tradeoffs in speed-scaled queues with application to energy management
NASA Astrophysics Data System (ADS)
Dinh, Tuan V.; Andrew, Lachlan L. H.; Nazarathy, Yoni
2014-08-01
We consider single-pass, lossless, queueing systems at steady-state subject to Poisson job arrivals at an unknown rate. Service rates are allowed to depend on the number of jobs in the system, up to a fixed maximum, and power consumption is an increasing function of speed. The goal is to control the state dependent service rates such that both energy consumption and delay are kept low. We consider a linear combination of the mean job delay and energy consumption as the performance measure. We examine both the 'architecture' of the system, which we define as a specification of the number of speeds that the system can choose from, and the 'design' of the system, which we define as the actual speeds available. Previous work has illustrated that when the arrival rate is precisely known, there is little benefit in introducing complex (multi-speed) architectures, yet in view of parameter uncertainty, allowing a variable number of speeds improves robustness. We quantify the tradeoffs of architecture specification with respect to robustness, analysing both global robustness and a newly defined measure which we call local robustness.
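As a rough numerical illustration of the performance measure described above, the sketch below evaluates a birth-death queue with Poisson arrivals, state-dependent service speeds (the "design"), a fixed maximum number of distinct speeds (the "architecture"), power modeled as an increasing function of speed, and cost equal to mean delay plus a weighted mean power. The speeds, the quadratic power law, and the weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cost(lam, speeds, beta=1.0, power=lambda s: s**2, n_max=500):
    """Mean delay + beta * mean power for a queue with state-dependent service speeds."""
    speeds = np.asarray(speeds, dtype=float)
    K = len(speeds)
    mu = lambda n: speeds[min(n, K) - 1]          # speed used when n jobs are in the system
    # Unnormalised stationary probabilities: pi_n proportional to lam^n / prod_{i=1}^n mu(i)
    log_pi = np.zeros(n_max + 1)
    for n in range(1, n_max + 1):
        log_pi[n] = log_pi[n - 1] + np.log(lam) - np.log(mu(n))
    pi = np.exp(log_pi - log_pi.max())
    pi /= pi.sum()
    mean_jobs = np.dot(np.arange(n_max + 1), pi)
    mean_delay = mean_jobs / lam                  # Little's law
    mean_power = sum(pi[n] * power(mu(n)) for n in range(1, n_max + 1))  # idle state draws no power here
    return mean_delay + beta * mean_power

# Compare a single-speed design with a two-speed design at the same arrival rate.
print(cost(lam=0.8, speeds=[1.0]))        # one fixed speed
print(cost(lam=0.8, speeds=[0.9, 1.2]))   # slower with 1 job, faster with 2 or more
```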
StegoWall: blind statistical detection of hidden data
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Herrigel, Alexander; Rytsar, Yuri B.; Pun, Thierry
2002-04-01
Novel functional possibilities, provided by recent data hiding technologies, carry the danger of uncontrolled (unauthorized) and unlimited information exchange that might be used by people with unfriendly interests. The multimedia industry as well as the research community recognize the urgent necessity for network security and copyright protection, or rather the lack of adequate law for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for the development of an architecture for blind stochastic hidden data detection in order to prevent unauthorized data exchange. The proposed architecture is called StegoWall; its key aspects are the solid investigation, deep understanding, and prediction of possible tendencies in the development of advanced data hiding technologies. The basic idea of our complex approach is to exploit all information about hidden data statistics to perform its detection based on a stochastic framework. The StegoWall system will be used for four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.
Failure models for textile composites
NASA Technical Reports Server (NTRS)
Cox, Brian
1995-01-01
The goals of this investigation were to: (1) identify mechanisms of failure and determine how the architecture of reinforcing fibers in 3D woven composites controlled stiffness, strength, strain to failure, work of fracture, notch sensitivity, and fatigue life; and (2) to model composite stiffness, strength, and fatigue life. A total of 11 different angle and orthogonal interlock woven composites were examined. Composite properties depended on the weave architecture, the tow size, and the spatial distributions and strength of geometrical flaws. Simple models were developed for elastic properties, strength, and fatigue life. A more complicated stochastic model, called the 'Binary Model,' was developed for damage tolerance and ultimate failure. These 3D woven composites possessed an extraordinary combination of strength, damage tolerance, and notch insensitivity.
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-05-01
AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.
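The following toy simulation is not the AmapSim implementation; under invented parameter values it only illustrates the vocabulary used above: each virtual bud carries a physiological age that indexes into a reference axis of parameters, bud activity is stochastic, and the physiological age advances along the axis according to a simple transition probability.

```python
import random

random.seed(0)
# Reference axis: per-physiological-age parameters (growth and branching probabilities).
reference_axis = [{"grow": 0.9, "branch": 0.3},
                  {"grow": 0.7, "branch": 0.1},
                  {"grow": 0.4, "branch": 0.0}]
advance_prob = 0.2            # chance that a bud moves to the next physiological age each cycle

buds = [0]                    # each bud is recorded by its physiological age (index into the axis)
internodes = 0
for cycle in range(20):
    new_buds = []
    for age in buds:
        p = reference_axis[age]
        if random.random() < p["grow"]:
            internodes += 1                        # the bud extends its axis by one internode
            if random.random() < p["branch"]:
                new_buds.append(min(age + 1, 2))   # lateral bud with an older physiological age
        if random.random() < advance_prob:
            age = min(age + 1, 2)                  # move along the reference axis
        new_buds.append(age)
    buds = new_buds
print(f"{len(buds)} buds, {internodes} internodes after 20 growth cycles")
```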
Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry
2008-01-01
Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310
X-38 Vehicle 131R Free Flights 1 and 2
NASA Technical Reports Server (NTRS)
Munday, Steve
2000-01-01
The X-38 program is using a modern flight control system (FCS) architecture originally developed by Honeywell called MACH. During last year's SAE G&C subcommittee meeting, we outlined the design, implementation and testing of MACH in X-38 Vehicles 132, 131R & 201. During this year's SAE meeting, I'll focus upon the first two free flights of V131R, describing what caused the roll-over in FF1 and how we fixed it for FF2. I only have 30 minutes, so it will be a quick summary including VHS video. X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle (CRV), often described as an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3 was the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), developed by the Honeywell Technology Center in Minneapolis and Honeywell's Houston Engineering Center. MACH wraps classical proportional-plus-integral (P+I) outer attitude loops around modern dynamic inversion attitude rate loops. The presentation at last year's SAE Aerospace Meeting No. 85 focused upon the design and testing of the FCS algorithm and Vehicle 132 Free Flight 3. This presentation will summarize flight control and aerodynamics lessons learned during Free Flights 1 and 2 of Vehicle 131R, a subsonic test vehicle laying the groundwork for the orbital/entry test of Vehicle 201 in 2003.
Controlled molecular self-assembly of complex three-dimensional structures in soft materials.
Huang, Changjin; Quinn, David; Suresh, Subra; Hsia, K Jimmy
2018-01-02
Many applications in tissue engineering, flexible electronics, and soft robotics call for approaches that are capable of producing complex 3D architectures in soft materials. Here we present a method using molecular self-assembly to generate hydrogel-based 3D architectures that resembles the appealing features of the bottom-up process in morphogenesis of living tissues. Our strategy effectively utilizes the three essential components dictating living tissue morphogenesis to produce complex 3D architectures: modulation of local chemistry, material transport, and mechanics, which can be engineered by controlling the local distribution of polymerization inhibitor (i.e., oxygen), diffusion of monomers/cross-linkers through the porous structures of cross-linked polymer network, and mechanical constraints, respectively. We show that oxygen plays a role in hydrogel polymerization which is mechanistically similar to the role of growth factors in tissue growth, and the continued growth of hydrogel enabled by diffusion of monomers/cross-linkers into the porous hydrogel similar to the mechanisms of tissue growth enabled by material transport. The capability and versatility of our strategy are demonstrated through biomimetics of tissue morphogenesis for both plants and animals, and its application to generate other complex 3D architectures. Our technique opens avenues to studying many growth phenomena found in nature and generating complex 3D structures to benefit diverse applications. Copyright © 2017 the Author(s). Published by PNAS.
Requirements for an Integrated UAS CNS Architecture
NASA Technical Reports Server (NTRS)
Templin, Fred; Jain, Raj; Sheffield, Greg; Taboso, Pedro; Ponchak, Denise
2017-01-01
The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) is investigating revolutionary and advanced universal, reliable, always available, cyber secure and affordable Communication, Navigation, Surveillance (CNS) options for all altitudes of UAS operations. In Spring 2015, NASA issued a Call for Proposals under NASA Research Announcements (NRA) NNH15ZEA001N, Amendment 7 Subtopic 2.4. Boeing was selected to conduct a study with the objective of determining the most promising candidate technologies for Unmanned Air Systems (UAS) air-to-air and air-to-ground data exchange and analyzing their suitability in a post-NextGen NAS environment. The overall objectives are to develop UAS CNS requirements and then develop architectures that satisfy the requirements for UAS in both controlled and uncontrolled air space. This contract is funded under NASA's Aeronautics Research Mission Directorate's (ARMD) Aviation Operations and Safety Program (AOSP) Safe Autonomous Systems Operations (SASO) project and proposes technologies for the Unmanned Air Systems Traffic Management (UTM) service. Communications, Navigation and Surveillance (CNS) requirements must be developed in order to establish a CNS architecture supporting Unmanned Air Systems integration in the National Air Space (UAS in the NAS). These requirements must address cybersecurity, future communications, satellite-based navigation APNT, and scalable surveillance and situational awareness. CNS integration, consolidation and miniaturization requirements are also important to support the explosive growth in small UAS deployment. Air Traffic Management (ATM) must also be accommodated to support critical Command and Control (C2) for Air Traffic Controllers (ATC). This document therefore presents UAS CNS requirements that will guide the architecture.
Punctuated evolution and robustness in morphogenesis
Grigoriev, D.; Reinitz, J.; Vakulenko, S.; Weber, A.
2014-01-01
This paper presents an analytic approach to the pattern stability and evolution problem in morphogenesis. The approach used here is based on ideas from gene and neural network theory. We assume that gene networks contain a number of small groups of genes (called hubs) controlling the morphogenesis process. Hub genes represent an important element of gene network architecture and their existence is empirically confirmed. We show that hubs can stabilize the morphogenetic pattern and accelerate the morphogenesis. The hub activity exhibits an abrupt change depending on the mutation frequency. When the mutation frequency is small, these hubs suppress all mutations and gene product concentrations do not change; thus, the pattern is stable. When the environmental pressure increases and the population needs new genotypes, genetic drift and other effects increase the mutation frequency. For frequencies larger than a critical value, the hubs turn off and, as a result, many mutations can affect the phenotype. This effect can serve as an engine for evolution. We show that this engine is very effective: the evolution acceleration is an exponential function of gene redundancy. Finally, we show that the Eldredge-Gould concept of punctuated evolution results from the network architecture, which provides fast evolution, control of evolvability, and pattern robustness. To describe analytically the effect of exponential acceleration, we use mathematical methods developed recently for hard combinatorial problems, in particular for the so-called k-SAT problem, and numerical simulations. PMID:24996115
X-38 Experimental Controls Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P+I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
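A single-axis sketch of the MACH-style loop structure described in these two abstracts is given below: a classical P+I outer loop converts attitude error into a body-rate command, and a dynamic-inversion inner loop converts the rate error into a control moment by inverting a simple rigid-body model. The gains, inertia, and plant model are illustrative assumptions and bear no relation to the X-38 flight code.

```python
dt, inertia = 0.02, 500.0                 # time step [s], single-axis inertia [kg m^2]
kp_outer, ki_outer, k_rate = 1.5, 0.2, 4.0

def simulate(att_cmd=0.2, steps=2000):
    att, rate, integ = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Outer P+I attitude loop: attitude error -> body-rate command
        err = att_cmd - att
        integ += err * dt
        rate_cmd = kp_outer * err + ki_outer * integ
        # Inner dynamic-inversion rate loop: desired rate derivative -> moment via the model M = I * omega_dot
        rate_dot_des = k_rate * (rate_cmd - rate)
        moment = inertia * rate_dot_des
        # Plant (here identical to the onboard model, for illustration)
        rate += (moment / inertia) * dt
        att += rate * dt
    return att

print(f"attitude after 40 s: {simulate():.3f} rad (command 0.200 rad)")
```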
On implementation of DCTCP on three-tier and fat-tree data center network topologies.
Zafar, Saima; Bashir, Abeer; Chaudhry, Shafique Ahmad
2016-01-01
A data center is a facility for housing computational and storage systems interconnected through a communication network called the data center network (DCN). Due to tremendous growth in computational power, storage capacity, and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability, and scalability. Although the transmission control protocol (TCP) is a time-tested transport protocol in the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted researchers to propose techniques to improve TCP performance or to design new transport protocols for the DCN. Data center TCP (DCTCP) has emerged as one of the most promising solutions in this domain; it employs the explicit congestion notification feature of TCP to enhance the TCP congestion control algorithm. While DCTCP has been analyzed for a two-tier tree-based DCN topology with traffic between servers in the same rack, which is common in cloud applications, it remains oblivious to the traffic patterns common in university and private enterprise networks, which traverse the complete network interconnect spanning the upper tier layers. We also recognize that DCTCP performance cannot remain unaffected by the underlying DCN architecture, hence there is a need to test and compare DCTCP performance when implemented over diverse DCN architectures. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this research, we simulate two switch-centric DCN architectures, the widely deployed legacy three-tier architecture and the promising fat-tree architecture, using a network simulator and analyze the performance of DCTCP in terms of throughput and delay for realistic traffic patterns. We also examine how DCTCP prevents incast and outcast congestion when realistic DCN traffic patterns are employed in the above-mentioned topologies. Our results show that the underlying DCN architecture significantly impacts DCTCP performance. We find that DCTCP gives optimal performance in the fat-tree topology and is most suitable for large networks.
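For reference, the per-RTT sender-side rule that distinguishes DCTCP from standard TCP can be sketched as follows: the sender maintains an estimate of the fraction of ECN-marked ACKs and cuts its window in proportion to that estimate rather than halving it. The gain value and the marking trace below are illustrative; this is not a full TCP model.

```python
def dctcp_window_update(cwnd, alpha, marked_fraction, g=1.0 / 16):
    """One per-RTT DCTCP update of the marking estimate and congestion window."""
    alpha = (1 - g) * alpha + g * marked_fraction    # EWMA of the marked-ACK fraction
    if marked_fraction > 0:
        cwnd = max(1.0, cwnd * (1 - alpha / 2))      # cut in proportion to congestion extent
    else:
        cwnd += 1.0                                  # standard additive increase
    return cwnd, alpha

cwnd, alpha = 10.0, 0.0
for frac in [0.0, 0.0, 0.5, 1.0, 0.25, 0.0]:          # fraction of marked ACKs per RTT
    cwnd, alpha = dctcp_window_update(cwnd, alpha, frac)
    print(f"marked={frac:.2f}  cwnd={cwnd:5.2f}  alpha={alpha:.3f}")
```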
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
A Practical Software Architecture for Virtual Universities
ERIC Educational Resources Information Center
Xiang, Peifeng; Shi, Yuanchun; Qin, Weijun
2006-01-01
This article introduces a practical software architecture called CUBES, which focuses on system integration and evolvement for online virtual universities. The key of CUBES is a supporting platform that helps to integrate and evolve heterogeneous educational applications developed by different organizations. Both standardized educational…
Integrating DXplain into a clinical information system using the World Wide Web.
Elhanan, G; Socratous, S A; Cimino, J J
1996-01-01
The World Wide Web (WWW) offers a cross-platform environment and standard protocols that enable integration of various applications available on the Internet. The authors use the Web to facilitate interaction between their Web-based Clinical Information System and a decision-support system, DXplain, at the Massachusetts General Hospital, using local architecture and Common Gateway Interface programs. The current application translates patients' laboratory test results into DXplain's terms to generate diagnostic hypotheses. Two different access methods are utilized for this model: Hypertext Transfer Protocol (HTTP) and TCP/IP function calls. While clinical aspects cannot be evaluated as yet, the model demonstrates the potential of Web-based applications for interaction and integration and how local architecture, with a controlled vocabulary server, can further facilitate such integration. This model also serves to demonstrate some of the limitations of current WWW technology and identifies issues such as control over Web resources and their utilization, as well as liability issues, as possible obstacles to further integration.
Modular closed-loop control of diabetes.
Patek, S D; Magni, L; Dassau, E; Karvetski, C; Toffanin, C; De Nicolao, G; Del Favero, S; Breton, M; Man, C Dalla; Renard, E; Zisser, H; Doyle, F J; Cobelli, C; Kovatchev, B P
2012-11-01
Modularity plays a key role in many engineering systems, allowing for plug-and-play integration of components, enhancing flexibility and adaptability, and facilitating standardization. In the control of diabetes, i.e., the so-called "artificial pancreas," modularity allows for the step-wise introduction of (and regulatory approval for) algorithmic components, starting with subsystems for assured patient safety and followed by higher layer components that serve to modify the patient's basal rate in real time. In this paper, we introduce a three-layer modular architecture for the control of diabetes, consisting of a sensor/pump interface module (IM), a continuous safety module (CSM), and a real-time control module (RTCM), which separates the functions of insulin recommendation (postmeal insulin for mitigating hyperglycemia) and safety (prevention of hypoglycemia). In addition, we provide details of instances of all three layers of the architecture: the APS© serving as the IM, the safety supervision module (SSM) serving as the CSM, and the range correction module (RCM) serving as the RTCM. We evaluate the performance of the integrated system via in silico preclinical trials, demonstrating 1) the ability of the SSM to reduce the incidence of hypoglycemia under nonideal operating conditions and 2) the ability of the RCM to reduce glycemic variability.
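A schematic sketch of the three-layer data flow described above is given below: the real-time control module proposes an insulin rate, the continuous safety module may attenuate or veto it to prevent hypoglycemia, and the interface module routes the result to the pump. The placeholder control law and safety thresholds are invented for illustration and are not the clinical algorithms (APS, SSM, RCM) evaluated in the paper.

```python
def real_time_control(glucose_mgdl, basal_u_per_h):
    """RTCM stand-in: adjust the basal rate up or down around a target (placeholder law)."""
    return max(0.0, basal_u_per_h + 0.01 * (glucose_mgdl - 120.0))

def continuous_safety(glucose_mgdl, proposed_rate):
    """CSM stand-in: cut insulin when glucose is low or trending toward hypoglycemia."""
    if glucose_mgdl < 70.0:
        return 0.0                      # suspend delivery
    if glucose_mgdl < 90.0:
        return 0.5 * proposed_rate      # attenuate delivery
    return proposed_rate

def interface_module(glucose_mgdl, basal_u_per_h=1.0):
    """IM stand-in: route the sensor reading through RTCM then CSM, return the pump command."""
    return continuous_safety(glucose_mgdl, real_time_control(glucose_mgdl, basal_u_per_h))

for g in [65, 85, 120, 180]:
    print(f"glucose {g:3d} mg/dL -> pump rate {interface_module(g):.2f} U/h")
```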
NASA Technical Reports Server (NTRS)
Boriakoff, Valentin
1994-01-01
The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).
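The pipeline of identical inner-product processors can be illustrated with a small simulation: below, a linear array of multiply-accumulate cells computes the DFT as a matrix-vector product, with one sample reaching one cell per clock tick. This shows the inner-product-processor dataflow only; it evaluates the DFT directly rather than reproducing the actual FFT systolic schedule of the project.

```python
import numpy as np

def systolic_dft(x):
    N = len(x)
    W = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)  # DFT matrix
    acc = np.zeros(N, dtype=complex)        # one accumulator per inner-product cell
    for clock in range(2 * N - 1):          # samples ripple through the linear array
        for cell in range(N):
            t = clock - cell                # index of the sample reaching this cell now
            if 0 <= t < N:
                acc[cell] += W[cell, t] * x[t]   # one multiply-accumulate per clock per cell
    return acc

x = np.array([1.0, 2.0, 0.5, -1.0], dtype=complex)
print(np.allclose(systolic_dft(x), np.fft.fft(x)))   # True: the pipelined result matches the FFT
```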
The TENOR Architecture for Advanced Distributed Learning and Intelligent Training
2002-01-01
called TENOR, for Training Education Network on Request. There have been a number of recent learning systems developed that leverage off the Internet … (C. Tibaudo, J. Kristl and J. Schroeder, "The TENOR Architecture for Advanced Distributed Learning and Intelligent Training," AIAA 2002-1054.)
Predictive Thermal Control Applied to HabEx
NASA Technical Reports Server (NTRS)
Brooks, Thomas E.
2017-01-01
Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change per 10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).
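The abstract does not describe the PTC algorithm itself; the sketch below only illustrates the general idea of prediction-based thermal control, choosing heater power so that a simple thermal model's one-step prediction lands on the setpoint. The model, constants, units, and saturation limit are invented for illustration.

```python
def predictive_heater_power(T, T_set, T_env, dt=1.0, C=200.0, k_loss=0.5, p_max=20.0):
    """Choose power so the model T' = T + (dt/C) * (P - k_loss*(T - T_env)) reaches T_set."""
    p = C * (T_set - T) / dt + k_loss * (T - T_env)
    return min(max(p, 0.0), p_max)          # respect heater saturation limits

T, T_set, T_env = 19.0, 20.0, 15.0          # current, setpoint, and environment temperatures [degC]
for step in range(5):
    P = predictive_heater_power(T, T_set, T_env)
    T += (1.0 / 200.0) * (P - 0.5 * (T - T_env))   # the same simple model acts as the "plant"
    print(f"step {step}: power {P:5.2f} W, temperature {T:.4f} degC")
```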
Predictive thermal control applied to HabEx
NASA Astrophysics Data System (ADS)
Brooks, Thomas E.
2017-09-01
Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change/10 minutes for 10-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).
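To make the idea of predictive thermal control concrete, here is a minimal sketch using a hypothetical single-node lumped thermal model; the parameter values and the one-step prediction law are purely illustrative and are not taken from the HabEx analysis or the XRCF test.

```python
import numpy as np

# Hypothetical lumped-parameter values; the real analysis is far more detailed.
C, R = 500.0, 2.0            # thermal capacitance [J/K], resistance to environment [K/W]
T_env, T_set = 270.0, 293.0  # environment and setpoint temperatures [K]
P_max, dt = 50.0, 1.0        # heater limit [W] and control period [s]

def predictive_heater_power(T):
    """Choose the heater power that the model predicts will land on the setpoint
    at the next step, then saturate it to the heater's physical limits."""
    P = (T_set - T) * C / dt + (T - T_env) / R
    return float(np.clip(P, 0.0, P_max))

T = 285.0
for _ in range(600):                      # simulate ten minutes of closed-loop control
    P = predictive_heater_power(T)
    T += dt / C * (P - (T - T_env) / R)   # plant: same first-order model plus heater
print(round(T, 3))  # settles at the 293 K setpoint within model accuracy
```

The defining feature is that the heater command is chosen from a model prediction of the future temperature rather than from the instantaneous error alone.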
Hypertext Interchange Using ICA.
ERIC Educational Resources Information Center
Rada, Roy; And Others
1995-01-01
Discusses extended ICA (Integrated Chameleon Architecture), a public domain toolset for generating text-to-hypertext translators. A system called SGML-MUCH has been developed using E-ICA (Extended Integrated Chameleon Architecture) and is presented as a case study with converters for the hypertext systems MUCH, Guide, Hyperties, and Toolbook.…
ERIC Educational Resources Information Center
McClure, Connie
2010-01-01
This article describes how the author teaches a fourth- and fifth-grade unit on architecture called the Art and Science of Planning Buildings. Rockville, Indiana has fine examples of architecture ranging from log cabins, classic Greek columns, Victorian houses, a mission-style theater, and Frank Lloyd Wright prairie-style homes. After reading…
Comparing Architectural Styles for Service-Oriented Architectures - a REST vs. SOAP Case Study
NASA Astrophysics Data System (ADS)
Becker, Jörg; Matzner, Martin; Müller, Oliver
Two architectural styles are currently heavily discussed regarding the design of service-oriented architectures (SOA). In this chapter we compare these two alternative styles: the SOAP style, with procedural designs similar to remote procedure calls, and the REST style, with loosely coupled services similar to resources of the World Wide Web. We introduce the case of a business network consisting of manufacturers and service providers in the electronics industry for deriving a set of requirements towards a specific SOA implementation. For each architectural style we present a concrete SOA design and evaluate it against the defined set of requirements.
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. NN3 is trained by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named the 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.
ERIC Educational Resources Information Center
Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali
2016-01-01
In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…
From the Ground Up: Art in American Built Environment Education.
ERIC Educational Resources Information Center
Guilfoil, Joanne K.
2000-01-01
Provides a case for teaching children about local architecture. Describes a specific example called the Kentucky Project as a humanist approach to built environmental education that enabled middle and high school students to study their architectural heritage through a program of videos and related teaching materials. (CMK)
77 FR 60680 - Development of the Nationwide Interoperable Public Safety Broadband Network
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... public comment on the conceptual network architecture presentation made at the FirstNet Board of... business plan considerations. NTIA also seeks comment on the general concept of how to develop applications... network based on a single, nationwide network architecture called for under the Middle Class Tax Relief...
47 CFR 51.5 - Terms and definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... The Communications Act of 1934, as amended. Advanced intelligent network. Advanced intelligent network is a telecommunications network architecture in which call processing, call routing, and network... carrier's network. Advanced services. The term “advanced services” is defined as high speed, switched...
A Standard Platform for Testing and Comparison of MDAO Architectures
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.
2012-01-01
The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
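The abstract gives no code, but the MDF architecture it benchmarks can be sketched on a toy coupled problem. The example below uses plain SciPy (not OpenMDAO) and a made-up two-discipline coupling; it shows the defining trait of MDF, namely that every objective evaluation runs a full multidisciplinary analysis to convergence.

```python
import numpy as np
from scipy.optimize import minimize

def mda(x, tol=1e-10):
    """Multidisciplinary analysis: converge the coupled disciplines
    y1 = x**2 + 2.0 - 0.2*y2 and y2 = sqrt(max(y1, 0)) by fixed-point iteration."""
    y1, y2 = 1.0, 1.0
    for _ in range(200):
        y1_new = x**2 + 2.0 - 0.2 * y2
        y2_new = np.sqrt(max(y1_new, 0.0))
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1_new, y2_new

def objective(xv):
    """MDF: every objective evaluation sees a fully converged coupled analysis."""
    y1, y2 = mda(xv[0])
    return xv[0]**2 + y1 + np.exp(-y2)

res = minimize(objective, x0=[2.0], method="SLSQP")
print(res.x, res.fun)
```

Architectures such as IDF instead expose the coupling variables to the optimizer with consistency constraints, which is exactly the kind of trade-off the proposed test suite is meant to quantify.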
Verified OS Interface Code Synthesis
2016-12-01
in this case we are using the ARMv7 processor architecture). The application accomplishes this task by issuing the swi ("software interrupt...manual version 4.0.0) on the ARM architecture. To alleviate this problem, we developed an XML-based domain specific language (DSL) in which each...Untyped Retype Table 2.1: seL4 Architecture Independent System Calls. of r2, r3, r4 and r5 into the message registers of the thread's IPC buffer and
The signs of life in architecture.
Gruber, Petra
2008-06-01
Engineers, designers and architects often look to nature for inspiration. Research on 'natural constructions' aims at innovation and the improvement of architectural quality. The introduction of life-sciences terminology in the context of architecture delivers new perspectives towards innovation in architecture and design. The investigation focuses on the analogies between nature and architecture. Alongside other principles found in living nature, an architectural interpretation of the so-called 'signs of life' that characterize living systems is presented. Selected architectural projects that have applied specific characteristics of life, whether on purpose or not, show the state of development in this field and open up future challenges. The survey includes famous built architecture as well as students' design programs carried out under the supervision of the author at the Department of Design and Building Construction at the Vienna University of Technology.
Design and evaluation of cellular power converter architectures
NASA Astrophysics Data System (ADS)
Perreault, David John
Power electronic technology plays an important role in many energy conversion and storage applications, including machine drives, power supplies, frequency changers and UPS systems. Increases in performance and reductions in cost have been achieved through the development of higher performance power semiconductor devices and integrated control devices with increased functionality. Manufacturing techniques, however, have changed little. High power is typically achieved by paralleling multiple die in a single package, producing the physical equivalent of a single large device. Consequently, both the device package and the converter in which the device is used continue to require large, complex mechanical structures, and relatively sophisticated heat transfer systems. An alternative to this approach is the use of a cellular power converter architecture, which is based upon the parallel connection of a large number of quasi-autonomous converters, called cells, each of which is designed for a fraction of the system rating. The cell rating is chosen such that single-die devices in inexpensive packages can be used, and the cell fabricated with an automated assembly process. The use of quasi-autonomous cells means that system performance is not compromised by the failure of a cell. This thesis explores the design of cellular converter architectures with the objective of achieving improvements in performance, reliability, and cost over conventional converter designs. New approaches are developed and experimentally verified for highly distributed control of cellular converters, including methods for ripple cancellation and current-sharing control. The performance of these techniques is quantified, and their dynamics are analyzed. Cell topologies suitable to the cellular architecture are investigated, and their use for systems in the 5-500 kVA range is explored. The design, construction, and experimental evaluation of a 6 kW cellular switched-mode rectifier is also addressed. This cellular system implements entirely distributed control, and achieves performance levels unattainable with an equivalent single converter. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
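The thesis's distributed current-sharing methods are not spelled out in the abstract; as a stand-in, the sketch below shows the simpler droop-sharing idea for paralleled cells, with hypothetical cell parameters, to illustrate why cells with manufacturing spread can still share load roughly equally.

```python
import numpy as np

# Hypothetical numbers: four paralleled cells feeding one bus, each with a slightly
# different no-load voltage and output resistance (manufacturing spread).
V_nom = np.array([48.10, 48.00, 47.95, 48.05])   # per-cell no-load voltage [V]
R_out = np.array([0.020, 0.022, 0.019, 0.021])   # per-cell output resistance [ohm]
R_droop = 0.25                                    # virtual droop resistance [ohm]
I_load = 40.0                                     # total load current [A]

# Each cell behaves as V_bus = V_nom[i] - (R_out[i] + R_droop) * I[i].
# Solving the parallel network for the common bus voltage:
G = 1.0 / (R_out + R_droop)                       # per-cell conductance
V_bus = (np.sum(G * V_nom) - I_load) / np.sum(G)
I_cell = G * (V_nom - V_bus)

print(I_cell)        # with droop, currents stay within a few percent of I_load/4
print(I_cell.sum())  # sums to the 40 A load
```

Without the virtual droop term, the small spread in no-load voltages divided by the tiny output resistances would produce grossly unequal cell currents.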
NASA Technical Reports Server (NTRS)
Hill, Randall W., Jr.
1990-01-01
The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.
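As a rough sketch of what a 'concept network' might look like as a data structure (hypothetical names, not the HyperCLIPS implementation), consider a prerequisite graph used to select the next concept to present to the student.

```python
# Hypothetical concept network: nodes are instructional concepts, edges are
# prerequisite links; navigation picks the next unmastered concept whose
# prerequisites the student has already mastered.
concepts = {
    "valves":         {"prereqs": []},
    "pumps":          {"prereqs": ["valves"]},
    "flow_control":   {"prereqs": ["valves", "pumps"]},
    "fault_recovery": {"prereqs": ["flow_control"]},
}

def next_concept(mastered):
    """Return the next teachable concept, or None when the lesson is complete."""
    for name, node in concepts.items():
        if name not in mastered and all(p in mastered for p in node["prereqs"]):
            return name
    return None

student = {"valves"}
print(next_concept(student))   # -> "pumps"
```

The same structure can drive test generation and student modelling by recording which nodes the student has demonstrated mastery of.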
Modular data acquisition system and its use in gas-filled detector readout at ESRF
NASA Astrophysics Data System (ADS)
Sever, F.; Epaud, F.; Poncet, F.; Grave, M.; Rey-Bakaikoa, V.
1996-09-01
Since 1992, 18 ESRF beamlines have been open to users. Although the data acquisition requirements vary considerably from one beamline to another, we are trying to implement a modular data acquisition system architecture that fits the maximum number of acquisition projects at ESRF. Common to all of these systems are large acquisition memories and the requirement to visualize the data during an acquisition run and to transfer them quickly after the run to safe storage. We developed a general memory API handling the acquisition memory and its organization, and another library that provides calls for transferring the data over TCP/IP sockets. Utility programs built on these libraries include the 'online display' program and the 'data transfer' program. The data transfer program, as well as an acquisition control program, relies on our well-established 'device server model', which was originally designed for the machine control system and then successfully reused in beamline control systems. In the second half of this paper, the acquisition system for a 2D gas-filled detector is presented, which is one of the first concrete examples using the proposed modular data acquisition architecture.
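A minimal sketch of the kind of socket-based transfer such a library might expose is given below, in Python rather than the C libraries described, and with a made-up length-prefix framing.

```python
import socket
import struct

def send_buffer(host: str, port: int, data: bytes) -> None:
    """Send one acquisition buffer: a 4-byte big-endian length header, then the payload."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">I", len(data)) + data)

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before the full buffer arrived")
        buf += chunk
    return buf

def recv_buffer(conn: socket.socket) -> bytes:
    """Receive one length-prefixed buffer from an accepted connection."""
    (length,) = struct.unpack(">I", _recv_exact(conn, 4))
    return _recv_exact(conn, length)
```

The framing is the only protocol assumption here; the real libraries would also carry metadata about the acquisition memory organization.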
Using Curriculum Architecture in Workplace Learning
ERIC Educational Resources Information Center
Kaufmann, Ken
2005-01-01
While learning is often designed and executed as if it stood alone, it rarely exists in isolation. If more than one learning event is offered, a relationship between units exists and should be defined. This relationship calls for an architecture of curriculum that defines audience, content, and delivery within a context of performance. Curriculum…
An Object-Oriented Architecture for Intelligent Tutoring Systems. Technical Report No. LSP-3.
ERIC Educational Resources Information Center
Bonar, Jeffrey; And Others
This technical report describes a generic architecture for building intelligent tutoring systems which is developed around objects that represent the knowledge elements to be taught by the tutor. Each of these knowledge elements, called "bites," inherits both a knowledge organization describing the kind of knowledge represented and…
Nagasaki, Masao; Doi, Atsushi; Matsuno, Hiroshi; Miyano, Satoru
2004-01-01
Research on modeling and simulation of complex biological systems is becoming increasingly important in Systems Biology. In this respect, we previously developed the Hybrid Functional Petri Net (HFPN), derived from the existing Petri net because of its intuitive graphical representation and its capabilities for mathematical analysis. However, in the process of modeling metabolic, gene regulatory, or signal transduction pathways with this architecture, we realized that three extensions of HFPN are necessary for modeling biological systems with a Petri net based architecture: (i) an entity should be extended to contain more than one value, (ii) an entity should be extended to handle other primitive types, e.g. boolean and string, and (iii) an entity should be extended to handle a more advanced type, called object, that consists of variables and methods. To deal with this, we define a new enhanced Petri net called the hybrid functional Petri net with extension (HFPNe). To demonstrate the effectiveness of the enhancements, we model and simulate with HFPNe four biological processes that are difficult to represent with the previous architecture, HFPN.
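A toy illustration of extension (iii), an object-valued entity updated by continuous transitions, might look like the following; this is a hypothetical miniature, not the HFPNe implementation.

```python
# Hypothetical miniature of the HFPNe idea: a place holds an object-valued token
# (variables plus methods), and continuous transitions update it at each time step.
class GeneToken:
    def __init__(self):
        self.mrna = 0.0          # numeric variable
        self.active = True       # boolean variable
        self.name = "lacZ"       # string variable

    def transcribe(self, rate, dt):
        if self.active:
            self.mrna += rate * dt

    def degrade(self, k, dt):
        self.mrna -= k * self.mrna * dt

def simulate(steps=1000, dt=0.01):
    token = GeneToken()
    for _ in range(steps):
        token.transcribe(rate=2.0, dt=dt)   # continuous "production" transition
        token.degrade(k=0.5, dt=dt)         # continuous "degradation" transition
    return token.mrna

print(round(simulate(), 3))   # approaches the steady state rate/k = 4.0
```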
GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brent, G. A.
1978-01-01
A development study sponsored by NASA, completed in July 1977, proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers would compile and display in-flight information for the pilot. A processor architecture called the Team Architecture was proposed; this is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/I simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs, and listings.
Flight Software Development for the CHEOPS Instrument with the CORDET Framework
NASA Astrophysics Data System (ADS)
Cechticky, V.; Ottensamer, R.; Pasetti, A.
2015-09-01
CHEOPS is an ESA S-class mission dedicated to the precise measurement of radii of already known exoplanets using ultra-high precision photometry. The instrument flight software controlling the instrument and handling the science data is developed by the University of Vienna using the CORDET Framework offered by P&P Software GmbH. The CORDET Framework provides a generic software infrastructure for PUS-based applications. This paper describes how the framework is used for the CHEOPS application software to provide a consistent solution for the communication and control services, event handling and FDIR procedures. This approach is innovative in four respects: (a) it is a true third-party reuse; (b) re-use is done at specification, validation and code level; (c) the re-usable assets and their qualification data package are entirely open-source; (d) re-use is based on call-back, with the application developer providing functions which are called by the reusable architecture.
A modular microfluidic architecture for integrated biochemical analysis.
Shaikh, Kashan A; Ryu, Kee Suk; Goluch, Edgar D; Nam, Jwa-Min; Liu, Juewen; Thaxton, C Shad; Chiesl, Thomas N; Barron, Annelise E; Lu, Yi; Mirkin, Chad A; Liu, Chang
2005-07-12
Microfluidic laboratory-on-a-chip (LOC) systems based on a modular architecture are presented. The architecture is conceptualized on two levels: a single-chip level and a multiple-chip module (MCM) system level. At the individual chip level, a multilayer approach segregates components belonging to two fundamental categories: passive fluidic components (channels and reaction chambers) and active electromechanical control structures (sensors and actuators). This distinction is explicitly made to simplify the development process and minimize cost. Components belonging to these two categories are built separately on different physical layers and can communicate fluidically via cross-layer interconnects. The chip that hosts the electromechanical control structures is called the microfluidic breadboard (FBB). A single LOC module is constructed by attaching a chip comprised of a custom arrangement of fluid routing channels and reactors (passive chip) to the FBB. Many different LOC functions can be achieved by using different passive chips on an FBB with a standard resource configuration. Multiple modules can be interconnected to form a larger LOC system (MCM level). We demonstrated the utility of this architecture by developing systems for two separate biochemical applications: one for detection of protein markers of cancer and another for detection of metal ions. In the first case, free prostate-specific antigen was detected at 500 aM concentration by using a nanoparticle-based bio-bar-code protocol on a parallel MCM system. In the second case, we used a DNAzyme-based biosensor to identify the presence of Pb(2+) (lead) at a sensitivity of 500 nM in <1 nl of solution.
Force-reflective teleoperated system with shared and compliant control capabilities
NASA Technical Reports Server (NTRS)
Szakaly, Z.; Kim, W. S.; Bejczy, A. K.
1989-01-01
The force-reflecting teleoperator breadboard is described. It is the first system among available Research and Development systems with the following combined capabilities: (1) The master input device is not a replica of the slave arm. It is a general purpose device which can be applied to the control of different robot arms through proper mathematical transformations. (2) Force reflection generated in the master hand controller is referenced to forces and moments measured by a six DOF force-moment sensor at the base of the robot hand. (3) The system permits a smooth spectrum of operations between full manual, shared manual and automatic, and full automatic (called traded) control. (4) The system can be operated with variable compliance or stiffness in force-reflecting control. Some of the key points of the system are the data handling and computing architecture, the communication method, and the handling of mathematical transformations. The architecture is a fully synchronized pipeline. The communication method achieves optimal use of a parallel communication channel between the local and remote computing nodes. A time delay box is also implemented in this communication channel permitting experiments with up to 8 sec time delay. The mathematical transformations are computed faster than 1 msec so that control at each node can be operated at 1 kHz servo rate without interpolation. This results in an overall force-reflecting loop rate of 200 Hz.
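The loop structure described above can be caricatured in a one-axis sketch; the gains, stiffness, and rates below are illustrative, not the breadboard's values.

```python
import numpy as np

# Hypothetical one-axis sketch of the force-reflecting loop structure: the master
# position commands the slave, and the sensed contact force is scaled back to the
# master hand controller at a fixed rate.
dt = 0.005                    # 200 Hz end-to-end loop period
k_env, wall = 2000.0, 0.10    # environment stiffness [N/m] and wall position [m]
force_scale = 0.5             # fraction of sensed force reflected to the operator

f_max = 0.0
for step in range(400):                                      # two seconds of motion
    x_master = 0.15 * np.sin(np.pi * step * dt)              # operator input trajectory
    x_slave = x_master                                       # position command to the slave
    f_contact = k_env * max(0.0, x_slave - wall)             # force-torque sensor, one axis
    f_reflected = force_scale * f_contact                    # command to the hand controller
    f_max = max(f_max, f_reflected)
print(round(f_max, 1))   # peak reflected force [N] during the contact
```

In the real system the mathematical transformations between the dissimilar master and slave, and the pipeline between local and remote nodes, sit inside this loop while preserving the fixed servo rate.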
Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed
NASA Technical Reports Server (NTRS)
Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr
2016-01-01
The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing the Hybrid-Electric Integrated Systems Testbed (HEIST) as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid electric and distributed electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid electric and distributed electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high-performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.
Modular Closed-Loop Control of Diabetes
Magni, L.; Dassau, E.; Hughes-Karvetski, C.; Toffanin, C.; De Nicolao, G.; Del Favero, S.; Breton, M.; Man, C. Dalla; Renard, E.; Zisser, H.; Doyle, F. J.; Cobelli, C.; Kovatchev, B. P.
2015-01-01
Modularity plays a key role in many engineering systems, allowing for plug-and-play integration of components, enhancing flexibility and adaptability, and facilitating standardization. In the control of diabetes, i.e., the so-called “artificial pancreas,” modularity allows for the step-wise introduction of (and regulatory approval for) algorithmic components, starting with subsystems for assured patient safety and followed by higher layer components that serve to modify the patient’s basal rate in real time. In this paper, we introduce a three-layer modular architecture for the control of diabetes, consisting of a sensor/pump interface module (IM), a continuous safety module (CSM), and a real-time control module (RTCM), which separates the functions of insulin recommendation (postmeal insulin for mitigating hyperglycemia) and safety (prevention of hypoglycemia). In addition, we provide details of instances of all three layers of the architecture: the APS© serving as the IM, the safety supervision module (SSM) serving as the CSM, and the range correction module (RCM) serving as the RTCM. We evaluate the performance of the integrated system via in silico preclinical trials, demonstrating 1) the ability of the SSM to reduce the incidence of hypoglycemia under nonideal operating conditions and 2) the ability of the RCM to reduce glycemic variability. PMID:22481809
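A toy sketch of the layered idea, with made-up thresholds and doses rather than the actual SSM and RCM algorithms, might look like this: the real-time layer proposes insulin, the safety layer attenuates it when hypoglycemia is predicted, and the interface layer forwards the result to the pump.

```python
# Toy sketch of the three-layer idea (not the actual SSM/RCM algorithms).
def rtcm_range_correction(glucose, target=120.0, correction_factor=50.0):
    """Propose a correction bolus [U] only when glucose is well above the target range."""
    return max(0.0, (glucose - target) / correction_factor) if glucose > 180.0 else 0.0

def csm_safety_supervision(glucose, trend, proposed):
    """Attenuate or block insulin when current or predicted glucose is low."""
    predicted = glucose + 30.0 * trend          # crude 30-minute linear projection
    if glucose < 70.0 or predicted < 80.0:
        return 0.0                              # suspend delivery
    if predicted < 110.0:
        return 0.5 * proposed                   # deliver half the request
    return proposed

def im_send_to_pump(dose):
    print(f"pump command: {dose:.2f} U")

glucose, trend = 210.0, -1.5                    # mg/dL and mg/dL per minute
im_send_to_pump(csm_safety_supervision(glucose, trend, rtcm_range_correction(glucose)))
```

The point of the layering is that the safety module can be approved and deployed on its own, and later modules only ever talk to the pump through it.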
NASA Technical Reports Server (NTRS)
Skoog, Mark A.
2016-01-01
NASA's Armstrong Flight Research Center has been engaged in the development of highly automatic safety systems for aviation since the mid-1980s. For the past three years, under Seedling and Center Innovation funding, this work has moved toward the development of a software architecture applicable to autonomous safety. This work is now broadening and accelerating to address the airworthiness issues surrounding making a case for trustworthy autonomy. This software architecture is called the expandable variable-autonomy architecture (EVAA) and utilizes a run-time assurance approach to safety assurance.
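The abstract gives no EVAA internals; the generic run-time assurance pattern it refers to can be sketched as a monitor that swaps in a trusted recovery controller whenever the experimental command would violate a safety boundary. All names and limits below are hypothetical.

```python
# Generic run-time assurance pattern (a sketch, not the actual EVAA logic):
# a monitor checks a safety boundary each frame and substitutes a trusted
# recovery controller when the experimental command looks unsafe.
FLOOR_ALTITUDE = 500.0   # ft AGL, hypothetical safety boundary

def experimental_controller(state):
    return state["autonomy_cmd"]            # untrusted, possibly learned, command

def recovery_controller(state):
    return {"pitch_cmd": 10.0}              # trusted climb command

def monitor_ok(state, cmd):
    predicted_alt = state["alt"] + 5.0 * state["vspeed"]   # crude 5-second projection
    return predicted_alt > FLOOR_ALTITUDE and abs(cmd["pitch_cmd"]) < 25.0

def evaa_frame(state):
    cmd = experimental_controller(state)
    return cmd if monitor_ok(state, cmd) else recovery_controller(state)

print(evaa_frame({"alt": 520.0, "vspeed": -15.0, "autonomy_cmd": {"pitch_cmd": -5.0}}))
```

The airworthiness argument then rests on the much smaller trusted monitor and recovery path rather than on the full experimental autonomy.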
Systems Architecture for Fully Autonomous Space Missions
NASA Technical Reports Server (NTRS)
Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)
2002-01-01
The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales and the need to incorporate corresponding autonomy technologies within reasonable cost necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become a thing of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied-in closely to a common thread that enables the smooth transitioning between program phases. The application of commercial software development techniques lays the foundation for delivery of product-oriented flight software modules and models. Software can then be readily applied to support the on-board autonomy required for mission self-management. An on-board intelligent system, based on advanced scripting languages, facilitates the mission autonomy required to offload ground system resources, and enables the spacecraft to manage itself safely through an efficient and effective process of reactive planning, science data acquisition, synthesis, and transmission to the ground. Autonomous ground systems in turn coordinate and support scheduled contact times with the spacecraft. Specific autonomy software modules on-board include mission and science planners, instrument and subsystem control, and fault tolerance response software, all residing within a distributed computing environment supported through the flight LAN. Autonomy also requires the minimization of human intervention between users on the ground and the spacecraft, and hence calls for the elimination of the traditional operations control center as a funnel for data manipulation. Basic goal-oriented commands are sent directly from the user to the spacecraft through a distributed internet-based payload operations "center". The ensuing architecture calls for the use of spacecraft as point extensions on the Internet. This paper will detail the system architecture implementation chosen to enable cost-effective autonomous missions with applicability to a broad range of conditions.
It will define the structure needed for implementation of such missions, including software and hardware infrastructures. The overall architecture is then laid out as a common thread in the mission life cycle from formulation through implementation and flight operations.
An iconic programming language for sensor-based robots
NASA Technical Reports Server (NTRS)
Gertz, Matthew; Stewart, David B.; Khosla, Pradeep K.
1993-01-01
In this paper we describe an iconic programming language called Onika for sensor-based robotic systems. Onika is both modular and reconfigurable and can be used with any system architecture and real-time operating system. Onika is also a multi-level programming environment wherein tasks are built by connecting a series of icons which, in turn, can be defined in terms of other icons at the lower levels. Expert users are also allowed to use control block form to define servo tasks. The icons in Onika are both shape and color coded, like the pieces of a jigsaw puzzle, thus providing a form of error control in the development of high level applications.
3. PHOTOCOPY OF DRAWING (1960 ARCHITECTURAL DRAWING BY THE RALPH ...
3. PHOTOCOPY OF DRAWING (1960 ARCHITECTURAL DRAWING BY THE RALPH M. PARSONS COMPANY) FLOOR PLAN, ELEVATIONS, AND SECTION FOR THE SAMOS TECHNICAL SUPPORT BUILDING (BLDG. 761; NOW CALLED SLC-3 AIR FORCE BUILDING), SHEET A14 - Vandenberg Air Force Base, Space Launch Complex 3, SLC-3 Air Force Building, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Component Architectures and Web-Based Learning Environments
ERIC Educational Resources Information Center
Ferdig, Richard E.; Mishra, Punya; Zhao, Yong
2004-01-01
The Web has caught the attention of many educators as an efficient communication medium and content delivery system. But we feel there is another aspect of the Web that has not been given the attention it deserves. We call this aspect of the Web its "component architecture." Briefly it means that on the Web one can develop very complex…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled...collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is...with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to
Virtualization - A Key Cost Saver in NASA Multi-Mission Ground System Architecture
NASA Technical Reports Server (NTRS)
Swenson, Paul; Kreisler, Stephen; Sager, Jennifer A.; Smith, Dan
2014-01-01
With science team budgets being slashed, and a lack of adequate facilities for science payload teams to operate their instruments, there is a strong need for innovative new ground systems that provide necessary levels of capability (processing power, system availability, and redundancy) while maintaining a small footprint in terms of physical space, power utilization and cooling. The ground system architecture being presented is based on heritage from several other projects currently in development or operations at Goddard, but was designed and built specifically to meet the needs of the Science and Planetary Operations Control Center (SPOCC) as a low-cost payload command, control, planning and analysis operations center. However, this SPOCC architecture was designed to be generic enough to be re-used partially or in whole by other labs and missions (since its inception that has already happened in several cases!) The SPOCC architecture leverages a highly available VMware-based virtualization cluster with shared SAS Direct-Attached Storage (DAS) to provide an extremely high-performing, low-power-utilization and small-footprint compute environment that provides Virtual Machine resources shared among the various tenant missions in the SPOCC. The storage is also expandable, allowing future missions to chain up to 7 additional 2U chassis of storage at an extremely competitive cost if they require additional archive or virtual machine storage space. The software architecture provides a fully-redundant GMSEC-based message bus architecture based on the ActiveMQ middleware to track all health and safety status within the SPOCC ground system. All virtual machines utilize the GMSEC system agents to report system host health over the GMSEC bus, and spacecraft payload health is monitored using the Hammers Integrated Test and Operations System (ITOS) Galaxy Telemetry and Command (TC) system, which performs near-real-time limit checking and data processing on the downlinked data stream and injects messages into the GMSEC bus that are monitored to automatically page the on-call operator or Systems Administrator (SA) when an off-nominal condition is detected. This architecture, like the LTSP thin clients, is shared across all tenant missions. Other required IT security controls are implemented at the ground system level, including physical access controls, logical system-level authentication and authorization management, auditing and reporting, network management and a NIST 800-53 FISMA-Moderate IT Security Plan, Risk Assessment, and Contingency Plan, helping multiple missions share the cost of compliance with agency-mandated directives. The SPOCC architecture provides science payload control centers and backup mission operations centers with a cost-effective, standardized approach to virtualizing and monitoring resources that were traditionally multiple racks full of physical machines. The increased agility in deploying new virtual systems and thin client workstations can provide significant savings in personnel costs for maintaining the ground system. The cost savings in procurement, power, rack footprint and cooling as well as the shared multi-mission design greatly reduce upfront cost for missions moving into the facility. Overall, the authors hope that this architecture will become a model for how future NASA operations centers are constructed!
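For illustration only: publishing a host-health style message to an ActiveMQ topic can be done over STOMP with the third-party stomp.py package, as sketched below. This is an assumption made for the example; it is not the GMSEC API, and the host, credentials, topic name, and message fields are all hypothetical.

```python
import json
import time
import stomp   # third-party STOMP client (pip install stomp.py); not the GMSEC API

# Publish a host-health message to an ActiveMQ topic, loosely in the spirit of the
# GMSEC system-agent heartbeats described above.
conn = stomp.Connection([("activemq.example.gov", 61613)])
conn.connect("spocc", "changeme", wait=True)

message = {
    "msg-type": "HEARTBEAT",
    "hostname": "vm-itos-01",
    "cpu-util-pct": 12.4,
    "publish-time": time.time(),
}
conn.send(destination="/topic/SPOCC.SYS.HEALTH", body=json.dumps(message))
conn.disconnect()
```

A monitoring process subscribed to the same topic would apply limit checks and trigger the paging workflow when a heartbeat goes stale or reports an off-nominal value.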
CNV analysis in Tourette syndrome implicates large genomic rearrangements in COL8A1 and NRXN1.
Nag, Abhishek; Bochukova, Elena G; Kremeyer, Barbara; Campbell, Desmond D; Muller, Heike; Valencia-Duarte, Ana V; Cardona, Julio; Rivas, Isabel C; Mesa, Sandra C; Cuartas, Mauricio; Garcia, Jharley; Bedoya, Gabriel; Cornejo, William; Herrera, Luis D; Romero, Roxana; Fournier, Eduardo; Reus, Victor I; Lowe, Thomas L; Farooqi, I Sadaf; Mathews, Carol A; McGrath, Lauren M; Yu, Dongmei; Cook, Ed; Wang, Kai; Scharf, Jeremiah M; Pauls, David L; Freimer, Nelson B; Plagnol, Vincent; Ruiz-Linares, Andrés
2013-01-01
Tourette syndrome (TS) is a neuropsychiatric disorder with a strong genetic component. However, the genetic architecture of TS remains uncertain. Copy number variation (CNV) has been shown to contribute to the genetic make-up of several neurodevelopmental conditions, including schizophrenia and autism. Here we describe CNV calls using SNP chip genotype data from an initial sample of 210 TS cases and 285 controls ascertained in two Latin American populations. After extensive quality control, we found that cases (N = 179) have a significant excess (P = 0.006) of large CNV (>500 kb) calls compared to controls (N = 234). Amongst 24 large CNVs seen only in the cases, we observed four duplications of the COL8A1 gene region. We also found two cases with ∼400 kb deletions involving NRXN1, a gene previously implicated in neurodevelopmental disorders, including TS. Follow-up using multiplex ligation-dependent probe amplification (and including 53 more TS cases) validated the CNV calls and identified additional patients with rearrangements in COL8A1 and NRXN1, but none in controls. Examination of available parents indicates that two out of three NRXN1 deletions detected in the TS cases are de-novo mutations. Our results are consistent with the proposal that rare CNVs play a role in TS aetiology and suggest a possible role for rearrangements in the COL8A1 and NRXN1 gene regions.
CNV Analysis in Tourette Syndrome Implicates Large Genomic Rearrangements in COL8A1 and NRXN1
Nag, Abhishek; Bochukova, Elena G.; Kremeyer, Barbara; Campbell, Desmond D.; Muller, Heike; Valencia-Duarte, Ana V.; Cardona, Julio; Rivas, Isabel C.; Mesa, Sandra C.; Cuartas, Mauricio; Garcia, Jharley; Bedoya, Gabriel; Cornejo, William; Herrera, Luis D.; Romero, Roxana; Fournier, Eduardo; Reus, Victor I.; Lowe, Thomas L.; Farooqi, I. Sadaf; Mathews, Carol A.; McGrath, Lauren M.; Yu, Dongmei; Cook, Ed; Wang, Kai; Scharf, Jeremiah M.; Pauls, David L.; Freimer, Nelson B.; Plagnol, Vincent; Ruiz-Linares, Andrés
2013-01-01
Tourette syndrome (TS) is a neuropsychiatric disorder with a strong genetic component. However, the genetic architecture of TS remains uncertain. Copy number variation (CNV) has been shown to contribute to the genetic make-up of several neurodevelopmental conditions, including schizophrenia and autism. Here we describe CNV calls using SNP chip genotype data from an initial sample of 210 TS cases and 285 controls ascertained in two Latin American populations. After extensive quality control, we found that cases (N = 179) have a significant excess (P = 0.006) of large CNV (>500 kb) calls compared to controls (N = 234). Amongst 24 large CNVs seen only in the cases, we observed four duplications of the COL8A1 gene region. We also found two cases with ∼400kb deletions involving NRXN1, a gene previously implicated in neurodevelopmental disorders, including TS. Follow-up using multiplex ligation-dependent probe amplification (and including 53 more TS cases) validated the CNV calls and identified additional patients with rearrangements in COL8A1 and NRXN1, but none in controls. Examination of available parents indicates that two out of three NRXN1 deletions detected in the TS cases are de-novo mutations. Our results are consistent with the proposal that rare CNVs play a role in TS aetiology and suggest a possible role for rearrangements in the COL8A1 and NRXN1 gene regions. PMID:23533600
A laboratory breadboard system for dual-arm teleoperation
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Szakaly, Z.; Kim, W. S.
1990-01-01
The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is unspecific to any manipulator and can be used for the control of various robot arms with software modifications; and (2) the force feedback to the general purpose master arm is derived from force-torque sensor data originating from the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication and mathematical transformations. The computing system is modular, thus inherently extendable. The local control loops at both sites operate at 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at 200 Hz rate, each loop without interpolation. This provides high-fidelity control. This end-to-end system elevates teleoperation to a new level of capabilities via the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called Phantom Robot) is used for preview and predictive displays for planning and for real-time control under several seconds communication time delay conditions. High fidelity graphic simulation is obtained by using appropriate calibration techniques.
Formation Flying for Distributed InSAR
NASA Technical Reports Server (NTRS)
Scharf, Daniel P.; Murray, Emmanuell A.; Ploen, Scott R.; Gromov, Konstantin G.; Chen, Curtis W.
2006-01-01
We consider two spacecraft flying in formation to create interferometric synthetic aperture radar (InSAR). Several candidate orbits for such an InSAR formation have been previously determined based on radar performance and Keplerian orbital dynamics. However, without active control, disturbance-induced drift can degrade radar performance and (in the worst case) cause a collision. This study evaluates the feasibility of operating the InSAR spacecraft as a formation, that is, with inter-spacecraft sensing and control. We describe the candidate InSAR orbits, design formation guidance and control architectures and algorithms, and report the delta-v and control acceleration requirements for the candidate orbits for several tracking performance levels. As part of determining formation requirements, a formation guidance algorithm called Command Virtual Structure is introduced that can reduce the delta-v requirements compared to standard Leader/Follower formation approaches.
Modular telerobot control system for accident response
NASA Astrophysics Data System (ADS)
Anderson, Richard J. M.; Shirey, David L.
1999-08-01
The Accident Response Mobile Manipulator System (ARMMS) is a teleoperated emergency response vehicle that deploys two hydraulic manipulators, five cameras, and an array of sensors to the scene of an incident. It is operated from a remote base station that can be situated up to four kilometers away from the site. Recently, a modular telerobot control architecture called SMART was applied to ARMMS to improve the precision, safety, and operability of the manipulators on board. Using SMART, a prototype manipulator control system was developed in a couple of days, and an integrated working system was demonstrated within a couple of months. New capabilities such as camera-frame teleoperation, autonomous tool changeout and dual manipulator control have been incorporated. The final system incorporates twenty-two separate modules and implements seven different behavior modes. This paper describes the integration of SMART into the ARMMS system.
42: An Open-Source Simulation Tool for Study and Design of Spacecraft Attitude Control Systems
NASA Technical Reports Server (NTRS)
Stoneking, Eric
2018-01-01
Simulation is an important tool in the analysis and design of spacecraft attitude control systems. The speaker will discuss the simulation tool, called simply 42, that he has developed over the years to support his own work as an engineer in the Attitude Control Systems Engineering Branch at NASA Goddard Space Flight Center. 42 was intended from the outset to be high-fidelity and powerful, but also fast and easy to use. 42 is publicly available as open source since 2014. The speaker will describe some of 42's models and features, and discuss its applicability to studies ranging from early concept studies through the design cycle, integration, and operations. He will outline 42's architecture and share some thoughts on simulation development as a long-term project.
Development of Nonlinear Flight Mechanical Model of High Aspect Ratio Light Utility Aircraft
NASA Astrophysics Data System (ADS)
Bahri, S.; Sasongko, R. A.
2018-04-01
The implementation of a Flight Control Law (FCL) for an Aircraft Electronic Flight Control System (EFCS) aims to reduce pilot workload, while also enhancing control performance during missions that require long-endurance flight and high-accuracy maneuvers. In the development of the FCL, a quantitative representation of the aircraft dynamics is needed to describe the aircraft's dynamic characteristics and to serve as the basis of the FCL design. Hence, a 6-degree-of-freedom nonlinear model of a light utility aircraft's dynamics, also called the nonlinear Flight Mechanical Model (FMM), is constructed. This paper shows the construction of the FMM from its mathematical formulation, the architecture design of the FMM, the trimming process, and simulations. The verification of the FMM is done by analysis of aircraft behaviour in selected trimmed conditions.
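The FMM itself is 6-DOF; as a much-reduced sketch of the same pattern (nonlinear equations of motion integrated from a roughly trimmed state), a longitudinal point-mass model is shown below with hypothetical aircraft data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Point-mass longitudinal dynamics: a deliberately reduced stand-in for the 6-DOF FMM,
# showing the same pattern of nonlinear equations of motion integrated from a near-trim
# condition. All numbers are hypothetical.
m, S, rho, g = 750.0, 12.0, 1.225, 9.81
CD0, k, CL, T = 0.03, 0.05, 1.10, 600.0

def eom(t, y):
    V, gamma, h = y                                # airspeed, flight-path angle, altitude
    q = 0.5 * rho * V**2 * S                       # dynamic pressure times area
    L, D = q * CL, q * (CD0 + k * CL**2)           # lift and drag
    return [(T - D) / m - g * np.sin(gamma),
            (L - m * g * np.cos(gamma)) / (m * V),
            V * np.sin(gamma)]

sol = solve_ivp(eom, (0.0, 60.0), [30.0, 0.0, 500.0], max_step=0.1)
print(sol.y[:, -1])   # airspeed, flight-path angle, altitude after 60 s
```

The full FMM adds the rotational states, aerodynamic coefficient tables, and actuator models, but the trim-then-simulate workflow is the same.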
NASA Astrophysics Data System (ADS)
Nicol, Patrick; Fleury, Joel; Le Naour, Claire; Bernard, Frédéric
2017-11-01
IASI (Infrared Atmospheric Sounding Interferometer) is an infrared atmospheric sounder. It will provide the meteorological and scientific communities with atmospheric spectra. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The presentation will describe the spectrometer detection chain architecture, composed of three different detectors cooled in a passive cryo-cooler (the so-called CBS: Cold Box Subsystem) and the associated analog electronics up to digital conversion. It will mainly focus on design choices with regard to environment constraints, implemented technologies, and associated performances. CNES is leading the IASI program in collaboration with EUMETSAT. The instrument prime is ALCATEL SPACE, responsible, notably, for the detection chain architecture. SAGEM SA provides the detector package (the so-called CAU: Cold Acquisition Unit).
NASA Astrophysics Data System (ADS)
Nicol, Patrick; Fleury, Joel; Bernard, Frédéric
2004-06-01
IASI (Infrared Atmospheric Sounding Interferometer) is an infrared atmospheric sounder. It will provide the meteorological and scientific communities with atmospheric spectra. The instrument is composed of a Fourier transform spectrometer and an associated infrared imager. The presentation will describe the spectrometer detection chain architecture, composed of three different detectors cooled in a passive cryo-cooler (the so-called CBS: Cold Box Subsystem) and the associated analog electronics up to digital conversion. It will mainly focus on design choices with regard to environment constraints, implemented technologies, and associated performances. CNES is leading the IASI program in collaboration with EUMETSAT. The instrument prime is ALCATEL SPACE, responsible, notably, for the detection chain architecture. SAGEM SA provides the detector package (the so-called CAU: Cold Acquisition Unit).
Cathedrals, Casinos, Colleges and Classrooms: Questions for the Architects of Digital Campuses
ERIC Educational Resources Information Center
McCluskey, Frank; Winter, Melanie
2013-01-01
The bricks and mortar classroom has a long and storied history. The digital classroom is so new and different it may be wrong to even call it a "classroom". The authors argue that architecture influences behavior. So in constructing our new digital classrooms we must pay attention to the architecture and what job we want that…
ERIC Educational Resources Information Center
Jurow, A. Susan
2005-01-01
Project-based curricula have the potential to engage students' interests. But how do students become interested in the goals of a project? This article documents how a group of 8th-grade students participated in an architectural design project called the Antarctica Project. The project is based on the imaginary premise that students need to design…
Evaluating a Service-Oriented Architecture
2007-09-01
See the description on page 13. SaaS Software as a service (SaaS) is a software delivery model where customers don't own a copy of the application... serviceability REST Representational State Transfer RIA rich internet application RPC remote procedure call SaaS software as a service SAML Security...Evaluating a Service-Oriented Architecture Phil Bianco, Software Engineering Institute; Rick Kotermanski, Summa Technologies; Paulo Merson
Integrating planning, execution, and learning
NASA Technical Reports Server (NTRS)
Kuokka, Daniel R.
1989-01-01
To achieve the goal of building an autonomous agent, the usually disjoint capabilities of planning, execution, and learning must be used together. An architecture, called MAX, within which cognitive capabilities can be purposefully and intelligently integrated is described. The architecture supports the codification of capabilities as explicit knowledge that can be reasoned about. In addition, specific problem solving, learning, and integration knowledge is developed.
Mangan, Hazel; Gailín, Michael Ó; McStay, Brian
2017-12-01
Nucleoli are the sites of ribosome biogenesis and the largest membraneless subnuclear structures. They are intimately linked with growth and proliferation control and function as sensors of cellular stress. Nucleoli form around arrays of ribosomal gene (rDNA) repeats, also called nucleolar organizer regions (NORs). In humans, NORs are located on the short arms of all five human acrocentric chromosomes. Multiple NORs contribute to the formation of large heterochromatin-surrounded nucleoli observed in most human cells. Here we will review recent findings about their genomic architecture. The dynamic nature of nucleoli began to be appreciated with the advent of photodynamic experiments using fluorescent protein fusions. We review more recent data on nucleoli in Xenopus germinal vesicles (GVs) which have revealed a liquid droplet-like behavior that facilitates nucleolar fusion. Further analysis in both Xenopus GVs and Drosophila embryos indicates that the internal organization of nucleoli is generated by a combination of liquid-liquid phase separation and active processes involving rDNA. We will attempt to integrate these recent findings with the genomic architecture of human NORs to advance our understanding of how nucleoli form and respond to stress in human cells. © 2017 Federation of European Biochemical Societies.
UPM: unified policy-based network management
NASA Astrophysics Data System (ADS)
Law, Eddie; Saxena, Achint
2001-07-01
Besides providing network management for the Internet, it has become essential to offer different levels of Quality of Service (QoS) to users. Policy-based management provides control over network routers to achieve this goal. The Internet Engineering Task Force (IETF) has proposed a two-tier architecture whose implementation is based on the Common Open Policy Service (COPS) protocol and the Lightweight Directory Access Protocol (LDAP). However, there are several limitations to this design, such as scalability and cross-vendor hardware compatibility. To address these issues, we present a functionally enhanced multi-tier policy management architecture design in this paper. Several extensions are introduced, thereby adding flexibility and scalability. In particular, an intermediate entity between the policy server and the policy rule database, called the Policy Enforcement Agent (PEA), is introduced. By keeping internal data in a common format, using a standard protocol, and interpreting and translating request and decision messages from multi-vendor hardware, this agent enables a dynamic Unified Information Model throughout the architecture. We have tailored this information system to store policy rules in the directory server and to allow execution of policy rules with dynamic addition of new equipment at run-time.
Park, Chang-Seop
2014-01-01
Since two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for IMDs is proposed, based on a 3-tier security model in which the programmer interacts with a Hospital Authentication Server to get permission to access IMDs. The proposed security architecture greatly simplifies the key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works in terms of security and performance.
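The paper's protocol details are not given in the abstract; the following sketch only illustrates the 3-tier idea of deriving a short-lived programmer session key from a long-term IMD key held by the hospital server. Names, message formats, and lifetimes are hypothetical.

```python
import hashlib
import hmac
import os
import time

# Sketch of the 3-tier idea only (not the paper's protocol): the Hospital
# Authentication Server (HAS) and the IMD share a long-term key; the HAS derives
# a short-lived session key bound to the programmer's identity and an expiry time,
# and the IMD recomputes it, so the long-term key never reaches the programmer.
LONG_TERM_KEY = os.urandom(32)          # provisioned in the IMD at implant time

def derive_session_key(programmer_id: str, expires_at: int) -> bytes:
    msg = f"{programmer_id}|{expires_at}".encode()
    return hmac.new(LONG_TERM_KEY, msg, hashlib.sha256).digest()

# HAS side: authorize programmer "prog-42" for the next 15 minutes.
expires = int(time.time()) + 900
session_key = derive_session_key("prog-42", expires)

# IMD side: recompute and compare before granting access.
granted = hmac.compare_digest(session_key, derive_session_key("prog-42", expires))
print(granted, time.time() < expires)
```

Binding the key to an expiry time is one simple way to keep a lost or compromised programmer from retaining indefinite access.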
2014-01-01
Since two recent security attacks against implantable medical devices (IMDs) were reported, the privacy and security risks of IMDs have been widely recognized in the medical device market and research community, since the malfunctioning of IMDs might endanger the patient's life. During the last few years, a great deal of research has been carried out to address the security-related issues of IMDs, including privacy, safety, and accessibility issues. A physician accesses an IMD through an external device called a programmer, for diagnosis and treatment. Hence, cryptographic key management between the IMD and the programmer is important to enforce strict access control. In this paper, a new security architecture for IMDs is proposed, based on a 3-tier security model in which the programmer interacts with a Hospital Authentication Server to get permission to access IMDs. The proposed security architecture greatly simplifies the key management between IMDs and programmers. Also proposed is a security mechanism to guarantee the authenticity of the patient data collected from the IMD and the nonrepudiation of the physician's treatment based on it. The proposed architecture and mechanism are analyzed and compared with several previous works in terms of security and performance. PMID:25276797
Effect of promoter architecture on the cell-to-cell variability in gene expression.
Sanchez, Alvaro; Garcia, Hernan G; Jones, Daniel; Phillips, Rob; Kondev, Jané
2011-03-01
According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well.
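The kind of kinetic model the authors compare against single-cell data can be sketched with a small Gillespie simulation of a two-state (one-operator) promoter. The rates below are hypothetical, chosen only to show a Fano factor above one, and the code is not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
k_on, k_off, k_m, gamma = 0.1, 0.9, 5.0, 1.0   # hypothetical rates (1/min)

def gillespie_mrna(t_end=200.0):
    """Simulate one cell; return the mRNA copy number at t_end."""
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = np.array([k_on * (1 - on), k_off * on, k_m * on, gamma * m])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=rates / total)
        if r == 0:
            on = 1
        elif r == 1:
            on = 0
        elif r == 2:
            m += 1
        else:
            m -= 1
    return m

samples = np.array([gillespie_mrna() for _ in range(300)])
fano = samples.var() / samples.mean()
print(samples.mean(), fano)   # Fano > 1 signals bursty, promoter-state-driven variability
```

Changing the operator configuration in such a model (e.g., adding a second operator with cooperative binding) changes the predicted moments, which is the kind of comparison with single-cell data the article advocates.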
Effect of Promoter Architecture on the Cell-to-Cell Variability in Gene Expression
Sanchez, Alvaro; Garcia, Hernan G.; Jones, Daniel; Phillips, Rob; Kondev, Jané
2011-01-01
According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well. PMID:21390269
Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.
2009-01-01
Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074
Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.
Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R
2009-04-03
Lymphomas are the fifth most common cancer in the United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
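The parametric objects themselves are written in GDL; purely as a schematic of the procedural idea (library objects combined by simple rules and proportions, with user-adjustable parameters), a hypothetical Python sketch:

```python
from dataclasses import dataclass

@dataclass
class Opening:
    kind: str      # "door" or "window"
    x: float       # offset from the left end of the facade (m)
    width: float
    height: float

def classical_facade(total_width, bay_width=3.0, window_ratio=1.6, door_bay=0):
    """Procedurally place one opening per bay; proportions follow a simple
    rule (window height = window_ratio * width), adjustable by the user."""
    n_bays = max(1, int(total_width // bay_width))
    openings = []
    for i in range(n_bays):
        cx = (i + 0.5) * (total_width / n_bays)   # bay centre line
        if i == door_bay:
            openings.append(Opening("door", cx - 0.6, 1.2, 2.4))
        else:
            w = 0.9
            openings.append(Opening("window", cx - w / 2, w, window_ratio * w))
    return openings

for o in classical_facade(total_width=12.0, door_bay=1):
    print(o)
```

Changing the user parameters (bay width, proportions, door position) regenerates a different configuration of the same façade, which is the essence of the parametric approach described above.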
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a molecular dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Gravity response mechanisms of lateral organs and the control of plant architecture in Arabidopsis
NASA Astrophysics Data System (ADS)
Mullen, J.; Hangarter, R.
Most research on gravity responses in plants has focused on primary roots and shoots, which typically grow in a vertical orientation. However, the patterns of lateral organ formation and their growth orientation, which typically are not vertical, govern plant architecture. For example, in Arabidopsis, when lateral roots emerge from the primary root, they grow at a nearly horizontal orientation. As they elongate, the roots slowly curve until they eventually reach a vertical orientation. The regulation of this lateral root orientation is an important component affecting the overall root system architecture. We have found that this change in orientation is not simply due to the onset of gravitropic competence, as non-vertical lateral roots are capable of both positive and negative gravitropism. Thus, the horizontal growth of the new lateral roots is determined by what is called the gravitropic set-point angle (GSA). In Arabidopsis shoots, rosette leaves and inflorescence branches also display GSA-dependent developmental changes in their orientation. The developmental control of the GSA of lateral organs in Arabidopsis provides us with a useful system for investigating the components involved in regulating the directionality of tropistic responses. We have identified several Arabidopsis mutants that have either altered lateral root orientations, altered orientation of lateral organs in the shoot, or both, but maintain normal primary organ orientation. The mgsa (modified gravitropic set-point angle) mutants with both altered lateral root and shoot orientation show that there are common components in the regulation of growth orientation in the different organs. Rosette leaves and lateral roots also have in common a regulation of positioning by red light. Further molecular and physiological analyses of the GSA mutants will provide insight into the basis of GSA regulation and, thus, a better understanding of how gravity controls plant architecture. [This work was supported by the National Aeronautics and Space Administration through grant no. NCC 2-1200.]
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
NASA Technical Reports Server (NTRS)
1983-01-01
Mission scenarios and space station architectures are discussed. Electrical power subsystem (EPS), environmental control and life support subsystem (ECLSS), and reaction control subsystem (RCS) architectures are addressed. Thermal control subsystem (TCS), guidance/navigation and control (GN and C), information management system (IMS), communications and tracking (C and T), and propellant transfer and storage system architectures are discussed.
NASA Technical Reports Server (NTRS)
Chai, Patrick R.; Merrill, Raymond G.; Qu, Min
2016-01-01
NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and solar-electric propulsion systems are used to deliver crew and cargo to exploration destinations. By combining chemical and solar-electric propulsion into a single spacecraft and applying each where it is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel efficient than an all-chemical propulsion architecture without significant increases in trip time. The architecture calls for the aggregation of exploration assets in cislunar space prior to departure for Mars and utilizes high-energy, lunar-distant high Earth orbits for the final staging prior to departure. This paper presents the detailed analysis of various cislunar operations for the EMC Hybrid architecture as well as the results of the higher fidelity end-to-end trajectory analysis to understand the implications of the design choices on the Mars exploration campaign.
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
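CARE's repository holds the actual redundancy-scheme equations; as a toy stand-in only, the sketch below evaluates the reliability of a k-of-n redundant configuration from a single-module failure rate, the kind of ground instance such a model would be supplied with (the failure rate and mission time are invented).

```python
from math import comb, exp

def module_reliability(failure_rate, t):
    """Reliability of a single non-redundant module with a constant failure rate."""
    return exp(-failure_rate * t)

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent modules
    (each with reliability r) are still working."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Example: triple modular redundancy (2-of-3 voting) over a 1000-hour mission,
# assuming a hypothetical failure rate of 1e-4 per hour per module.
r = module_reliability(1e-4, 1000.0)
print(f"simplex:  {r:.4f}")
print(f"TMR 2/3:  {k_of_n_reliability(2, 3, r):.4f}")
```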
Resource allocation and supervisory control architecture for intelligent behavior generation
NASA Astrophysics Data System (ADS)
Shah, Hitesh K.; Bahl, Vikas; Moore, Kevin L.; Flann, Nicholas S.; Martin, Jason
2003-09-01
In earlier research the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of our research, we presented the use of a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. With the growth of the number of available resources on the robot, the variety of the generated behaviors and the need for parallel execution of multiple behaviors to achieve reaction also grew. As a continuation of our past efforts, in this paper we discuss the parallel execution of behaviors and the management of utilized resources. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. The controlling agents (called behavior-generating agents) generate behaviors to be executed via these services. The agents are prioritized and then, based on their priority and the availability of requested services, the Control Supervisor decides on a winner for the grant of access to services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
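Purely as a schematic of the arbitration idea (the service names, priorities, and interface below are invented, not the T4 implementation), the following sketch grants a requested set of services to the highest-priority agent whose services are all free:

```python
class ControlSupervisor:
    """Toy arbiter: grants a set of services to the highest-priority agent
    whose requested services are all currently free."""

    def __init__(self, services):
        self.free = set(services)
        self.held = {}  # agent -> set of granted services

    def request(self, agent, priority, wanted, queue):
        queue.append((priority, agent, set(wanted)))

    def arbitrate(self, queue):
        granted = []
        # Higher numeric priority wins; ties are broken by request order.
        for priority, agent, wanted in sorted(queue, key=lambda q: -q[0]):
            if wanted <= self.free:
                self.free -= wanted
                self.held[agent] = wanted
                granted.append(agent)
        return granted

    def release(self, agent):
        self.free |= self.held.pop(agent, set())

sup = ControlSupervisor(["drive", "pan_tilt", "lidar"])
pending = []
sup.request("obstacle_avoid", priority=10, wanted=["drive"], queue=pending)
sup.request("patrol", priority=1, wanted=["drive", "pan_tilt"], queue=pending)
sup.request("scan_area", priority=5, wanted=["pan_tilt", "lidar"], queue=pending)
print(sup.arbitrate(pending))  # ['obstacle_avoid', 'scan_area']
```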
Executing CLIPS expert systems in a distributed environment
NASA Technical Reports Server (NTRS)
Taylor, James; Myers, Leonard
1990-01-01
This paper describes a framework for running cooperating agents in a distributed environment to support the Intelligent Computer Aided Design System (ICADS), a project in progress at the CAD Research Unit of the Design Institute at the California Polytechnic State University. Currently, the system aids an architectural designer in creating a floor plan that satisfies some general architectural constraints and project-specific requirements. At the core of ICADS is the Blackboard Control System. Connected to the blackboard are any number of domain experts called Intelligent Design Tools (IDT). The Blackboard Control System monitors the evolving design as it is being drawn and helps resolve conflicts from the domain experts. The user serves as a partner in this system by manipulating the floor plan in the CAD system and validating recommendations made by the domain experts. The primary components of the Blackboard Control System are two expert systems executed by a modified CLIPS shell. The first is the Message Handler. The second is the Conflict Resolver. The Conflict Resolver synthesizes the suggestions made by the domain experts, which can be either CLIPS expert systems or compiled C programs. In DEMO1, the current ICADS prototype, the CLIPS domain expert systems are Acoustics, Lighting, Structural, and Thermal; the compiled C domain experts are the CAD system and the User Interface.
NASA Astrophysics Data System (ADS)
Mota, Alessandro D.; Cestari, André M.; de Oliveira, André O.; Oliveira, Anselmo G.; Terruggi, Cristina H. B.; Rossi, Giuliano; Castro, Jarbas C.; Ligabô, João. P. B.; Ortega, Tiago A.; Rosa, Tiago
2015-09-01
This work presents an innovative cross-linking procedure for the treatment of keratoconus, a corneal disease. It includes the development of a portable device for controlled ultraviolet emission based on an LED source and a new formulation of a photosensitive drug called riboflavin. This new formulation improves drug administration through its transepithelial property. The UV reaction with riboflavin in corneal tissue modifies the corneal collagen fibers, making them more rigid and dense, and consequently restraining the advance of the disease. We present the control procedures that keep the UV output power stable up to 45 mW/cm2, the optical architecture that produces a homogeneous UV spot, and the new formulation of riboflavin.
ERIC Educational Resources Information Center
Benedetto, S.; Bernelli Zazzera, F.; Bertola, P.; Cantamessa, M.; Ceri, S.; Ranci, C.; Spaziante, A.; Zanino, R.
2010-01-01
Politecnico di Milano and Politecnico di Torino, the top technical universities in Italy, united their efforts in 2004 by launching a unique excellence programme called Alta Scuola Politecnica (ASP). The ASP programme is devoted to 150 students, selected each year from among the top 5-10% of those enrolled in the Engineering, Architecture and…
Joint Technical Architecture for Robotic Systems (JTARS)-Final Report
NASA Technical Reports Server (NTRS)
Bradley, Arthur T.; Holloway, Sidney E., III
2006-01-01
This document represents the final report for the Joint Technical Architecture for Robotic Systems (JTARS) project, funded by the Office of Exploration as part of the Intramural Call for Proposals of 2005. The project was prematurely terminated, without review, as part of an agency-wide realignment towards the development of a Crew Exploration Vehicle (CEV) and meeting the near-term goals of lunar exploration.
NASA Technical Reports Server (NTRS)
Lee, Alan G.; Robinson, John E.; Lai, Chok Fung
2017-01-01
This paper will describe the purpose, architecture, and implementation of a gate-to-gate, high-fidelity air traffic simulation environment called the Shadow Mode Assessment using Realistic Technologies for the National Airspace System (SMART-NAS) Test Bed. The overarching purpose of the SMART-NAS Test Bed (SNTB) is to conduct high-fidelity, real-time, human-in-the-loop and automation-in-the-loop simulations of current and proposed future air traffic concepts for the Next Generation Air Transportation System of the United States, called NextGen. SNTB is intended to enable simulations that are currently impractical or impossible for three major areas of NextGen research and development: Concepts across multiple operational domains such as the gate-to-gate trajectory-based operations concept; Concepts related to revolutionary operations such as the seamless and widespread integration of large and small Unmanned Aerial System (UAS) vehicles throughout U.S. airspace; Real-time system-wide safety assurance technologies to allow safe, increasingly autonomous aviation operations. SNTB is primarily accessed through a web browser. A set of secure support services is provided to simplify all aspects of real-time, human-in-the-loop and automation-in-the-loop simulations from design (i.e., prior to execution) through analysis (i.e., after execution). These services include simulation architecture and asset configuration; scenario generation; command, control and monitoring; and analysis support.
Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison
2013-01-01
Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements: first, that the robot can allow humans to enter its workspace during operation and, second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers called Multi-Loop control, which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.
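As a rough single-joint illustration of an embedded impedance law of the general kind described (the gains, inertia, and plant model below are invented, not R2's Multi-Loop parameters):

```python
def impedance_torque(q, qd, q_des, qd_des, stiffness, damping):
    """Joint-space impedance law: command torque that makes the joint behave
    like a spring-damper pulled toward the desired trajectory."""
    return stiffness * (q_des - q) + damping * (qd_des - qd)

# Simulate one joint (inertia I) tracking a step command under impedance control.
I, dt = 0.5, 0.001           # kg*m^2, s
q, qd = 0.0, 0.0
K, D = 40.0, 6.0             # hypothetical stiffness (Nm/rad), damping (Nms/rad)
for step in range(3000):
    tau = impedance_torque(q, qd, q_des=0.3, qd_des=0.0, stiffness=K, damping=D)
    qdd = tau / I
    qd += qdd * dt
    q += qd * dt
print(f"final angle = {q:.3f} rad (commanded 0.300)")
```

Because the commanded torque saturates at the spring-damper level, an unexpected contact (for example, a person in the workspace) produces a bounded force rather than a rigid position error correction, which is the safety property the abstract alludes to.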
Call for Papers: Photonics in Switching
NASA Astrophysics Data System (ADS)
Wosinska, Lena; Glick, Madeleine
2006-04-01
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
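A compact sketch of the Kanerva-style read/write mechanism behind sparse distributed memory, with toy dimensions and an auto-associative recall from a noisy cue (the sizes and activation radius are illustrative choices, not the project's values):

```python
import random

class SparseDistributedMemory:
    """Toy Kanerva-style SDM: binary addresses, hard locations with counters,
    and activation of all locations within a Hamming radius of the cue."""

    def __init__(self, n_locations=200, dim=64, radius=28, seed=0):
        rng = random.Random(seed)
        self.dim, self.radius = dim, radius
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, address):
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, address)) <= self.radius]

    def write(self, address, data):
        for i in self._active(address):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, address):
        sums = [0] * self.dim
        for i in self._active(address):
            for j in range(self.dim):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]

rng = random.Random(1)
sdm = SparseDistributedMemory()
pattern = [rng.randint(0, 1) for _ in range(64)]
sdm.write(pattern, pattern)                      # auto-associative store
noisy = pattern[:]                               # cue with a few flipped bits
for k in rng.sample(range(64), 5):
    noisy[k] ^= 1
recalled = sdm.read(noisy)
print("bits recovered:", sum(a == b for a, b in zip(recalled, pattern)), "/ 64")
```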
Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.
Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro
2018-02-13
Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.
Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors
Huarte, Maider; Romaña, Pedro
2018-01-01
Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model. PMID:29438338
A development framework for distributed artificial intelligence
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1989-01-01
The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.
Telerobotic surgery: applications on human patients and training with virtual reality.
Rovetta, A; Bejczy, A K; Sala, R
1997-01-01
This paper deals with research and applications in telerobotic surgery, devoted to human patients and to training with virtual reality. The research was developed in cooperation between the Telerobotics Laboratory, Department of Mechanics, Politecnico di Milano, Italy, and the Automation and Control Section, Jet Propulsion Laboratory, Pasadena, USA. The research led to a telesurgery robotic operation on a dummy on 7 July 1993, by means of satellite communications; to a prostatic biopsy on a human patient on 1 September 1995 over optical fibers; to results on time-delay effects; and to results on virtual reality applications for training in laparoscopy and surgery. The work involved time delay when the control input originated at Politecnico di Milano, Italy. The results were satisfactory, but also pointed out the need for specific new control transformations to ease the operator's or surgeon's visual/mental workload for hand-eye coordination. In the same research, dummy force commands from JPL to Milan were sent and echoed immediately back to JPL, measuring the round-trip time of the command signal. This, to some degree, simulates a contact force feedback situation. The results were very surprising: despite the fact that the ISDN calls are closed and "private" calls, the round-trip time exhibited great variations not only between calls but also within the same call. The results proved that telerobotics and telecontrol may be applied to surgery. Time latency variations are caused by features of the communication network and of the sending and receiving end computer software. The problem and its solution are also an architectural issue, and considerable improvements are possible. Virtual reality in this research is a strong support for training on virtual objects rather than on living beings.
Comprehensive multiplatform collaboration
NASA Astrophysics Data System (ADS)
Singh, Kundan; Wu, Xiaotao; Lennox, Jonathan; Schulzrinne, Henning G.
2003-12-01
We describe the architecture and implementation of our comprehensive multi-platform collaboration framework known as Columbia InterNet Extensible Multimedia Architecture (CINEMA). It provides a distributed architecture for collaboration using synchronous communications like multimedia conferencing, instant messaging, shared web-browsing, and asynchronous communications like discussion forums, shared files, voice and video mails. It allows seamless integration with various communication means like telephones, IP phones, web and electronic mail. In addition, it provides value-added services such as call handling based on location information and presence status. The paper discusses the media services needed for collaborative environment, the components provided by CINEMA and the interaction among those components.
Aquaponic Growbed Water Level Control Using Fog Architecture
NASA Astrophysics Data System (ADS)
Asmi Romli, Muhamad; Daud, Shuhaizar; Raof, Rafikha Aliana A.; Awang Ahmad, Zahari; Mahrom, Norfadilla
2018-05-01
Integrated Multi-Trophic Aquaculture (IMTA) is an advanced method of aquaculture that combines species with different nutritional needs so that they live together. The combination of aquatic life and crops is called aquaponics. Aquatic waste that is normally removed by biofilters in standard aquaculture practice is instead absorbed by the crops. Aquaponic systems share a few common components, and the growbed provides the best filtration function. In the growbed, a siphon acts as the mechanical structure that controls the water fill-and-flush process. Water reaches the growbed from the fish tank at different flow rates depending on the pump specification and the head height. A flow rate that is too low or too high can cause the siphon to malfunction. Pumps with variable speed do exist, but they are costly. The majority of aquaponic practitioners use a single-speed pump and try to match the pump speed to the siphon's operational requirements. To remove this matching requirement, some form of control needs to be introduced. As a preliminary step, this research demonstrates the fill-and-flush concept for multiple pumping speeds. The final aim of this paper is to show how water-level management can be performed to remove the speed dependency. The siphon is controlled remotely, since wireless data transmission is practical over a large operational area. A fog architecture is used to transmit the sensor data and control commands. This paper shows that water can be retained in the growbed for the suggested duration by stopping the flow once a predefined level is reached.
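A minimal sketch of the level-based fill-and-flush idea (the thresholds, rates, and simulated dynamics below are invented; in the paper's fog architecture the level readings and pump commands would travel over the network rather than being printed):

```python
class GrowbedLevelController:
    """Hysteresis (fill-and-flush) control of growbed water level: run the pump
    until the high mark is reached, then stop until the level drains back down
    to the low mark, independent of pump speed."""

    def __init__(self, low_cm=5.0, high_cm=20.0):
        self.low, self.high = low_cm, high_cm
        self.pump_on = True

    def update(self, level_cm):
        if self.pump_on and level_cm >= self.high:
            self.pump_on = False          # flush phase: let the siphon drain
        elif not self.pump_on and level_cm <= self.low:
            self.pump_on = True           # fill phase: refill from the fish tank
        return self.pump_on

# Simulated level dynamics: fills while pumping, drains otherwise.
ctrl = GrowbedLevelController()
level = 10.0
for t in range(60):
    level += 0.8 if ctrl.update(level) else -1.5
    level = max(level, 0.0)
    # In a fog deployment, this state would be sent to an edge/fog node here.
    if t % 10 == 0:
        print(f"t={t:2d}s level={level:5.1f} cm pump={'ON' if ctrl.pump_on else 'OFF'}")
```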
2017-01-31
mapping critical business workflows and then optimizing them with appropriate evolutionary technology choices is often called “Product Line Architecture... technologies, products, services, and processes, and the USG evaluates them against its 360° requirements objectives, and refines them as appropriate, clarity... in rapidly evolving technological domains (e.g. by applying best commercial practices for open standard product line architecture). An MP might be
ERIC Educational Resources Information Center
Lima, Ana Gabriela Godinho
2005-01-01
There are two peculiar moments in the history of the "struggle for national education", which, specifically in the city of Sao Paulo, capital of the State of Sao Paulo, one of the major and richest cities in Brazil, produced very interesting results in school architecture. The first moment happened in the period called the "First…
Antony, Joby; Mathuria, D S; Datta, T S; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW(®). This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
NASA Astrophysics Data System (ADS)
Antony, Joby; Mathuria, D. S.; Datta, T. S.; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antony, Joby; Mathuria, D. S.; Datta, T. S.
The power of Ethernet for control and automation technology is being largely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for being able to control through internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for crate-less model of control and data acquisition systems which have several advantages over traditional crate-based control architectures for slow applications. This architecture will completely eliminate the use of any extra PLC (Programmable Logic Controller) or similar control hardware in any automation network as the control functions are firmware coded inside intelligent meters itself. Here, we describe the indigenously built project of a cryogenic control system built for linear accelerator at Inter University Accelerator Centre, known as “CADS,” which stands for “Complete Automation of Distribution System.” CADS deals with complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter called device-server which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
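Purely as a hypothetical analogue of such a "device-server" (the endpoint names, JSON fields, PID gains, and plant model below are invented, not CADS's actual interface), the sketch exposes a readout and a setpoint over HTTP while a firmware-style PID loop runs in the background:

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measurement, dt):
        err = setpoint - measurement
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

state = {"setpoint": 80.0, "temperature": 300.0, "heater": 0.0}
lock = threading.Lock()

def control_loop():
    """'Firmware' loop: PID drives a crude stand-in for the cryogenic process."""
    pid = PID(kp=0.8, ki=0.05, kd=0.0)
    while True:
        with lock:
            u = max(0.0, pid.step(state["setpoint"], state["temperature"], dt=1.0))
            state["temperature"] += 0.1 * (77.0 - state["temperature"]) + 0.5 * u
            state["heater"] = u
        time.sleep(1.0)

class DeviceServer(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        query = parse_qs(url.query)
        with lock:
            if url.path == "/setpoint" and "value" in query:
                state["setpoint"] = float(query["value"][0])   # remote command
            body = json.dumps(state).encode()                  # remote readout
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=control_loop, daemon=True).start()
    HTTPServer(("0.0.0.0", 8080), DeviceServer).serve_forever()
```

A remote client would then query, for example, http://device:8080/setpoint?value=82 to adjust the loop, which is the flavour of HTTP remote procedure call the abstract describes.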
NASA Technical Reports Server (NTRS)
Farah, Jeffrey J.
1992-01-01
Developing a robust, task level, error recovery and on-line planning architecture is an open research area. There is previously published work on both error recovery and on-line planning; however, none incorporates error recovery and on-line planning into one integrated platform. The integration of these two functionalities requires an architecture that possesses the following characteristics. The architecture must provide for the inclusion of new information without the destruction of existing information. The architecture must provide for the relating of pieces of information, old and new, to one another in a non-trivial rather than trivial manner (e.g., object one is related to object two under the following constraints, versus, yes, they are related; no, they are not related). Finally, the architecture must be not only a stand alone architecture, but also one that can be easily integrated as a supplement to some existing architecture. This thesis proposal addresses architectural development. Its intent is to integrate error recovery and on-line planning onto a single, integrated, multi-processor platform. This intelligent x-autonomous platform, called the Planning Coordinator, will be used initially to supplement existing x-autonomous systems and eventually replace them.
Enterprise Architecture Tradespace Analysis
2014-02-21
EXECUTIVE SUMMARY The Department of Defense (DoD)’s Science & Technology (S&T) priority for Engineered Resilient Systems (ERS) calls for adaptable designs with diverse systems models that can easily be... Department of Defense [Holland, 2012]. Some explicit goals are: • Establish baseline resiliency of current capabilities • More complete and robust
Controlling Material Reactivity Using Architecture
Sullivan, Kyle T.; Zhu, Cheng; Duoss, Eric B.; ...
2015-12-16
3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. Additionally, the architecture offers a route to control, at will, the energy release rate in reactive composite materials.
A High Performance COTS Based Computer Architecture
NASA Astrophysics Data System (ADS)
Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland
2014-08-01
Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
Field-programmable lab-on-a-chip based on microelectrode dot array architecture.
Wang, Gary; Teng, Daniel; Lai, Yi-Tse; Lu, Yi-Wen; Ho, Yingchieh; Lee, Chen-Yi
2014-09-01
The fundamentals of electrowetting-on-dielectric (EWOD) digital microfluidics are very strong: advantageous capability in the manipulation of fluids, small test volumes, precise dynamic control and detection, and microscale systems. These advantages are very important for future biochip developments, but the development of EWOD microfluidics has been hindered by the absence of: integrated detector technology, standard commercial components, on-chip sample preparation, standard manufacturing technology and end-to-end system integration. A field-programmable lab-on-a-chip (FPLOC) system based on microelectrode dot array (MEDA) architecture is presented in this research. The MEDA architecture proposes a standard EWOD microfluidic component called 'microelectrode cell', which can be dynamically configured into microfluidic components to perform microfluidic operations of the biochip. A proof-of-concept prototype FPLOC, containing a 30 × 30 MEDA, was developed by using generic integrated circuits computer aided design tools, and it was manufactured with standard low-voltage complementary metal-oxide-semiconductor technology, which allows smooth on-chip integration of microfluidics and microelectronics. By integrating 900 droplet detection circuits into microelectrode cells, the FPLOC has achieved large-scale integration of microfluidics and microelectronics. Compared to the full-custom and bottom-up design methods, the FPLOC provides hierarchical top-down design approach, field-programmability and dynamic manipulations of droplets for advanced microfluidic operations.
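As a schematic of how microelectrode cells might be dynamically configured to transport a droplet (the routing rule and activation schedule below are invented, not the FPLOC's actual router):

```python
def route_droplet(grid, start, goal):
    """Sketch: configure a sequence of microelectrode cells to move a droplet
    from start to goal by activating neighbouring cells in turn (simple
    Manhattan path; a real MEDA router would also handle conflicts and use
    the on-cell droplet detection circuits for feedback)."""
    rows, cols = grid
    assert all(0 <= r < rows and 0 <= c < cols for r, c in (start, goal))
    (r, c), (gr, gc) = start, goal
    path = [(r, c)]
    while (r, c) != (gr, gc):
        if r != gr:
            r += 1 if gr > r else -1
        elif c != gc:
            c += 1 if gc > c else -1
        path.append((r, c))
    return path

# Activation schedule: at each step, energize the next cell and release the previous.
path = route_droplet(grid=(30, 30), start=(2, 3), goal=(6, 10))
for step, cell in enumerate(path[1:], 1):
    print(f"step {step}: activate {cell}, deactivate {path[step - 1]}")
```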
Koutelakis, George V.; Anastassopoulos, George K.; Lymberopoulos, Dimitrios K.
2012-01-01
Multiprotocol medical imaging communication through the Internet is more flexible than the tight DICOM transfers. This paper introduces a modular multiprotocol teleradiology architecture that integrates DICOM and common Internet services (based on web, FTP, and E-mail) into a unique operational domain. The extended WADO service (a web extension of DICOM) and the other proposed services allow access to all levels of the DICOM information hierarchy, as opposed to solely the Object level. A lightweight client site is considered adequate, because the server site of the architecture provides clients with service interfaces through the web as well as protected space for temporary storage, called User Domains, so that users can fulfill their applications' tasks. The proposed teleradiology architecture is pilot-implemented using mainly Java-based technologies and is evaluated by engineers in collaboration with doctors. The new architecture ensures flexibility in access, user mobility, and enhanced data security. PMID:22489237
Behavioral networks as a model for intelligent agents
NASA Technical Reports Server (NTRS)
Sliwa, Nancy E.
1990-01-01
On-going work at NASA Langley Research Center in the development and demonstration of a paradigm called behavioral networks as an architecture for intelligent agents is described. This work focuses on the need to identify a methodology for smoothly integrating the characteristics of low-level robotic behavior, including actuation and sensing, with intelligent activities such as planning, scheduling, and learning. This work assumes that all these needs can be met within a single methodology, and attempts to formalize this methodology in a connectionist architecture called behavioral networks. Behavioral networks are networks of task processes arranged in a task decomposition hierarchy. These processes are connected by both command/feedback data flow, and by the forward and reverse propagation of weights which measure the dynamic utility of actions and beliefs.
Toward autonomous driving: The CMU Navlab. II - Architecture and systems
NASA Technical Reports Server (NTRS)
Thorpe, Charles; Hebert, Martial; Kanade, Takeo; Shafer, Steven
1991-01-01
A description is given of EDDIE, the architecture for the Navlab mobile robot which provides a toolkit for building specific systems quickly and easily. Included in the discussion are the annotated maps used by EDDIE and the Navlab's road-following system, called the Autonomous Mail Vehicle, which was built using EDDIE and its annotated maps as a basis. The contributions of the Navlab project and the lessons learned from it are examined.
SimWorx: An ADA Distributed Simulation Application Framework Supporting HLA and DIS
1996-12-01
The authors emphasize that most real systems have elements of several architectural styles; these are called heterogeneous architectures. Typically... In order for frameworks to be used, understood, and maintained, Adair emphasizes they must be clearly documented. Framework Use Issues... (Class diagram of support category classes: Component-Type, Item-Type, BoundedBuffer, and a ProtectedContainer with Get, Add, and Put operations.)
Velsko, Stephan; Bates, Thomas
2016-01-01
Despite numerous calls for improvement, the US biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for "situational awareness" but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in "big data" analytics and learning inference engines-a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the US national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration by the US government.
Implementing neural nets with programmable logic
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.
1988-01-01
Networks of Boolean programmable logic modules are presented as one purely digital class of artificial neural nets. The approach contrasts with the continuous analog framework usually suggested. Programmable logic networks are capable of handling many neural-net applications. They avoid some of the limitations of threshold logic networks and present distinct opportunities. The network nodes are called dynamically programmable logic modules. They can be implemented with digitally controlled demultiplexers. Each node performs a Boolean function of its inputs which can be dynamically assigned. The overall network is therefore a combinational circuit and its outputs are Boolean global functions of the network's input variables. The approach offers definite advantages for VLSI implementation, namely, a regular architecture with limited connectivity, simplicity of the control machinery, natural modularity, and the support of a mature technology.
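A small sketch of the dynamically programmable logic module idea: each node is a truth table, the role a digitally controlled demultiplexer plays in hardware, and its function can be reassigned at run time (the wiring and functions below are invented):

```python
class LogicModule:
    """Dynamically programmable logic module: a k-input Boolean node whose
    function is a truth table that can be reassigned at run time."""

    def __init__(self, n_inputs, truth_table):
        assert len(truth_table) == 2 ** n_inputs
        self.table = list(truth_table)

    def reprogram(self, truth_table):
        self.table = list(truth_table)

    def __call__(self, *bits):
        index = 0
        for b in bits:                 # input bits select one truth-table entry
            index = (index << 1) | (1 if b else 0)
        return self.table[index]

# A tiny two-level combinational network: the outputs are Boolean functions of
# the global inputs, and the nodes can be re-targeted on the fly.
and_node = LogicModule(2, [0, 0, 0, 1])   # AND
or_node = LogicModule(2, [0, 1, 1, 1])    # OR
out_node = LogicModule(2, [0, 1, 1, 0])   # XOR of the two hidden nodes

def network(a, b, c):
    return out_node(and_node(a, b), or_node(b, c))

print([network(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
and_node.reprogram([1, 0, 0, 0])          # reassign the node to NOR at run time
print([network(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
```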
Nuclear pore complex tethers to the cytoskeleton.
Goldberg, Martin W
2017-08-01
The nuclear envelope is tethered to the cytoskeleton. The best known attachments of all elements of the cytoskeleton are via the so-called LINC complex. However, the nuclear pore complexes, which mediate the transport of soluble and membrane bound molecules, are also linked to the microtubule network, primarily via motor proteins (dynein and kinesins) which are linked, most importantly, to the cytoplasmic filament protein of the nuclear pore complex, Nup358, by the adaptor BicD2. The evidence for such linkages and possible roles in nuclear migration, cell cycle control, nuclear transport and cell architecture are discussed. Copyright © 2017. Published by Elsevier Ltd.
High performance network and channel-based storage
NASA Technical Reports Server (NTRS)
Katz, Randy H.
1991-01-01
In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.
Controlling Material Reactivity Using Architecture.
Sullivan, Kyle T; Zhu, Cheng; Duoss, Eric B; Gash, Alexander E; Kolesky, David B; Kuntz, Joshua D; Lewis, Jennifer A; Spadaccini, Christopher M
2016-03-09
3D-printing methods are used to generate reactive material architectures. Several geometric parameters are observed to influence the resultant flame propagation velocity, indicating that the architecture can be utilized to control reactivity. Two different architectures, channels and hurdles, are generated, and thin films of thermite are deposited onto the surface. The architecture offers an additional route to control, at will, the energy release rate in reactive composite materials. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SIENA Customer Problem Statement and Requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. Sauer; R. Clay; C. Adams
2000-08-01
This document describes the problem domain and functional requirements of the SIENA framework. The software requirements and system architecture of SIENA are specified in separate documents (called the SIENA Software Requirement Specification and SIENA Software Architecture, respectively). While currently this version of the document describes the problems and captures the requirements within the Analysis domain (concentrating on finite element models), it is our intention to subsequently expand this document to describe problems and capture requirements from the Design and Manufacturing domains. In addition, SIENA is designed to be extendible to support and integrate elements from the other domains (see the SIENA Software Architecture document).
Designing Security-Hardened Microkernels For Field Devices
NASA Astrophysics Data System (ADS)
Hieb, Jeffrey; Graham, James
Distributed control systems (DCSs) play an essential role in the operation of critical infrastructures. Perimeter field devices are important DCS components that measure physical process parameters and perform control actions. Modern field devices are vulnerable to cyber attacks due to their increased adoption of commodity technologies and the fact that control networks are no longer isolated. This paper describes an approach for creating security-hardened field devices using operating system microkernels that isolate vital field device operations from untrusted network-accessible applications. The approach, which is influenced by the MILS and Nizza architectures, is implemented in a prototype field device. Whereas previous microkernel-based implementations have been plagued by poor inter-process communication (IPC) performance, the prototype exhibits an average IPC overhead for protected device calls of 64.59 μs. The overall performance of field devices is influenced by several factors; nevertheless, the observed IPC overhead is low enough to encourage the continued development of the prototype.
Retrospective revaluation in sequential decision making: a tale of two systems.
Gershman, Samuel J; Markman, Arthur B; Otto, A Ross
2014-02-01
Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.
Neural networks for feedback feedforward nonlinear control systems.
Parisini, T; Zoppoli, R
1994-01-01
This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.
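As a rough illustration of the approach described above (not the authors' implementation, which derives recursive gradient equations akin to adjoint-system backpropagation), the sketch below constrains a feedback/feedforward control law to a small neural network and tunes its weights to minimize a nonquadratic tracking cost. The toy plant, the cost function, the network shape, and the finite-difference gradient are all invented simplifications.

    import numpy as np

    # Toy nonlinear plant: x' = -x^3 + u, integrated with Euler steps (invented for illustration).
    def rollout(weights, x0, target, steps=50, dt=0.05):
        w1 = weights[:10].reshape(5, 2)          # hidden layer (5 units; inputs are x and target)
        w2 = weights[10:15].reshape(1, 5)        # output layer -> control u
        x, cost = x0, 0.0
        for _ in range(steps):
            h = np.tanh(w1 @ np.array([x, target]))
            u = float(w2 @ h)
            x = x + dt * (-x**3 + u)
            cost += (x - target)**4 + 0.01 * u**2   # nonquadratic tracking cost
        return cost

    def train(x0=0.0, target=1.0, iters=300, lr=0.05, eps=1e-4):
        w = 0.1 * np.random.randn(15)
        for _ in range(iters):
            base = rollout(w, x0, target)
            grad = np.zeros_like(w)
            for i in range(len(w)):                 # finite-difference gradient (stand-in for backprop)
                wp = w.copy(); wp[i] += eps
                grad[i] = (rollout(wp, x0, target) - base) / eps
            w -= lr * grad
        return w, rollout(w, x0, target)

    if __name__ == "__main__":
        weights, final_cost = train()
        print("final tracking cost:", final_cost)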
The entropy reduction engine: Integrating planning, scheduling, and control
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.; Kedar, Smadar T.
1991-01-01
The Entropy Reduction Engine, an architecture for the integration of planning, scheduling, and control, is described. The architecture is motivated, presented, and analyzed in terms of its different components; namely, problem reduction, temporal projection, and situated control rule execution. Experience with this architecture has motivated the recent integration of learning. The learning methods are described along with their impact on architecture performance.
Architectures for mission control at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Davidson, Reger A.; Murphy, Susan C.
1992-01-01
JPL is currently converting to an innovative control center data system which is a distributed, open architecture for telemetry delivery and which is enabling advancement towards improved automation and operability, as well as new technology, in mission operations at JPL. The scope of mission control within mission operations is examined. The concepts of a mission control center and how operability can affect the design of a control center data system are discussed. Examples of JPL's mission control architecture, data system development, and prototype efforts at the JPL Operations Engineering Laboratory are provided. Strategies for the future of mission control architectures are outlined.
To Wield Excalibur: Seeking Unity of Effort in Joint Information Operations
2002-06-07
[Fragmentary excerpt] ... University of Michigan. In 1994, they co-authored a book called "Competing for the Future", in which they introduce the concepts of strategic intent ... strategic architecture, and then to help "build" that future. According to Hamel and Prahalad, a strategic architecture is a "high-level blueprint" for the ... Figure 6 ("Develop JIO Intellectual Leadership") shows that the JIOST would be responsible for setting the strategic intent of our IO efforts.
Algorithms and software for solving finite element equations on serial and parallel architectures
NASA Technical Reports Server (NTRS)
George, Alan
1989-01-01
Over the past 15 years numerous new techniques have been developed for solving systems of equations and eigenvalue problems arising in finite element computations. A package called SPARSPAK has been developed by the author and his co-workers which exploits these new methods. The broad objective of this research project is to incorporate some of this software in the Computational Structural Mechanics (CSM) testbed, and to extend the techniques for use on multiprocessor architectures.
Cognitive architecture of perceptual organization: from neurons to gnosons.
van der Helm, Peter A
2012-02-01
What, if anything, is cognitive architecture and how is it implemented in neural architecture? Focusing on perceptual organization, this question is addressed by way of a pluralist approach which, supported by metatheoretical considerations, combines complementary insights from representational, connectionist, and dynamic systems approaches to cognition. This pluralist approach starts from a representationally inspired model which implements the intertwined but functionally distinguishable subprocesses of feedforward feature encoding, horizontal feature binding, and recurrent feature selection. As sustained by a review of neuroscientific evidence, these are the subprocesses that are believed to take place in the visual hierarchy in the brain. Furthermore, the model employs a special form of processing, called transparallel processing, whose neural signature is proposed to be gamma-band synchronization in transient horizontal neural assemblies. In neuroscience, such assemblies are believed to mediate binding of similar features. Their formal counterparts in the model are special input-dependent distributed representations, called hyperstrings, which allow many similar features to be processed in a transparallel fashion, that is, simultaneously as if only one feature were concerned. This form of processing does justice to both the high combinatorial capacity and the high speed of the perceptual organization process. A naturally following proposal is that those temporarily synchronized neural assemblies are "gnosons", that is, constituents of flexible self-organizing cognitive architecture in between the relatively rigid level of neurons and the still elusive level of consciousness.
The UAS control segment architecture: an overview
NASA Astrophysics Data System (ADS)
Gregory, Douglas A.; Batavia, Parag; Coats, Mark; Allport, Chris; Jennings, Ann; Ernst, Richard
2013-05-01
The Under Secretary of Defense (Acquisition, Technology and Logistics) directed the Services in 2009 to jointly develop and demonstrate a common architecture for command and control of Department of Defense (DoD) Unmanned Aircraft Systems (UAS) Groups 2 through 5. The UAS Control Segment (UCS) Architecture is an architecture framework for specifying and designing the software-intensive capabilities of current and emerging UCS systems in the DoD inventory. The UCS Architecture is based on Service Oriented Architecture (SOA) principles that will be adopted by each of the Services as a common basis for acquiring, integrating, and extending the capabilities of the UAS Control Segment. The UAS Task Force established the UCS Working Group to develop and support the UCS Architecture. The Working Group currently has over three hundred members, and is open to qualified representatives from DoD-approved defense contractors, academia, and the Government. The UCS Architecture is currently at Release 2.2, with Release 3.0 planned for July 2013. This paper discusses the current and planned elements of the UCS Architecture, and related activities of the UCS Community of Interest.
Different micromanipulation applications based on common modular control architecture
NASA Astrophysics Data System (ADS)
Sipola, Risto; Vallius, Tero; Pudas, Marko; Röning, Juha
2010-01-01
This paper validates a previously introduced scalable modular control architecture and shows how it can be used to implement research equipment. The validation is conducted by presenting different kinds of micromanipulation applications that use the architecture. Conditions of the micro-world are very different from those of the macro-world. Adhesive forces are significant compared to gravitational forces when micro-scale objects are manipulated. Manipulation is mainly conducted by automatic control relying on haptic feedback provided by force sensors. The validated architecture is a hierarchical layered hybrid architecture, including a reactive layer and a planner layer. The implementation of the architecture is modular, and the architecture has a lot in common with open architectures. Further, the architecture is extensible, scalable, portable and it enables reuse of modules. These are the qualities that we validate in this paper. To demonstrate the claimed features, we present different applications that require special control in micrometer, millimeter and centimeter scales. These applications include a device that measures cell adhesion, a device that examines properties of thin films, a device that measures adhesion of micro fibers and a device that examines properties of submerged gel produced by bacteria. Finally, we analyze how the architecture is used in these applications.
Evolving concepts of lunar architecture: The potential of subselene development
NASA Technical Reports Server (NTRS)
Daga, Andrew W.; Daga, Meryl A.; Wendel, Wendel R.
1992-01-01
In view of the superior environmental and operational conditions that are thought to exist in lava tubes, popular visions of permanent settlements built upon the lunar surface may prove to be entirely romantic. The factors that will ultimately come together to determine the design of a lunar base are complex and interrelated, and they call for a radical architectural solution. Whether lunar surface-deployed superstructures can answer these issues is called into question. One particularly troublesome concern in any lunar base design is the need for vast amounts of space, and the ability of man-made structures to provide such volumes in a reliable pressurized habitat is doubtful. An examination of several key environmental design issues suggests that the alternative mode of subselene development may offer the best opportunity for an enduring and humane settlement.
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).
Mental models for cognitive control
NASA Astrophysics Data System (ADS)
Schilling, Malte; Cruse, Holk; Schmitz, Josef
2007-05-01
Even so called "simple" organisms as insects are able to fastly adapt to changing conditions of their environment. Their behaviour is affected by many external influences and only its variability and adaptivity permits their survival. An intensively studied example concerns hexapod walking. 1,2 Complex walking behaviours in stick insects have been analysed and the results were used to construct a reactive model that controls walking in a robot. This model is now extended by higher levels of control: as a bottom-up approach the low-level reactive behaviours are modulated and activated through a medium level. In addition, the system grows up to an upper level for cognitive control of the robot: Cognition - as the ability to plan ahead - and cognitive skills involve internal representations of the subject itself and its environment. These representations are used for mental simulations: In difficult situations, for which neither motor primitives, nor whole sequences of these exist, available behaviours are varied and applied in the internal model while the body itself is decoupled from the controlling modules. The result of the internal simulation is evaluated. Successful actions are learned and applied to the robot. This constitutes a level for planning. Its elements (movements, behaviours) are embodied in the lower levels, whereby their meaning arises directly from these levels. The motor primitives are situation models represented as neural networks. The focus of this work concerns the general architecture of the framework as well as the reactive basic layer of the bottom-up architecture and its connection to higher level functions and its application on an internal model.
Bearer channel control protocol for the dynamic VB5.2 interface in ATM access networks
NASA Astrophysics Data System (ADS)
Fragoulopoulos, Stratos K.; Mavrommatis, K. I.; Venieris, Iakovos S.
1996-12-01
In multi-vendor systems, a customer connected to an Access Network (AN) must be capable of selecting a specific Service Node (SN) according to the services the SN provides. The multiplicity of technologically varying ANs calls for the definition of a standard reference point between the AN and the SN, widely known as the VB interface. Two versions are currently offered. The VB5.1 is simpler to implement but is not as flexible as the VB5.2, which supports switched connections. The VB5.2 functionality is closely coupled to the Broadband Bearer Channel Connection Protocol (B-BCCP). The B-BCCP is used for conveying the necessary information for dynamic resource allocation, traffic policing and routing in the AN, as well as for information exchange concerning the status of the AN before a new call is established by the SN. By relying on such a protocol for the exchange of information instead of intercepting and interpreting signalling messages in the AN, the architecture of the AN is simplified because the processing-related functionality is not duplicated. In this paper a prominent B-BCCP candidate is defined, called the Service node Access network Interaction Protocol.
A new flight control and management system architecture and configuration
NASA Astrophysics Data System (ADS)
Kong, Fan-e.; Chen, Zongji
2006-11-01
The advanced fighter should possess capabilities such as supersonic cruising, stealth, agility, STOVL (Short Take-Off and Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough to improve only the aerodynamic and propulsion systems. More importantly, it is necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle and control augmentation, but also the given mission management. The F-22 and JSF possess outstanding flight control systems built on the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. A control system constructed on this architecture should be well integrated, inexpensive, fault-tolerant, safe, reliable and effective, and it will take charge of both flight control and mission management. To this end, the paper proceeds as follows: First, a three-level hierarchical control architecture modelled on human nervous control is proposed. At the top of the architecture, the decision level is in charge of decision-making. In the middle, the organization and coordination level schedules resources, monitors the state of the fighter, switches control modes, and so on. The bottom is the execution level, which handles the concrete drive and measurement functions. Then, according to their functions and resource needs, all the tasks involving flight control and mission management are assigned to individual levels. Finally, in order to validate the three-level architecture, a physical configuration is also shown. The configuration is distributed and applies recent advances from the information technology industry such as line-replaceable modules and cluster technology.
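A schematic sketch of such a three-level hierarchy (decision, organization and coordination, execution). The class names, control modes, and mission phases are hypothetical and only illustrate how directives might flow downward through the levels.

    class ExecutionLevel:
        """Bottom level: concrete drive commands and sensor measurements (stubbed)."""
        def measure(self):
            return {"altitude": 1000.0, "speed": 250.0}
        def actuate(self, command):
            print("actuating:", command)

    class OrganizationCoordinationLevel:
        """Middle level: schedules resources, monitors state, switches control modes."""
        def __init__(self, execution):
            self.execution = execution
            self.mode = "cruise"
        def step(self, directive):
            state = self.execution.measure()
            if directive == "land" and state["altitude"] < 1500:
                self.mode = "approach"
            self.execution.actuate({"mode": self.mode, "directive": directive})

    class DecisionLevel:
        """Top level: mission-management decisions issued to the lower levels."""
        def __init__(self, coordinator):
            self.coordinator = coordinator
        def run_mission(self, phases):
            for phase in phases:
                self.coordinator.step(phase)

    if __name__ == "__main__":
        DecisionLevel(OrganizationCoordinationLevel(ExecutionLevel())).run_mission(["cruise", "land"])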
CHEETAH: circuit-switched high-speed end-to-end transport architecture
NASA Astrophysics Data System (ADS)
Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun
2003-10-01
Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
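A back-of-the-envelope sketch of the kind of comparison the paper describes: expected file-transfer delay when attempting circuit setup first (with blocking probability p_block and fallback to TCP) versus using the TCP/IP path alone. The delay formulas and parameter values below are deliberate caricatures invented for illustration, not the paper's analysis.

    def tcp_delay(file_bytes, rate_bps, rtt_s, loss_prob):
        """Very rough TCP transfer-time estimate: serialization time plus one RTT, inflated by loss."""
        base = rtt_s + 8.0 * file_bytes / rate_bps
        return base / max(1e-6, 1.0 - loss_prob)

    def circuit_delay(file_bytes, circuit_rate_bps, setup_rtt_s):
        """Circuit path: one setup round trip plus serialization at the circuit rate."""
        return setup_rtt_s + 8.0 * file_bytes / circuit_rate_bps

    def expected_delay_with_cheetah(file_bytes, p_block, circuit_rate, tcp_rate, rtt, loss):
        """Attempt circuit setup first; with probability p_block fall back to the TCP/IP path."""
        d_circuit = circuit_delay(file_bytes, circuit_rate, setup_rtt_s=rtt)
        d_tcp = tcp_delay(file_bytes, tcp_rate, rtt, loss)
        return (1.0 - p_block) * d_circuit + p_block * (rtt + d_tcp)   # the failed setup costs one extra RTT

    if __name__ == "__main__":
        f = 1e9  # 1 GB file
        print("TCP only    :", tcp_delay(f, 100e6, 0.05, 0.001))
        print("with circuit:", expected_delay_with_cheetah(f, p_block=0.1,
              circuit_rate=1e9, tcp_rate=100e6, rtt=0.05, loss=0.001))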
An Open Specification for Space Project Mission Operations Control Architectures
NASA Technical Reports Server (NTRS)
Hooke, A.; Heuser, W. R.
1995-01-01
An 'open specification' for Space Project Mission Operations Control Architectures is under development in the Spacecraft Control Working Group of the American Institute of Aeronautics and Astronautics. This architecture identifies 5 basic elements incorporated in the design of similar operations systems: Data, System Management, Control Interface, Decision Support Engine, & Space Messaging Service.
A synchronized computational architecture for generalized bilateral control of robot arms
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
This paper describes a computational architecture for an interconnected high-speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt-driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.
GSM-Railway as part of the European Rail Traffic Management System
NASA Astrophysics Data System (ADS)
Bibac, Ionut
2007-05-01
GSM-R is a vital component of ERTMS, which is itself an essential element of European Community rail projects; investment in equipping rolling stock with ERTMS could reach 5 billion euros in the period 2007-2016. GSM-R is the result of over ten years of collaboration between the various European railway companies, the railway communication industry and the different standardization bodies. GSM-R provides a secure platform for voice and data communication between the operational staff of the railway companies, including drivers, dispatchers, shunting team members, train engineers, and station controllers. It delivers advanced features such as group calls, voice broadcast, location-based connections, and call pre-emption in case of an emergency, which significantly improve communication, collaboration, and security management across operational staff members. Against this background, the paper allows the audience to discover the GSM-R network architecture, the services and applications proposed by this technology, and the expected development and market situation resulting from market liberalization.
Intranet technology in hospital information systems.
Cimino, J J
1997-01-01
The clinical information system architecture at the Columbia-Presbyterian Medical Center in New York is being incorporated into an intranet using Internet and World Wide Web protocols. The result is an Enterprise-Wide Web which provides more flexibility for access to specific patient information and general medical knowledge. Critical aspects of the architecture include a central data repository and a vocabulary server. The new architecture provides ways of displaying patient information in summary, graphical, and multimedia forms. Using customized links called Infobuttons, we provide access to on-line information resources available on the World Wide Web. Our experience to date has raised a number of interesting issues about the use of this technology for health care systems.
Array architectures for iterative algorithms
NASA Technical Reports Server (NTRS)
Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas
1987-01-01
Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.
Writing Classroom as A&P Parking Lot.
ERIC Educational Resources Information Center
Sirc, Geoffrey
1993-01-01
Calls for a new urbanism in composition studies. Attempts to reconfigure the landscape of the writing classroom around the very notion of landscape, to reposition the architectonics of college writing more strictly according to architecture. (RS)
46 CFR 172.020 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., Naval Architecture Division, Office of Design and Engineering Standards, (CG-521), 2100 2nd St., SW...). For information on the availability of this material at NARA, call 202-741-6030, or go to: http://www...
Integrating Automation into a Multi-Mission Operations Center
NASA Technical Reports Server (NTRS)
Surka, Derek M.; Jones, Lori; Crouse, Patrick; Cary, Everett A, Jr.; Esposito, Timothy C.
2007-01-01
NASA Goddard Space Flight Center's Space Science Mission Operations (SSMO) Project is currently tackling the challenge of minimizing ground operations costs for multiple satellites that have surpassed their prime mission phase and are well into extended mission. These missions are being reengineered into a multi-mission operations center built around modern information technologies and a common ground system infrastructure. The effort began with the integration of four SMEX missions into a similar architecture that provides command and control capabilities and demonstrates fleet automation and control concepts as a pathfinder for additional mission integrations. The reengineered ground system, called the Multi-Mission Operations Center (MMOC), is now undergoing a transformation to support other SSMO missions, which include SOHO, Wind, and ACE. This paper presents the automation principles and lessons learned to date for integrating automation into an existing operations environment for multiple satellites.
A Mission Concept: Re-Entry Hopper-Aero-Space-Craft System on-Mars (REARM-Mars)
NASA Technical Reports Server (NTRS)
Davoodi, Faranak
2013-01-01
Future missions to Mars that would need a sophisticated lander, hopper, or rover could benefit from the REARM Architecture. The mission concept REARM Architecture is designed to provide unprecedented capabilities for future Mars exploration missions, including human exploration and possible sample-return missions, as a reusable lander, ascent/descent vehicle, refuelable hopper, multiple-location sample-return collector, laboratory, and a cargo system for assets and humans. These could all be possible by adding just a single customized Re-Entry-Hopper-Aero-Space-Craft System, called REARM-spacecraft, and a docking station in Martian orbit, called REARM-dock. REARM could dramatically decrease the time and the expense required to launch new exploratory missions on Mars by making them less dependent on Earth and by reusing the assets already designed, built, and sent to Mars. REARM would introduce a new class of Mars exploration missions, which could explore much larger expanses of Mars much faster and with much more sophisticated lab instruments. The proposed REARM architecture consists of the following subsystems: REARM-dock, REARM-spacecraft, sky-crane, secure-attached-compartment, sample-return container, agile rover, scalable orbital lab, and on-the-road robotic handymen.
Prototype architecture for a VLSI level zero processing system. [Space Station Freedom
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.
1989-01-01
The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability and increased reliability. Although extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream is then steered into the different processing channels according to the virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.
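A small sketch, under invented names, of how a telemetry stream might be steered into parallel processing channels by virtual channel ID, as described above; the channel-assignment table and frame format are hypothetical.

    from collections import defaultdict

    # Hypothetical assignment of virtual channel IDs to parallel LZP processing channels.
    CHANNEL_ASSIGNMENT = {0: "LZP-A", 1: "LZP-A", 2: "LZP-B", 3: "LZP-B", 4: "LZP-C"}

    def steer(frames):
        """Route each telemetry frame to a processing channel according to its virtual channel ID."""
        queues = defaultdict(list)
        for frame in frames:
            lzp = CHANNEL_ASSIGNMENT.get(frame["vcid"])
            if lzp is None:
                continue                      # unknown virtual channel: drop or log in a real system
            queues[lzp].append(frame)
        return queues

    if __name__ == "__main__":
        frames = [{"vcid": i % 5, "payload": b"..."} for i in range(10)]
        for lzp, q in steer(frames).items():
            print(lzp, "received", len(q), "frames")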
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
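A minimal work-farm sketch using plain Python threads (the real platform uses hundreds of RISC processors connected by self-synchronizing channels): a parallel set of worker objects with one input stream and one output stream. The squaring step and queue names are invented stand-ins for the real per-item computation.

    import queue
    import threading

    def worker(in_q, out_q):
        """Worker object: consumes items from the single input stream, emits results on the output stream."""
        while True:
            item = in_q.get()
            if item is None:                # sentinel: shut this worker down
                break
            out_q.put(item * item)          # stand-in for the real per-item computation

    def work_farm(items, n_workers=4):
        in_q, out_q = queue.Queue(), queue.Queue()
        threads = [threading.Thread(target=worker, args=(in_q, out_q)) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for item in items:
            in_q.put(item)
        for _ in threads:                   # one sentinel per worker, queued after all work items
            in_q.put(None)
        for t in threads:
            t.join()
        return [out_q.get() for _ in items]

    if __name__ == "__main__":
        print(sorted(work_farm(range(10))))   # results arrive unordered, so sort for display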
Efficient Ada multitasking on a RISC register window architecture
NASA Technical Reports Server (NTRS)
Kearns, J. P.; Quammen, D.
1987-01-01
This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.
Design of an integrated airframe/propulsion control system architecture
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.; Torkelson, Thomas C.
1990-01-01
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance. A detailed account is given for the testing associated with a subset of the architecture and concludes with general observations of applying the methodology to the architecture.
An Intelligent Propulsion Control Architecture to Enable More Autonomous Vehicle Operation
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Sowers, T. Shane; Simon, Donald L.; Owen, A. Karl; Rinehart, Aidan W.; Chicatelli, Amy K.; Acheson, Michael J.; Hueschen, Richard M.; Spiers, Christopher W.
2018-01-01
This paper describes an intelligent propulsion control architecture that coordinates with the flight control to reduce the amount of pilot intervention required to operate the vehicle. Objectives of the architecture include the ability to: automatically recognize the aircraft operating state and flight phase; configure engine control to optimize performance with knowledge of engine condition and capability; enhance aircraft performance by coordinating propulsion control with flight control; and recognize off-nominal propulsion situations and to respond to them autonomously. The hierarchical intelligent propulsion system control can be decomposed into a propulsion system level and an individual engine level. The architecture is designed to be flexible to accommodate evolving requirements, adapt to technology improvements, and maintain safety.
A spacecraft computer repairable via command.
NASA Technical Reports Server (NTRS)
Fimmel, R. O.; Baker, T. E.
1971-01-01
The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design.
Plant Water Uptake in Drying Soils
Lobet, Guillaume; Couvreur, Valentin; Meunier, Félicien; Javaux, Mathieu; Draye, Xavier
2014-01-01
Over the last decade, investigations on root water uptake have evolved toward a deeper integration of the soil and roots compartment properties, with the goal of improving our understanding of water acquisition from drying soils. This evolution parallels the increasing attention of agronomists to suboptimal crop production environments. Recent results have led to the description of root system architectures that might contribute to deep-water extraction or to water-saving strategies. In addition, the manipulation of root hydraulic properties would provide further opportunities to improve water uptake. However, modeling studies highlight the role of soil hydraulics in the control of water uptake in drying soil and call for integrative soil-plant system approaches. PMID:24515834
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
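A toy sketch of the three-process split described above, with the DBMS server, the intermediate server, and the application client reduced to plain functions. STDB/NET is the interface named in the abstract, but the function names, request format, and in-memory table below are invented for illustration only.

    # Stand-in for the vendor DBMS server: evaluates "queries" against an in-memory table.
    OBSERVATIONS = [{"id": 1, "target": "M31"}, {"id": 2, "target": "M87"}]

    def dbms_server(vendor_query):
        return [row for row in OBSERVATIONS if row["target"] == vendor_query["target"]]

    def stdbnet_server(generic_request):
        """Intermediate server: translates a generic request into the DBMS-specific call and
        passes results back (deadlock restart, performance monitoring, etc. omitted)."""
        vendor_query = {"target": generic_request["where"]["target"]}
        return dbms_server(vendor_query)

    def application_client(target):
        """Application client issues generic queries; it never talks to the DBMS server directly."""
        return stdbnet_server({"select": "*", "where": {"target": target}})

    if __name__ == "__main__":
        print(application_client("M31"))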
2004-01-01
[Fragmentary excerpt] ... login identity to the one under which the system call is executed, the parameters of the system call execution (file names including full paths) ... Anomaly detection; COAST-EIMDT: distributed on target hosts; EMERALD: distributed on target hosts and security servers, signature recognition and anomaly ... uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a ...
Insulator function and topological domain border strength scale with architectural protein occupancy
2014-01-01
Background: Chromosome conformation capture studies suggest that eukaryotic genomes are organized into structures called topologically associating domains. The borders of these domains are highly enriched for architectural proteins with characterized roles in insulator function. However, a majority of architectural protein binding sites localize within topological domains, suggesting sites associated with domain borders represent a functionally different subclass of these regulatory elements. How topologically associating domains are established and what differentiates border-associated from non-border architectural protein binding sites remain unanswered questions. Results: By mapping the genome-wide target sites for several Drosophila architectural proteins, including previously uncharacterized profiles for TFIIIC and SMC-containing condensin complexes, we uncover an extensive pattern of colocalization in which architectural proteins establish dense clusters at the borders of topological domains. Reporter-based enhancer-blocking insulator activity as well as endogenous domain border strength scale with the occupancy level of architectural protein binding sites, suggesting co-binding by architectural proteins underlies the functional potential of these loci. Analyses in mouse and human stem cells suggest that clustering of architectural proteins is a general feature of genome organization, and conserved architectural protein binding sites may underlie the tissue-invariant nature of topologically associating domains observed in mammals. Conclusions: We identify a spectrum of architectural protein occupancy that scales with the topological structure of chromosomes and the regulatory potential of these elements. Whereas high occupancy architectural protein binding sites associate with robust partitioning of topologically associating domains and robust insulator function, low occupancy sites appear reserved for gene-specific regulation within topological domains. PMID:24981874
Integrating Computer Architectures into the Design of High-Performance Controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William
1986-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.
Arbex, D F; Jappur, R; Selig, P; Varvakis, G
2012-01-01
This article addresses the ergonomic criteria that guide the construction of an educational game called Environmental Simulator. The focus is on environment navigation, considering aspects of content architecture and its esthetics and functionality.
Object-Oriented Control System Design Using On-Line Training of Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rubaai, Ahmed
1997-01-01
This report deals with the object-oriented model development of a neuro-controller design for permanent magnet (PM) dc motor drives. The system under study is described as a collection of interacting objects. Each object module describes the object behaviors, called methods. The characteristics of the object are included in its variables. The knowledge of the object exists within its variables, and the performance is determined by its methods. This structure maps well to the real world objects that comprise the system being modeled. A dynamic learning architecture that possesses the capabilities of simultaneous on-line identification and control is incorporated to enforce constraints on connections and control the dynamics of the motor. The control action is implemented "on-line", in "real time" in such a way that the predicted trajectory follows a specified reference model. A design example of controlling a PM dc motor drive on-line shows the effectiveness of the design tool. This will therefore be very useful in aerospace applications. It is expected to provide an innovative and novel software model for the rocket engine numerical simulator executive.
NASA Technical Reports Server (NTRS)
Martin-Alvarez, A.; Hayati, S.; Volpe, R.; Petras, R.
1999-01-01
An advanced design and implementation of a Control Architecture for Long Range Autonomous Planetary Rovers is presented using a hierarchical top-down task decomposition, and the common structure of each design is described based on feedback control theory. Graphical programming is presented as a common intuitive language for the design when a large design team is composed of managers, architecture designers, engineers, programmers, and maintenance personnel. The whole design of the control architecture relies on the classic control concepts of cyclic data processing and event-driven reaction to achieve all the reasoning and behaviors needed. For this purpose, a commercial graphical tool that includes the mentioned control capabilities is presented. Message queues are used for inter-communication among control functions, allowing Artificial Intelligence (AI) reasoning techniques based on queue manipulation. Experimental results show a highly autonomous control system running in real time on top of the JPL micro-rover Rocky 7, simultaneously controlling several robotic devices. This paper validates the synergy between Artificial Intelligence and classic control concepts in an advanced Control Architecture for Long Range Autonomous Planetary Rovers.
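A minimal sketch, with invented function and queue names, of the message-queue style of inter-communication among control functions mentioned above: a cyclic data-processing step feeds a queue, and downstream functions react to the messages they receive.

    import queue

    # Message queues used for inter-communication among control functions (names invented).
    planner_q, wheels_q = queue.Queue(), queue.Queue()

    def perception_step():
        """Cyclic data processing: push the latest observation to the planner's queue."""
        planner_q.put({"obstacle_ahead": False})

    def planner_step():
        """Event-driven reaction: consume an observation and emit a drive command."""
        obs = planner_q.get()
        wheels_q.put("stop" if obs["obstacle_ahead"] else "forward")

    def drive_step():
        print("wheel command:", wheels_q.get())

    if __name__ == "__main__":
        for _ in range(3):          # a few control cycles
            perception_step()
            planner_step()
            drive_step()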
Navigation through unknown and dynamic open spaces using topological notions
NASA Astrophysics Data System (ADS)
Miguel-Tomé, Sergio
2018-04-01
Until now, most algorithms used for navigation have had the purpose of directing a system towards one point in space. However, humans communicate tasks by specifying spatial relations among elements or places. In addition, the environments in which humans develop their activities are extremely dynamic. The only option that allows for successful navigation in dynamic and unknown environments is making real-time decisions. Therefore, robots capable of collaborating closely with human beings must be able to make decisions based on the local information registered by the sensors and to interpret and express spatial relations. Furthermore, when a person is asked to perform a task in an environment, the task is communicated as a category of goals so the person does not need to be supervised. Thus, two problems appear when one wants to create multifunctional robots: how to navigate in dynamic and unknown environments using spatial relations, and how to accomplish this without supervision. In this article, a new architecture to address the two cited problems is presented, called the topological qualitative navigation architecture. In previous works, a qualitative heuristic called the heuristic of topological qualitative semantics (HTQS) was developed to establish and identify spatial relations. However, that heuristic only allows for establishing one spatial relation with a specific object. In contrast, navigation requires a temporal sequence of goals with different objects. The new architecture attains continuous generation of goals and resolves them using HTQS. Thus, the new architecture achieves autonomous navigation in dynamic or unknown open environments.
Hadoop-based implementation of processing medical diagnostic records for visual patient system
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo
2018-03-01
We have innovatively introduced the Visual Patient (VP) concept and method to visually represent and index patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017), which can enable a doctor to review a large amount of IDR of a patient in a limited appointed time slot. In this presentation, we present a new approach to the design of the data-processing architecture of the VP system (VPS) to acquire, process and store various kinds of IDR to build a VP instance for each patient in a hospital environment, based on the Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of the Hadoop batch-processing architecture and the Storm stream-processing architecture. The MIPS implemented parallel processing of various kinds of clinical data with high efficiency, coming from disparate hospital information systems such as PACS, RIS, LIS and HIS.
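A toy sketch of combining a batch path with a stream path to build and update per-patient indexes, in the spirit of the MIPS design described above; Hadoop and Storm are replaced here by plain Python stand-ins, and all record fields are invented.

    from collections import defaultdict

    def batch_build_visual_patient(historical_records):
        """Batch path (Hadoop-like role): fold the full history of imaging diagnostic records
        into per-patient indexes."""
        vp_index = defaultdict(list)
        for record in historical_records:
            vp_index[record["patient_id"]].append(record["study"])
        return vp_index

    def stream_update_visual_patient(vp_index, new_record):
        """Stream path (Storm-like role): fold a newly arriving record into the existing
        index with low latency."""
        vp_index[new_record["patient_id"]].append(new_record["study"])
        return vp_index

    if __name__ == "__main__":
        history = [{"patient_id": "P1", "study": "CT chest"}, {"patient_id": "P1", "study": "MR brain"}]
        index = batch_build_visual_patient(history)
        index = stream_update_visual_patient(index, {"patient_id": "P1", "study": "US abdomen"})
        print(index["P1"])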
NASA Astrophysics Data System (ADS)
Valtonen, Katariina; Leppänen, Mauri
Governments worldwide are concerned with the efficient production of services to customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning that embraces issues from the strategic to the technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to a large number of stakeholders, a wide set of customers, and the solid and hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect the issues that emerged against current e-government literature. We bring forth insights into and suggestions for government BA and its development.
NASA Astrophysics Data System (ADS)
Kelley, Troy D.; McGhee, S.
2013-05-01
This paper describes the ongoing development of a robotic control architecture that is inspired by computational cognitive architectures from the discipline of cognitive psychology. The Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS) combines symbolic and sub-symbolic representations of knowledge into a unified control architecture. The new architecture leverages previous work in cognitive architectures, specifically the development of the Adaptive Character of Thought-Rational (ACT-R) and Soar. This paper details current work on learning from episodes or events. The use of episodic memory as a learning mechanism has, until recently, been largely ignored by computational cognitive architectures. This paper details work on metric-level episodic memory streams and methods for translating episodes into abstract schemas. The presentation will include research on learning through novelty and self-generated feedback mechanisms for autonomous systems.
Miller, Christopher A; Parasuraman, Raja
2007-02-01
To develop a method enabling human-like, flexible supervisory control via delegation to automation. Real-time supervisory relationships with automation are rarely as flexible as human task delegation to other humans. Flexibility in human-adaptable automation can provide important benefits, including improved situation awareness, more accurate automation usage, more balanced mental workload, increased user acceptance, and improved overall performance. We review problems with static and adaptive (as opposed to "adaptable") automation; contrast these approaches with human-human task delegation, which can mitigate many of the problems; and revise the concept of a "level of automation" as a pattern of task-based roles and authorizations. We argue that delegation requires a shared hierarchical task model between supervisor and subordinates, used to delegate tasks at various levels, and offer instruction on performing them. A prototype implementation called Playbook is described. On the basis of these analyses, we propose methods for supporting human-machine delegation interactions that parallel human-human delegation in important respects. We develop an architecture for machine-based delegation systems based on the metaphor of a sports team's "playbook." Finally, we describe a prototype implementation of this architecture, with an accompanying user interface and usage scenario, for mission planning for uninhabited air vehicles. Delegation offers a viable method for flexible, multilevel human-automation interaction to enhance system performance while maintaining user workload at a manageable level. Most applications of adaptive automation (aviation, air traffic control, robotics, process control, etc.) are potential avenues for the adaptable, delegation approach we advocate. We present an extended example for uninhabited air vehicle mission planning.
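A small sketch of delegation against a shared hierarchical task model, in the spirit of the Playbook metaphor described above. The task names, classes, and expansion rule are hypothetical, not the actual Playbook implementation.

    # A shared hierarchical task model: tasks and their subtasks (names invented).
    TASK_MODEL = {
        "ingress": ["plan_route", "fly_route"],
        "search_area": ["select_pattern", "fly_pattern", "report_contacts"],
    }

    class Subordinate:
        def perform(self, task):
            subtasks = TASK_MODEL.get(task)
            if subtasks:                                  # delegated at a high level: expand it ourselves
                return [self.perform(t) for t in subtasks]
            return f"executed {task}"                     # delegated at a low level: just do it

    class Supervisor:
        def __init__(self, subordinate):
            self.subordinate = subordinate
        def delegate(self, task, constraints=None):
            """Delegate a 'play' at any level of the shared task model, with optional instructions."""
            print(f"delegating {task!r} with constraints {constraints}")
            return self.subordinate.perform(task)

    if __name__ == "__main__":
        uav = Subordinate()
        print(Supervisor(uav).delegate("ingress"))              # high-level delegation
        print(Supervisor(uav).delegate("report_contacts"))      # low-level delegation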
Neural Architectures for Control
NASA Technical Reports Server (NTRS)
Peterson, James K.
1991-01-01
The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
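A minimal CMAC-style tile-coding sketch, assuming a one-dimensional input and a dictionary-backed weight table; the tiling parameters and update rule are simplified relative to the systems described above, but they illustrate the on-line learn/predict cycle that makes CMAC attractive for real-time control.

    import math

    class CMAC:
        """Minimal CMAC-style tile coder: several overlapping tilings, each contributing one weight."""
        def __init__(self, n_tilings=8, tile_width=0.5, lr=0.1):
            self.n_tilings, self.tile_width, self.lr = n_tilings, tile_width, lr
            self.weights = {}

        def _tiles(self, x):
            for t in range(self.n_tilings):
                offset = t * self.tile_width / self.n_tilings
                yield (t, math.floor((x + offset) / self.tile_width))

        def predict(self, x):
            return sum(self.weights.get(tile, 0.0) for tile in self._tiles(x))

        def learn(self, x, target):
            error = target - self.predict(x)
            for tile in self._tiles(x):
                self.weights[tile] = self.weights.get(tile, 0.0) + self.lr * error / self.n_tilings

    if __name__ == "__main__":
        cmac = CMAC()
        for _ in range(200):                       # learn y = sin(x) on-line over a few sample points
            for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
                cmac.learn(x, math.sin(x))
        print(round(cmac.predict(1.0), 3), "vs", round(math.sin(1.0), 3))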
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
NASA Technical Reports Server (NTRS)
Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.
1985-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.
NASA Astrophysics Data System (ADS)
Mortier, S.; Van Daele, K.; Meganck, L.
2017-08-01
Heritage organizations in Flanders started using thesauri fairly recently compared to other countries. This paper starts by examining the historical use of thesauri and controlled vocabularies in computer systems by the Flemish Government dealing with immovable cultural heritage. Their evolution from simple, flat, controlled lists to actual thesauri with scope notes, hierarchical and equivalence relations and links to other thesauri will be discussed. An explanation will be provided for the evolution in our approach to controlled vocabularies, and how they radically changed querying and the way data is indexed in our systems. Technical challenges inherent to complex thesauri and how to overcome them will be outlined. With these issues solved, thesauri have become an essential feature of the Flanders Heritage inventory management system. The number of vocabularies rose over the years and they became an essential tool for integrating heritage from different disciplines. As a final improvement, thesauri went from being a core part of one application (the inventory management system) to forming an essential part of a new general resource-oriented system architecture for Flanders Heritage influenced by Linked Data. For this purpose, a generic SKOS-based editor was created. Because the SKOS model is generic enough to be used outside of Flanders Heritage, the decision was made early on to develop this editor as an open source project called Atramhasis and share it with the larger heritage world.
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
The software architecture plays an important role in the distributed control system of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
Johnstone, Megan-Jane
2016-12-01
Over the past several years increasing attention has been given to the social engineering process of 'nudging' (also called 'choice architecture') and its impact as a mechanism designed to deliberately manipulate and incentivise people to think and act in a presumably beneficial direction.
An Extensible Model and Analysis Framework
2010-11-01
Eclipse or Netbeans Rich Client Platform (RCP). We call this the Triquetrum Project. Configuration files support narrower variability than Triquetrum/RCP...Triquetrum/RCP supports assembling in arbitrary ways. (12/08 presentation) 2. Prototyped OSGi component architecture for use with Netbeans and
Differential geometric treewidth estimation in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Wang, Chi; Jonckheere, Edmond; Brun, Todd
2016-10-01
The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The minor embedding problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The proposed approximation runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
Thrifty: An Exascale Architecture for Energy Proportional Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torrellas, Josep
2014-12-23
The objective of this project is to design different aspects of a novel exascale architecture called Thrifty. Our goal is to focus on the challenges of power/energy efficiency, performance, and resiliency in exascale systems. The project includes work on computer architecture (Josep Torrellas from University of Illinois), compilation (Daniel Quinlan from Lawrence Livermore National Laboratory), runtime and applications (Laura Carrington from University of California San Diego), and circuits (Wilfred Pinfold from Intel Corporation). In this report, we focus on the progress at the University of Illinois during the last year of the grant (September 1, 2013 to August 31, 2014). We also point to the progress in the other collaborating institutions when needed.
NASA Technical Reports Server (NTRS)
Tick, Evan
1987-01-01
This note describes an efficient software emulator for the Warren Abstract Machine (WAM) Prolog architecture. The version of the WAM implemented is called Lcode. The Lcode emulator, written in C, executes the 'naive reverse' benchmark at 3900 LIPS. The emulator is one of a set of tools used to measure the memory-referencing characteristics and performance of Prolog programs. These tools include a compiler, assembler, and memory simulators. An overview of the Lcode architecture is given here, followed by a description and listing of the emulator code implementing each Lcode instruction. This note will be of special interest to those studying the WAM and its performance characteristics. In general, this note will be of interest to those creating efficient software emulators for abstract machine architectures.
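As a companion illustration, the following is a minimal sketch of the dispatch loop at the heart of any software emulator for an abstract machine. The opcodes and the tiny stack machine below are hypothetical stand-ins, not the Lcode/WAM instruction set.

```python
# Minimal sketch of a software emulator's instruction dispatch loop.
# The instruction set here is an invented illustration, not Lcode.

def run(program, env=None):
    """Execute a list of (opcode, operand) pairs on a tiny stack machine."""
    stack, pc = [], 0
    env = env or {}
    while pc < len(program):
        op, arg = program[pc]
        if op == "push":            # push a literal onto the stack
            stack.append(arg)
        elif op == "load":          # push the value bound to a variable name
            stack.append(env[arg])
        elif op == "add":           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "jump_if_zero":  # conditional branch to an absolute address
            if stack.pop() == 0:
                pc = arg
                continue
        elif op == "halt":
            break
        pc += 1
    return stack

# Example: compute 2 + 3 on the toy machine.
print(run([("push", 2), ("push", 3), ("add", None), ("halt", None)]))  # [5]
```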
Identification and Control of Aircrafts using Multiple Models and Adaptive Critics
NASA Technical Reports Server (NTRS)
Principe, Jose C.
2007-01-01
We compared two possible implementations of local linear models for control: one approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. The gating function is therefore hard (a single local model represents the regional dynamics). This simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate in a probabilistic framework based on a Gaussian Mixture Model (also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise rejection characteristics. Our experiments showed that the SOM provides the overall best performance at high SNRs, but the performance degrades faster than with the GMM under the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost, the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of the parameters. Either one of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN). Therefore, in essence the local model approach warrants practical implementations. In order to call the attention of the control community to this design methodology, we successfully extended the multiple model approach to PID controllers (still today the most widely used control scheme in industry) and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space where the desired response is optimally projected. This architecture trades training efficiency for a large increase in the dimension of the recurrent layer. However, the power of recurrent neural networks can be brought to bear on practical, difficult problems. Our goal was to implement an adaptive critic architecture implementing Bellman's approach to optimal control. However, we could only characterize the ESN performance as a critic in value function evaluation, which is just one of the pieces of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.
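A small numerical sketch can make the hard-gate versus soft-gate distinction concrete. The model parameters and operating points below are illustrative assumptions, not those of the cited study.

```python
# Hedged sketch: a hard gate picks a single local linear model per sample,
# while a soft gate blends several models with Gaussian-mixture responsibilities.
import numpy as np

def gaussian(x, mean, cov):
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        (2 * np.pi) ** len(x) * np.linalg.det(cov))

def hard_gate_predict(x, centers, models):
    """SOM-style: the winning prototype selects exactly one local model."""
    winner = np.argmin([np.linalg.norm(x - c) for c in centers])
    A, b = models[winner]
    return A @ x + b

def soft_gate_predict(x, means, covs, weights, models):
    """GMM-style: every local model contributes, weighted by its responsibility."""
    resp = np.array([w * gaussian(x, m, c) for w, m, c in zip(weights, means, covs)])
    resp = resp / resp.sum()
    return sum(r * (A @ x + b) for r, (A, b) in zip(resp, models))

# Two local models around two operating points of a 2-D plant state.
centers = [np.zeros(2), np.ones(2)]
models = [(np.eye(2), np.zeros(2)), (2 * np.eye(2), np.ones(2))]
x = np.array([0.3, 0.4])
print(hard_gate_predict(x, centers, models))
print(soft_gate_predict(x, centers, [np.eye(2)] * 2, [0.5, 0.5], models))
```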
An Architecture to Promote the Commercialization of Space Mission Command and Control
NASA Technical Reports Server (NTRS)
Jones, Michael K.
1996-01-01
This paper describes a command and control architecture that encompasses space mission operations centers, ground terminals, and spacecraft. This architecture is intended to promote the growth of a lucrative space mission operations command and control market through a set of open standards used by both government and profit-making space mission operators.
Advanced computer architecture specification for automated weld systems
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.
A Machine Learning Method for Power Prediction on the Mobile Devices.
Chen, Da-Ren; Chen, You-Shyang; Chen, Lin-Chih; Hsu, Ming-Yang; Chiang, Kai-Feng
2015-10-01
Energy profiling and estimation have been popular areas of research in multicore mobile architectures. While short sequences of system calls have been recognized by machine learning as pattern descriptions for anomaly detection, the power consumption of running processes with respect to system-call patterns is not well studied. In this paper, we propose a fuzzy neural network (FNN) for training and analyzing process execution behaviour with respect to series of system calls, their parameters, and their power consumption. On the basis of the patterns of a series of system calls, we develop a power estimation daemon (PED) to analyze and predict the energy consumption of the running process. In the initial stage, PED categorizes sequences of system calls as functional groups and predicts their energy consumption by FNN. In the operational stage, PED is applied to identify the predefined sequences of system calls invoked by running processes and estimates their energy consumption.
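The two-stage idea (group short system-call sequences, then sum their predicted costs) can be sketched as follows. A lookup table stands in for the paper's fuzzy neural network, and all groups and energy figures are invented for illustration.

```python
# Illustrative sketch: estimate a process's energy by recognizing known
# system-call groups in its trace and summing per-group predictions.
from collections import Counter

# Hypothetical per-group energy predictions (millijoules per occurrence).
GROUP_ENERGY = {("open", "read", "close"): 1.8,
                ("socket", "send"): 3.2,
                ("mmap", "write"): 2.5}

def matched_groups(trace):
    """Scan the trace for any known group starting at any position."""
    for i in range(len(trace)):
        for group in GROUP_ENERGY:
            if tuple(trace[i:i + len(group)]) == group:
                yield group

def estimate_energy(trace):
    counts = Counter(matched_groups(trace))
    return sum(GROUP_ENERGY[g] * n for g, n in counts.items())

trace = ["open", "read", "close", "socket", "send", "open", "read", "close"]
print(estimate_energy(trace))  # 6.8: two file-I/O groups plus one networking group
```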
Air Traffic Control: Complete and Enforced Architecture Needed for FAA Systems Modernization
DOT National Transportation Integrated Search
1997-02-01
Because of the size, complexity, and importance of FAA's air traffic control (ATC) modernization, the General Accounting Office (GAO) reviewed it to determine (1) whether FAA has a target architecture(s), and associated subarchitectures, to gui...
Autonomous control systems: applications to remote sensing and image processing
NASA Astrophysics Data System (ADS)
Jamshidi, Mohammad
2001-11-01
One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear, interconnected, and information on the system is uncertain such that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, integrated manufacturing plants, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and the enhancement of analog and digital images.
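As a small illustration of the kind of fuzzy-logic building block such soft-computing controllers rely on, the sketch below uses triangular membership functions, three rules, and centroid defuzzification. The rule base and scaling are invented for illustration, not taken from the paper.

```python
# Minimal fuzzy controller sketch: three linguistic terms on the error signal.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Degrees of membership of the error in negative / zero / positive terms.
    neg, zero, pos = tri(error, -2, -1, 0), tri(error, -1, 0, 1), tri(error, 0, 1, 2)
    # Rule outputs: negative error -> push up, positive error -> push down.
    actions = {1.0: neg, 0.0: zero, -1.0: pos}
    total = sum(actions.values()) or 1.0
    return sum(u * w for u, w in actions.items()) / total  # centroid defuzzification

print(fuzzy_control(0.4))  # -0.4: a small corrective action opposing the positive error
```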
An Experiment of GMPLS-Based Dispersion Compensation Control over In-Field Fibers
NASA Astrophysics Data System (ADS)
Seno, Shoichiro; Horiuchi, Eiichi; Yoshida, Sota; Sugihara, Takashi; Onohara, Kiyoshi; Kamei, Misato; Baba, Yoshimasa; Kubo, Kazuo; Mizuochi, Takashi
As ROADMs (Reconfigurable Optical Add/Drop Multiplexers) are becoming widely used in metro/core networks, distributed control of wavelength paths by extended GMPLS (Generalized MultiProtocol Label Switching) protocols has attracted much attention. For the automatic establishment of an arbitrary wavelength path satisfying dynamic traffic demands over a ROADM or WXC (Wavelength Cross Connect)-based network, precise determination of chromatic dispersion over the path and optimized assignment of dispersion compensation capabilities at related nodes are essential. This paper reports an experiment over in-field fibers where GMPLS-based control was applied for the automatic discovery of chromatic dispersion, path computation, and wavelength path establishment with dynamic adjustment of variable dispersion compensation. The GMPLS-based control scheme, which the authors called GMPLS-Plus, extended GMPLS's distributed control architecture with attributes for automatic discovery, advertisement, and signaling of chromatic dispersion. In this experiment, wavelength paths with distances of 24km and 360km were successfully established and error-free data transmission was verified. The experiment also confirmed path restoration with dynamic compensation adjustment upon fiber failure.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
Portable data collection terminal in the automated power consumption measurement system
NASA Astrophysics Data System (ADS)
Vologdin, S. V.; Shushkov, I. D.; Bysygin, E. K.
2018-01-01
Increasing the efficiency and automating the process of electric energy data collection and processing is very important at present. The high cost of classic electric energy billing systems prevents their mass application. The Udmurtenergo Branch of IDGC of Center and Volga Region developed an electronic automated system called “Mobile Energy Billing” based on data collection terminals. The system joins electronic components based on a service-oriented architecture and WCF services. At present, all parts of the Udmurtenergo Branch electric network are connected to the “Mobile Energy Billing” project. System capabilities can be expanded thanks to the flexible architecture.
Design and Implementation of an Enterprise Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Jing; Zhao, Huiqun; Wang, Ka; Zhang, Houyong; Hu, Gongzhu
Since the notion of the "Internet of Things" (IoT) was introduced about 10 years ago, most IoT research has focused on higher-level issues, such as strategies, architectures, standardization, and enabling technologies, but studies of real cases of IoT are still lacking. In this paper, a real case of the Internet of Things, called ZB IoT, is introduced. It combines the Service Oriented Architecture (SOA) with EPCglobal standards in the system design, and focuses on the security and extensibility of IoT in its implementation.
Flynn, Allen J; Bahulekar, Namita; Boisvert, Peter; Lagoze, Carl; Meng, George; Rampton, James; Friedman, Charles P
2017-01-01
Throughout the world, biomedical knowledge is routinely generated and shared through primary and secondary scientific publications. However, there is too much latency between publication of knowledge and its routine use in practice. To address this latency, what is actionable in scientific publications can be encoded to make it computable. We have created a purpose-built digital library platform to hold, manage, and share actionable, computable knowledge for health called the Knowledge Grid Library. Here we present it with its system architecture.
Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman
2008-08-04
Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and considers wireless video sensor node constraints, such as limited processing and energy resources, while video quality is preserved on the receiver side. The compression protocol, transport protocol, and routing protocol are proposed in the application, transport, and network layers respectively; a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.
Tang, Haijing; Wang, Siye; Zhang, Yanjun
2013-01-01
Clustering has become a common trend in very long instruction word (VLIW) architectures to solve the problem of area, energy consumption, and design complexity. Register-file-connected clustered (RFCC) VLIW architecture uses the mechanism of a global register file to accomplish the inter-cluster data communications, thus eliminating the performance and energy consumption penalty caused by explicit inter-cluster data move operations in traditional bus-connected clustered (BCC) VLIW architecture. However, the limited number of access ports to the global register file has become an issue which must be well addressed; otherwise the performance and energy consumption would be harmed. In this paper, we present compiler optimization techniques for an RFCC VLIW architecture called Lily, which is designed for encryption systems. These techniques aim at optimizing performance and energy consumption for the Lily architecture, through appropriate manipulation of the code generation process to maintain better management of the accesses to the global register file. All the techniques have been implemented and evaluated. The results show that our techniques can significantly reduce the penalty of performance and energy consumption due to the access port limitation of the global register file. PMID:23970841
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velsko, Stephan; Bates, Thomas
Despite numerous calls for improvement, the U.S. biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for “situational awareness,” but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure, national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in ‘big data’ analytics and learning inference engines—a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and, (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the U.S. national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration within the U.S. government.
Velsko, Stephan; Bates, Thomas
2016-06-17
Despite numerous calls for improvement, the U.S. biosurveillance enterprise remains a patchwork of uncoordinated systems that fail to take advantage of the rapid progress in information processing, communication, and analytics made in the past decade. By synthesizing components from the extensive biosurveillance literature, we propose a conceptual framework for a national biosurveillance architecture and provide suggestions for implementation. The framework differs from the current federal biosurveillance development pathway in that it is not focused on systems useful for “situational awareness,” but is instead focused on the long-term goal of having true warning capabilities. Therefore, a guiding design objective is the ability to digitally detect emerging threats that span jurisdictional boundaries, because attempting to solve the most challenging biosurveillance problem first provides the strongest foundation to meet simpler surveillance objectives. Core components of the vision are: (1) a whole-of-government approach to support currently disparate federal surveillance efforts that have a common data need, including those for food safety, vaccine and medical product safety, and infectious disease surveillance; (2) an information architecture that enables secure, national access to electronic health records, yet does not require that data be sent to a centralized location for surveillance analysis; (3) an inference architecture that leverages advances in ‘big data’ analytics and learning inference engines—a significant departure from the statistical process control paradigm that underpins nearly all current syndromic surveillance systems; and, (4) an organizational architecture with a governance model aimed at establishing national biosurveillance as a critical part of the U.S. national infrastructure. Although it will take many years to implement, and a national campaign of education and debate to acquire public buy-in for such a comprehensive system, the potential benefits warrant increased consideration within the U.S. government.
Design and Implementation of Davis Social Links OSN Kernel
NASA Astrophysics Data System (ADS)
Tran, Thomas; Chan, Kelcey; Ye, Shaozhi; Bhattacharyya, Prantik; Garg, Ankush; Lu, Xiaoming; Wu, S. Felix
Social network popularity continues to rise as these networks broaden out to more users. Hidden away within these social networks is a valuable set of data that outlines everyone’s relationships. Networks have created APIs such as the Facebook Development Platform and OpenSocial that allow developers to create applications that can leverage user information. However, at the current stage, the social network support for these new applications is fairly limited in its functionality. Most, if not all, of the existing internet applications such as email, BitTorrent, and Skype cannot benefit from the valuable social network among their own users. In this paper, we present an architecture that couples two different communication layers together: the end2end communication layer and the social context layer, under the Davis Social Links (DSL) project. Our proposed architecture attempts to preserve the original application semantics (i.e., we can use Thunderbird or Outlook, unmodified, to read our SMTP emails) and provides the communicating parties (email sender and receivers) a social context for control and management. For instance, the receiver can set trust policy rules based on the social context between the pair, to determine how a particular email in question should be prioritized for delivery to the SMTP layer. Furthermore, as our architecture includes two coupling layers, it is then possible, as an option, to shift some of the services from the original applications into the social context layer. In the context of email, for example, our architecture allows users to choose operations, such as reply, reply-all, and forward, to be realized in either the application layer or the social network layer. The realization of these operations under the social network layer offers powerful features unavailable in the original applications. To validate our coupling architecture, we have implemented a DSL kernel prototype as a Facebook application called CyrusDSL (currently with about 40 local users), together with a simple communication application that is coupled to the DSL kernel but unaware of Facebook’s API.
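A hedged sketch of the coupling described above: an application-layer message is annotated with social context, and a receiver-side trust policy decides how to prioritize it before it is handed to the ordinary SMTP layer. The policy thresholds and the smtp_deliver stub are hypothetical, not the DSL kernel's actual interfaces.

```python
# Sketch of a trust-policy gate in the social context layer.
from dataclasses import dataclass

@dataclass
class SocialContext:
    hops: int      # length of the social path between sender and receiver
    trust: float   # aggregated trust score along that path, 0..1

def prioritize(message, ctx, min_trust=0.5, max_hops=3):
    """Return a delivery class derived from the social context layer."""
    if ctx.trust >= min_trust and ctx.hops <= max_hops:
        return "inbox"
    return "deferred" if ctx.trust > 0.2 else "quarantine"

def smtp_deliver(message, mailbox):
    # Stand-in for the real hand-off to the SMTP layer.
    print(f"delivering to {mailbox}: {message!r}")

smtp_deliver("status update", prioritize("status update", SocialContext(hops=2, trust=0.8)))
```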
NASA Technical Reports Server (NTRS)
Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik
2011-01-01
Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.
A future-proof architecture for telemedicine using loose-coupled modules and HL7 FHIR.
Gøeg, Kirstine Rosenbeck; Rasmussen, Rune Kongsgaard; Jensen, Lasse; Wollesen, Christian Møller; Larsen, Søren; Pape-Haugaard, Louise Bilenberg
2018-07-01
Most telemedicine solutions are proprietary and disease specific, which causes a heterogeneous and silo-oriented system landscape with limited interoperability. Solving the interoperability problem would require a strong focus on data integration and standardization in telemedicine infrastructures. Our objective was to suggest a future-proof architecture that consists of small loose-coupled modules to allow flexible integration with new and existing services, and that uses international standards to allow high re-usability of modules and interoperability in the health IT landscape. We identified the core features of our future-proof architecture as the following: (1) To provide extended functionality the system should be designed as a core with modules. Database handling and implementation of security protocols are modules, to improve flexibility compared to other frameworks. (2) To ensure loosely coupled modules the system should implement an inversion of control mechanism. (3) A focus on ease of implementation requires that the system use HL7 FHIR (Fast Healthcare Interoperability Resources) as the primary standard because it is based on web technologies. We evaluated the feasibility of our architecture by developing an open source implementation of the system called ORDS. ORDS is written in TypeScript, and makes use of the Express Framework and HL7 FHIR DSTU2. The code is distributed on GitHub. All modules have been tested at the unit level, but end-to-end testing awaits our first clinical example implementations. Our study showed that highly adaptable and yet interoperable core frameworks for telemedicine can be designed and implemented. Future work includes implementation of a clinical use case and evaluation. Copyright © 2018 Elsevier B.V. All rights reserved.
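The inversion-of-control idea named as core feature (2) can be sketched as a small registry: the core never constructs its modules directly, it asks the container for whatever implementation was registered for a role. The sketch is in Python for consistency with the other examples in this document, although the cited ORDS implementation is written in TypeScript, and the class names are invented.

```python
# Inversion-of-control sketch: register module factories, resolve them on demand.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, role, factory):
        self._factories[role] = factory

    def resolve(self, role):
        return self._factories[role]()   # construct lazily, on demand

class InMemoryStore:
    def save(self, resource):
        print("stored", resource)

container = Container()
container.register("storage", InMemoryStore)   # swap in another backend later

core_storage = container.resolve("storage")
core_storage.save({"resourceType": "Observation", "status": "final"})
```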
A parallel-pipelined architecture for a multi carrier demodulator
NASA Astrophysics Data System (ADS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-03-01
Analog devices have been used for processing the information on board the satellites. Presently, digital devices are being used because they are economical and flexible as compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high speed components. The design is optimized for reduced power and area requirements. This is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms that are used are verified by the use of a computer aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-01-01
Analog devices have been used for processing the information on board the satellites. Presently, digital devices are being used because they are economical and flexible as compared to their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirement of the user. An economical scheme of transmission for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market. These channels usually pertain to either voice or data transmission. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. This design uses principles of parallel processing, pipelining, and time sharing schemes to process large numbers of voice or data channels. It maintains the optimum throughput which is derived from the designed architecture and from the use of high speed components. The design is optimized for reduced power and area requirements. This is essential for satellite applications. The design is also flexible in processing a group of a varying number of channels. The algorithms that are used are verified by the use of a computer aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. Also, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint. A hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
Wright, Adam; Sittig, Dean F
2008-12-01
In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support. The most salient of these are:
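An illustrative sketch of the service-oriented pattern described above: clinical information systems call decision-support services through one common interface, so a drug-interaction checker and any other service are interchangeable endpoints. The interface, payloads, and interaction table are hypothetical, not the actual SANDS message formats.

```python
# Sketch of a common decision-support service interface and one example service.
from typing import Protocol

class DecisionSupportService(Protocol):
    def evaluate(self, patient_data: dict) -> list[str]: ...

class DrugInteractionChecker:
    # Hypothetical interaction table for illustration only.
    INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

    def evaluate(self, patient_data):
        meds = set(patient_data.get("medications", []))
        return [msg for pair, msg in self.INTERACTIONS.items() if pair <= meds]

def run_checks(patient_data, services):
    alerts = []
    for service in services:          # every service is called through the same interface
        alerts.extend(service.evaluate(patient_data))
    return alerts

print(run_checks({"medications": ["warfarin", "aspirin"]}, [DrugInteractionChecker()]))
```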
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large Scientific Equipments are controlled by Computer Systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.
The K9 On-Board Rover Architecture
NASA Technical Reports Server (NTRS)
Bresina, John L.; Bualat, Maria; Fair, Michael; Washington, Richard; Wright, Anne
2006-01-01
This paper describes the software architecture of NASA Ames Research Center's K9 rover. The goal of the onboard software architecture team was to develop a modular, flexible framework that would allow both high- and low-level control of the K9 hardware. Examples of low-level control are the simple drive or pan/tilt commands which are handled by the resource managers, and examples of high-level control are the command sequences which are handled by the conditional executive. In between these two control levels are complex behavioral commands which are handled by the pilot, such as drive to goal with obstacle avoidance or visually servo to a target. This paper presents the design of the architecture as of Fall 2000. We describe the state of the architecture implementation as well as its current evolution. An early version of the architecture was used for K9 operations during a dual-rover field experiment conducted by NASA Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) from May 14 to May 16, 2000.
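A hedged sketch of the three control levels described above: low-level resource managers execute simple commands, a "pilot" layer composes them into behaviors, and an executive runs a sequence of high-level commands. The class names, commands, and step sizes are illustrative, not the actual K9 interfaces.

```python
# Layered command dispatch sketch: resource manager -> pilot -> executive.
class DriveManager:                      # low-level resource manager
    def drive(self, meters):
        print(f"driving {meters} m")

class Pilot:                             # behavioral layer built on the managers
    def __init__(self, drive):
        self.drive = drive

    def drive_to_goal(self, distance, obstacle_detected):
        step, travelled = 0.5, 0.0
        while travelled < distance and not obstacle_detected():
            self.drive.drive(step)
            travelled += step

class Executive:                         # runs a sequence of high-level commands
    def run(self, plan, pilot):
        for action, args in plan:
            getattr(pilot, action)(*args)

pilot = Pilot(DriveManager())
Executive().run([("drive_to_goal", (1.5, lambda: False))], pilot)
```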
Doing It Right: 366 answers to computing questions you didn't know you had
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herring, Stuart Davis
Slides include information on: history; version control; version control: branches; version control: Git; releases; requirements; readability; readability: control flow; global variables; architecture; architecture: redundancy; processes; input/output; Unix; etcetera.
Stable architectures for deep neural networks
NASA Astrophysics Data System (ADS)
Haber, Eldad; Ruthotto, Lars
2018-01-01
Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
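A minimal numerical sketch of the ODE view of forward propagation discussed above: a residual network is read as a forward-Euler discretization of dy/dt = f(y), and stability is encouraged by keeping the layer Jacobian's eigenvalues near the imaginary axis, here via an antisymmetric weight matrix. This illustrates the idea under those assumptions; it is not the paper's exact scheme, and the width, depth, and step size are arbitrary.

```python
# Forward-Euler residual propagation with an antisymmetric weight matrix.
import numpy as np

rng = np.random.default_rng(0)
width, depth, h = 4, 20, 0.1           # network width, number of layers, step size

W = rng.normal(size=(width, width))
W_antisym = 0.5 * (W - W.T)            # antisymmetric: purely imaginary eigenvalues
b = np.zeros(width)

def forward(y0):
    y = y0
    for _ in range(depth):             # y_{k+1} = y_k + h * tanh(W y_k + b)
        y = y + h * np.tanh(W_antisym @ y + b)
    return y

y0 = rng.normal(size=width)
# Ratio of output to input norm stays moderate: activations neither explode nor vanish.
print(np.linalg.norm(forward(y0)) / np.linalg.norm(y0))
```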
NASA Technical Reports Server (NTRS)
Garcia, Jerry L.; McCleskey, Carey M.; Bollo, Timothy R.; Rhodes, Russel E.; Robinson, John W.
2012-01-01
This paper presents a structured approach for achieving a compatible Ground System (GS) and Flight System (FS) architecture that is affordable, productive and sustainable. This paper is an extension of the paper titled "Approach to an Affordable and Productive Space Transportation System" by McCleskey et al. This paper integrates systems engineering concepts and operationally efficient propulsion system concepts into a structured framework for achieving GS and FS compatibility in the mid-term and long-term time frames. It also presents a functional and quantitative relationship for assessing system compatibility called the Architecture Complexity Index (ACI). This paper: (1) focuses on systems engineering fundamentals as it applies to improving GS and FS compatibility; (2) establishes mid-term and long-term spaceport goals; (3) presents an overview of transitioning a spaceport to an airport model; (4) establishes a framework for defining a ground system architecture; (5) presents the ACI concept; (6) demonstrates the approach by presenting a comparison of different GS architectures; and (7) presents a discussion on the benefits of using this approach with a focus on commonality.
Advanced Technology Lifecycle Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Mankins, John C.
2004-01-01
Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts into system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is satisfied with the system configurations, technology portfolios, and deployment strategies, he or she can present the concepts to a team, which will conduct a detailed, discipline-oriented analysis within a CEE. An analog to this approach is the music industry, where a songwriter creates the lyrics and music before entering a recording studio.
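The workflow outlined above (pick system models from a library, push shared parameters into them, accumulate mass and cost over a campaign) can be sketched very simply. The models and cost factors below are invented placeholders, not ATLAS data.

```python
# Toy campaign integrator: sum launched mass and cost over selected system models.
MODEL_LIBRARY = {
    "launcher": {"dry_mass_kg": 20000, "cost_per_kg": 10000},
    "depot":    {"dry_mass_kg": 8000,  "cost_per_kg": 25000},
}

def run_campaign(selected_models, missions_per_year, years):
    total_mass = total_cost = 0.0
    for _ in range(years):
        for name in selected_models:
            model = MODEL_LIBRARY[name]
            total_mass += model["dry_mass_kg"] * missions_per_year
            total_cost += model["dry_mass_kg"] * model["cost_per_kg"] * missions_per_year
    return total_mass, total_cost

mass, cost = run_campaign(["launcher", "depot"], missions_per_year=2, years=5)
print(f"launched mass: {mass:.0f} kg, campaign cost: ${cost:,.0f}")
```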
NASA Technical Reports Server (NTRS)
Jethwa, Dipan; Selmic, Rastko R.; Figueroa, Fernando
2008-01-01
This paper presents a concept of feedback control for smart actuators that are compatible with smart sensors, communication protocols, and a hierarchical Integrated System Health Management (ISHM) architecture developed by NASA's Stennis Space Center. Smart sensors and actuators typically provide functionalities such as automatic configuration, system condition awareness and self-diagnosis. Spacecraft and rocket test facilities are in the early stages of adopting these concepts. The paper presents a concept combining the IEEE 1451-based ISHM architecture with a transducer health monitoring capability to enhance the control process. A control system testbed for intelligent actuator control, with on-board ISHM capabilities, has been developed and implemented. Overviews of the IEEE 1451 standard, the smart actuator architecture, and control based on this architecture are presented.
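A hedged sketch of the central idea: a feedback loop that consults a transducer's self-reported health before trusting its measurement, falling back to the last good value when the sensor flags itself as degraded. The health states and gains are illustrative, not the IEEE 1451 / ISHM data structures.

```python
# Health-aware feedback step: ignore a degraded sensor and reuse the last good value.
def control_step(setpoint, sensor_reading, sensor_health, last_good, gain=0.8):
    measurement = sensor_reading if sensor_health == "OK" else last_good
    command = gain * (setpoint - measurement)
    return command, measurement        # measurement becomes the next "last good" value

last_good = 0.0
for reading, health in [(1.0, "OK"), (9.9, "DEGRADED"), (1.2, "OK")]:
    command, last_good = control_step(setpoint=2.0, sensor_reading=reading,
                                      sensor_health=health, last_good=last_good)
    print(f"health={health:9s} command={command:+.2f}")
```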
NASA Technical Reports Server (NTRS)
Albus, James S.; Mccain, Harry G.; Lumia, Ronald
1989-01-01
The document describes the NASA Standard Reference Model (NASREM) Architecture for the Space Station Telerobot Control System. It defines the functional requirements and high-level specifications of the control system, serving as the NASA Space Station document for the functional specification, and as a guideline for the development of the control system architecture, of the IOC Flight Telerobot Servicer. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitates software design, development, validation, and test, and makes possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
A Low-Cost Part-Task Flight Training System: An Application of a Head Mounted Display
1990-12-01
architecture. The task at hand was to develop a software emulation library that would emulate the function calls used within the Flight and Dog programs. This...represented in two hexadecimal digits for each color. The format of the packed long integer looks like aaggbbrr with each color value representing a...Western Digital ethernet card as the cheapest compatible card available. Good fortune arrived, as I was calling to order the card, I saw an unused card
Test Program of the "Combined Data and Power Management Infrastructure"
NASA Astrophysics Data System (ADS)
Eickhoff, Jens; Fritz, Michael; Witt, Rouven; Bucher, Nico; Roser, Hans-Peter
2013-08-01
As already published in previous DASIA papers, the University of Stuttgart, Germany, is developing an advanced 3-axis stabilized small satellite applying industry standards for command/control techniques and Onboard Software design. This satellite furthermore features an innovative hybrid architecture of Onboard Computer and Power Control and Distribution Unit. One of the main challenges was the development of an ultra-compact and performing Onboard Computer (OBC), which was intended to support an RTEMS operating system, a PUS standard based Onboard Software (OBSW) and CCSDS standard based ground/space communication. The developed architecture (see [1, 2, 3]) is called a “Combined Onboard Data and Power Management Infrastructure” - CDPI. It features: the OBC processor boards based on a LEON3FT architecture, from Aeroflex Inc., USA; the I/O boards for all OBC digital interfaces to S/C equipment (digital RIU), from 4Links Ltd., UK; CCSDS TC/TM decoder/encoder boards, with the same HW design as the I/O boards but a limited number of interfaces, HW from 4Links Ltd., UK, driver SW and IP core from Aeroflex Gaisler, SE; analog RIU functions via an enhanced PCDU from Vectronic Aerospace, D; and OBC reconfiguration unit functions via the Common Controller, here in the PCDU [4]. The CDPI overall assembly is meanwhile complete and an exhaustive description can be found in [5]. The EM test campaign, including the HW/SW compatibility testing, is finalized. This comprises all OBC EM units, the OBC EM assembly and the EM PCDU. The unit test program for the FM Processor-Boards and Power-Boards of the OBC is completed, and the unit tests of the FM I/O-Boards and CCSDS-Boards have been completed by 4Links at the assembly house. The subsystem tests of the assembled OBC are also completed, and the overall system tests of the CDPI, with system reconfiguration in diverse possible FDIR cases, are reaching the final steps. Still ongoing is the subsequent integration of the CDPI with the satellite's avionics components encompassing TTC, AOCS, Power and Payload Control. This paper provides a full picture of the test campaign. Further details can be taken from
Canadian digitization: radical beginning and pragmatic follow-on
NASA Astrophysics Data System (ADS)
Grant, Terrill K.
2000-08-01
The Canadian Army, like most Western armies, spent a lot of time soul-searching about the application of technology to its Command and Control processes during the height of the Cold War in the 70's and 80's. In the late 1980's, these efforts were formalized in a program called the Tactical Command, Control and Communications System (TCCCS). As envisioned, the project would replace in one revolutionary Big Bang all of the tactical communications employed in the Canadian field forces. It would also add significant capabilities such as a long range satellite communications system, a universal tactical e-mail system, and a command and control system for the commander and his staff from division to unit HQ. In 1989, the project was scaled back due to budgetary constraints by removing the divisional trunk communications system and the command and control system. At this point a contract was let to Computing Devices Canada for the core communications functionality. During the next 6 years, the Canadian Army expanded on this digitization effort by amending the contract to add in a trunk system and a situational awareness system. As well, in 1996, Computing Devices received a contract to develop and integrate a C2 system with the communications system thereby restoring the final two Cs of TCCCS. This paper discusses the architecture and implementation of the TCCCS as the revolutionary enabler of the Canadian Army's digitization effort for the early 2000 era. The choice of a hybrid approach of using commercial standards supplemented by appropriate NATO communications standards allowed for an easy addition of the trunk system. As well, conformance to the emerging NATO Communications architecture for Land Tactical Communications in the Post 2000 era will enhance interoperability with Canada's allies. The paper also discusses the pragmatic approach taken by the Canadian Army in inserting C2 functionality into TCCCS, and presents the ultimate architecture and functionality. This paper concludes with a review of some of the areas of concern that will need to be addressed to complete a baseline digitization capability for the Canadian Army.
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; White, John S.
1987-07-01
Theoretical analysis of the integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.
NASA Astrophysics Data System (ADS)
Lazar, Aurel A.; White, John S.
1986-11-01
Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real time multimedia workstation EDDY that integrates video, voice and data traffic flows. Protocols supporting variable bandwidth, constant quality packetized video transport are described in detail.
Airport-Noise Levels and Annoyance Model (ALAMO) user's guide
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
A guide for the use of the Airport-Noise Level and Annoyance MOdel (ALAMO) at the Langley Research Center computer complex is provided. This document is divided into 5 primary sections, the introduction, the purpose of the model, and an in-depth description of the following subsystems: baseline, noise reduction simulation and track analysis. For each subsystem, the user is provided with a description of architecture, an explanation of subsystem use, sample results, and a case runner's check list. It is assumed that the user is familiar with the operations at the Langley Research Center (LaRC) computer complex, the Network Operating System (NOS 1.4) and CYBER Control Language. Incorporated within the ALAMO model is a census database system called SITE II.
Sánchez, Antonio; Blanc, Sara; Yuste, Pedro; Perles, Angel; Serrano, Juan José
2012-01-01
This paper is focused on the description of the physical layer of a new acoustic modem called ITACA. The modem architecture includes as a major novelty an ultra-low power asynchronous wake-up system implementation for underwater acoustic transmission that is based on a low-cost off-the-shelf RFID peripheral integrated circuit. This feature enables a reduced power dissipation of 10 μW in stand-by mode and registers very low power values during reception and transmission. The modem also incorporates clear channel assessment (CCA) to support CSMA-based medium access control (MAC) layer protocols. The design is part of a compact platform for a long-life short/medium range underwater wireless sensor network. PMID:22969324
Sánchez, Antonio; Blanc, Sara; Yuste, Pedro; Perles, Angel; Serrano, Juan José
2012-01-01
This paper is focused on the description of the physical layer of a new acoustic modem called ITACA. The modem architecture includes as a major novelty an ultra-low power asynchronous wake-up system implementation for underwater acoustic transmission that is based on a low-cost off-the-shelf RFID peripheral integrated circuit. This feature enables a reduced power dissipation of 10 μW in stand-by mode and registers very low power values during reception and transmission. The modem also incorporates clear channel assessment (CCA) to support CSMA-based medium access control (MAC) layer protocols. The design is part of a compact platform for a long-life short/medium range underwater wireless sensor network.
Supervisory Control System Architecture for Advanced Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Sacit M; Cole, Daniel L; Fugate, David L
2013-08-01
This technical report was generated as a product of the Supervisory Control for Multi-Modular SMR Plants project within the Instrumentation, Control and Human-Machine Interface technology area under the Advanced Small Modular Reactor (SMR) Research and Development Program of the U.S. Department of Energy. The report documents the definition of strategies, functional elements, and the structural architecture of a supervisory control system for multi-modular advanced SMR (AdvSMR) plants. This research activity advances the state-of-the-art by incorporating decision making into the supervisory control system architectural layers through the introduction of a tiered-plant system approach. The report provides a brief history of hierarchical functional architectures and the current state-of-the-art, describes a reference AdvSMR to show the dependencies between systems, presents a hierarchical structure for supervisory control, indicates the importance of understanding trip setpoints, applies a new theoretic approach for comparing architectures, identifies cyber security controls that should be addressed early in system design, and describes ongoing work to develop system requirements and hardware/software configurations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... conventional architecture and will utilize colors, nonglare roofing materials, and spacing or layout that harmonizes with forested settings. Except for signs, structures designed primarily for purposes of calling..., harmonizing in design and color with the surroundings and shall not be attached to any tree or shrub...
Code of Federal Regulations, 2014 CFR
2014-07-01
... conventional architecture and will utilize colors, nonglare roofing materials, and spacing or layout that harmonizes with forested settings. Except for signs, structures designed primarily for purposes of calling..., harmonizing in design and color with the surroundings and shall not be attached to any tree or shrub...
Code of Federal Regulations, 2013 CFR
2013-07-01
... conventional architecture and will utilize colors, nonglare roofing materials, and spacing or layout that harmonizes with forested settings. Except for signs, structures designed primarily for purposes of calling..., harmonizing in design and color with the surroundings and shall not be attached to any tree or shrub...
ERIC Educational Resources Information Center
Dias, Martin A.
2012-01-01
The purpose of this dissertation is to examine information systems-enabled interorganizational collaborations called public safety networks--their proliferation, information systems architecture, and technology evolution. These networks face immense pressures from member organizations, external stakeholders, and environmental contingencies. This…
Fault tolerant and lifetime control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Chen, Yi-Liang; Sundareswaran, Venkataraman; Altshuler, Thomas
2008-04-01
Increased vehicle autonomy, survivability and utility can provide an unprecedented impact on mission success and are one of the most desirable improvements for modern autonomous vehicles. We propose a general architecture of intelligent resource allocation, reconfigurable control and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles, and it provides improved vehicle survivability, extended service intervals, greater operational autonomy through lower rate of time-critical mission failures and lesser dependence on supplies and maintenance. The architecture enables mission distribution, adaptation and execution constrained on vehicle and payload faults and desirable lifetime. The proposed architecture will allow managing missions more efficiently by weighing vehicle capabilities versus mission objectives and replacing the vehicle only when it is necessary.
On the Execution Control of HLA Federations using the SISO Space Reference FOM
NASA Technical Reports Server (NTRS)
Moller, Bjorn; Garro, Alfredo; Falcone, Alberto; Crues, Edwin Z.; Dexter, Daniel E.
2017-01-01
In the Space domain the High Level Architecture (HLA) is one of the reference standards for Distributed Simulation. However, for the different organizations involved in the Space domain (e.g. NASA, ESA, Roscosmos, and JAXA) and their industrial partners, it is difficult to implement HLA simulators (called Federates) able to interact and interoperate in the context of a distributed HLA simulation (called a Federation). The lack of a common FOM (Federation Object Model) for the Space domain is one of the main reasons that precludes a-priori interoperability between heterogeneous federates. To fill this gap, a Product Development Group (PDG) has recently been activated in the Simulation Interoperability Standards Organization (SISO) with the aim of providing a Space Reference FOM (SRFOM) for international collaboration on Space systems simulations. Members of the PDG come from several countries and contribute experiences from projects within NASA, ESA and other organizations. Participants represent government, academia and industry. The paper presents an overview of the ongoing Space Reference FOM standardization initiative by focusing on the solution provided for managing the execution of an SRFOM-based Federation.
NASA Astrophysics Data System (ADS)
Jankovic, Marko; Paul, Jan; Kirchner, Frank
2016-04-01
Recent studies of the space debris population in low Earth orbit (LEO) have concluded that certain regions have already reached a critical density of objects. This will eventually lead to a cascading process called the Kessler syndrome. The time may have come to seriously consider active debris removal (ADR) missions as the only viable way of preserving the space environment for future generations. Among all objects in the current environment, the SL-8 (Kosmos 3M second stages) rocket bodies (R/Bs) are some of the most suitable targets for future robotic ADR missions. However, to date, an autonomous relative navigation to and capture of a non-cooperative target has never been performed. Therefore, there is a need for more advanced, autonomous and modular systems that can cope with uncontrolled, tumbling objects. The guidance, navigation and control (GNC) system is one of the most critical ones. The main objective of this paper is to present a preliminary concept of a modular GNC architecture that should enable a safe and fuel-efficient capture of a known but uncooperative target, such as a Kosmos 3M R/B. In particular, the concept was developed keeping in mind the most critical part of an ADR mission, i.e. close range proximity operations, and state-of-the-art algorithms in the field of autonomous rendezvous and docking. In the end, a brief description is given of the hardware-in-the-loop (HIL) testing facility foreseen for the practical evaluation of the developed architecture.
NASA Technical Reports Server (NTRS)
Miller, Christopher J.; Goodrick, Dan
2017-01-01
The problem of control command and maneuver induced structural loads is an important aspect of any control system design. The aircraft structure and the control architecture must be designed to achieve desired piloted control responses while limiting the imparted structural loads. The classical approach is to utilize high structural margins, restrict control surface commands to a limited set of analyzed combinations, and train pilots to follow procedural maneuvering limitations. With recent advances in structural sensing and the continued desire to improve safety and vehicle fuel efficiency, it is both possible and desirable to develop control architectures that enable lighter vehicle weights while maintaining and improving protection against structural damage. An optimal control technique has been explored and shown to achieve desirable vehicle control performance while limiting sensed structural loads. This paper describes the design of the optimal control architecture and provides the reader with some techniques for tailoring the architecture, along with detailed simulation results.
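The abstract does not reproduce the cost function; as a rough illustration of the general idea (the symbols below are assumptions, not the paper's notation), an optimal control design that trades piloted response against sensed structural loads can be posed as an LQR-style problem with an added load penalty:

\min_{u}\; J \;=\; \int_{0}^{\infty} \left( x^{\top} Q\, x \;+\; u^{\top} R\, u \;+\; \lambda\, y_{L}^{\top} W\, y_{L} \right) dt, \qquad y_{L} \;=\; C_{L}\, x \;+\; D_{L}\, u,

where x is the vehicle state, u the surface commands, y_{L} the sensed structural loads, and the weights Q, R, W and the scalar \lambda set the balance between tracking performance, control effort, and load suppression.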
Flexible software architecture for user-interface and machine control in laboratory automation.
Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E
1998-10-01
We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.
National IVHS Architecture Development Strategy
DOT National Transportation Integrated Search
1994-01-27
National information and control systems are emerging that require system architectures for deployment across the nation, e.g., air traffic control systems, military command and control systems, and other national information systems. The required ch...
Open architecture CMM motion controller
NASA Astrophysics Data System (ADS)
Chang, David; Spence, Allan D.; Bigg, Steve; Heslip, Joe; Peterson, John
2001-12-01
Although initially the only Coordinate Measuring Machine (CMM) sensor available was a touch trigger probe, technological advances in sensors and computing have greatly increased the variety of available inspection sensors. Non-contact laser digitizers and analog scanning touch probes require very well tuned CMM motion control, as well as an extensible, open architecture interface. This paper describes the implementation of a retrofit CMM motion controller designed for open architecture interface to a variety of sensors. The controller is based on an Intel Pentium microcomputer and a Servo To Go motion interface electronics card. Motor amplifiers, safety, and additional interface electronics are housed in a separate enclosure. Host Signal Processing (HSP) is used for the motion control algorithm. Compared to the usual host plus DSP architecture, single CPU HSP simplifies integration with the various sensors, and implementation of software geometric error compensation. Motion control tuning is accomplished using a remote computer via 100BaseTX Ethernet. A Graphical User Interface (GUI) is used to enter geometric error compensation data, and to optimize the motion control tuning parameters. It is shown that this architecture achieves the required real time motion control response, yet is much easier to extend to additional sensors.
Modeling, simulation, and high-autonomy control of a Martian oxygen production plant
NASA Technical Reports Server (NTRS)
Schooley, L. C.; Cellier, F. E.; Wang, F.-Y.; Zeigler, B. P.
1992-01-01
Progress on a project for the development of a high-autonomy intelligent command and control architecture for process plants used to produce oxygen from local planetary resources is reported. A distributed command and control architecture is being developed and implemented so that an oxygen production plant, or other equipment, can be reliably commanded and controlled over an extended time period in a high-autonomy mode with high-level task-oriented teleoperation from one or several remote locations. During the reporting period, progress was made at all levels of the architecture. At the remote site, several remote observers can now participate in monitoring the plant. At the local site, a command and control center was introduced for increased flexibility, reliability, and robustness. The local control architecture was enhanced to control multiple tubes in parallel, and was refined for increased robustness. The simulation model was enhanced to full dynamics descriptions.
Ultra-Stable Segmented Telescope Sensing and Control Architecture
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David
2017-01-01
The LUVOIR team is conducting two full architecture studies. Architecture A, a 15-meter telescope that folds up in an 8.4m SLS Block 2 shroud, is nearly complete. Architecture B, a 9.2-meter telescope that uses an existing fairing size, will begin study this Fall. This talk will summarize the ultra-stable architecture of the 15m segmented telescope including the basic requirements, the basic rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element was an approach to actively control segments for segment-to-segment motions, which will be discussed later.
NASA Technical Reports Server (NTRS)
Jones, Michael K.
1998-01-01
Various issues associated with interoperability for space mission monitor and control are presented in viewgraph form. Specific topics include: 1) Space Project Mission Operations Control Architecture (SuperMOCA) goals and methods for achieving them; 2) Specifics on the architecture: open standards and layering, enhancing interoperability, and promoting commercialization; 3) An advertisement; 4) Status of the task - government/industry cooperation and architecture and technology demonstrations; and 5) Key features of messaging services and virtual devices.
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but less than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors, than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal occur only through common biases and process noise. Covariance and Monte Carlo analysis of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
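As a rough numerical sketch of the reliability-versus-cost trade described above (the node reliabilities, costs, and success criterion below are invented for illustration and are not the paper's cost model), one can compare a centralized, a partially decentralized, and a fully decentralized formation of eight spacecraft:

def reliability(n_sup, r_sup):
    # P(at least one supervisor survives), assuming independent failures
    # and that subordinates can be re-assigned to any surviving supervisor.
    return 1.0 - (1.0 - r_sup) ** n_sup

def cost(n_nodes, n_sup, c_sup, c_sub):
    # Supervisors carry the extra estimation/control capability, so they cost more.
    return n_sup * c_sup + (n_nodes - n_sup) * c_sub

for n_sup in (1, 3, 8):  # centralized, partially decentralized, fully decentralized
    print(n_sup, "supervisor(s):",
          "reliability =", round(reliability(n_sup, r_sup=0.95), 4),
          "cost =", cost(n_nodes=8, n_sup=n_sup, c_sup=3.0, c_sub=1.0))

With these made-up numbers, three supervisors already push the supervisory-layer reliability above 0.9998 at a fraction of the fully decentralized cost, which is the flavor of result the paper reports.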
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
NASA Technical Reports Server (NTRS)
Gallagher, Seana; Olson, Matt; Blythe, Doug; Heletz, Jacob; Hamilton, Griff; Kolb, Bill; Homans, Al; Zemrowski, Ken; Decker, Steve; Tegge, Cindy
2000-01-01
This document is the NASA AATT Task Order 24 Final Report. NASA Research Task Order 24 calls for the development of eleven distinct task reports. Each task was a necessary exercise in the development of comprehensive communications systems architecture (CSA) for air traffic management and aviation weather information dissemination for 2015, the definition of the interim architecture for 2007, and the transition plan to achieve the desired End State. The eleven tasks are summarized along with the associated Task Order reference. The output of each task was an individual task report. The task reports that make up the main body of this document include Task 5, Task 6, Task 7, Task 8, Task 10, and Task 11. The other tasks provide the supporting detail used in the development of the architecture. These reports are included in the appendices. The detailed user needs, functional communications requirements and engineering requirements associated with Tasks 1, 2, and 3 have been put into a relational database and are provided electronically.
Numerical Propulsion System Simulation Architecture
NASA Technical Reports Server (NTRS)
Naiman, Cynthia G.
2004-01-01
The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.
Connecting Requirements to Architecture and Analysis via Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Cole, Bjorn F.; Jenkins, J. Steven
2015-01-01
In traditional systems engineering practice, architecture, concept development, and requirements development are related but still separate activities. Concepts for operation, key technical approaches, and related proofs of concept are developed. These inform the formulation of an architecture at multiple levels, starting with the overall system composition and functionality and progressing into more detail. As this formulation is done, a parallel activity develops a set of English statements that constrain solutions. These requirements are often called "shall statements" since they are formulated to use "shall." The separation of requirements from design is exacerbated by well-meaning tools like the Dynamic Object-Oriented Requirements System (DOORS) that remain separate from engineering design tools. With the Europa Clipper project, efforts are being made to change the requirements development approach from a separate activity to one intimately embedded in the formulation effort. This paper presents a modeling approach and related tooling to generate English requirement statements from constraints embedded in architecture definition.
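As a sketch of the general idea (the constraint schema and wording template below are invented for illustration; they are not the Europa Clipper tooling or the DOORS data model), a constraint attached to an architectural element can be rendered into an English "shall statement" mechanically:

constraints = [
    {"subject": "flight system", "property": "downlink data rate",
     "relation": "at least", "value": 50, "unit": "kbps"},
    {"subject": "payload instrument", "property": "operating temperature",
     "relation": "no more than", "value": 35, "unit": "deg C"},
]

def to_shall_statement(c):
    # Render one architecture constraint as an English requirement statement.
    return (f"The {c['subject']} shall ensure that its {c['property']} "
            f"is {c['relation']} {c['value']} {c['unit']}.")

for c in constraints:
    print(to_shall_statement(c))

Keeping the constraint as the source of truth and generating the sentence from it is what ties each requirement back to the architecture model.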
NASA Technical Reports Server (NTRS)
Bargar, Robin
1995-01-01
The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages, and message interplay we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.
A Generalized Framework for Modeling Next Generation 911 Implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelic, Andjelka; Aamir, Munaf Syed
This document summarizes the current state of Sandia 911 modeling capabilities and then addresses key aspects of Next Generation 911 (NG911) architectures for expansion of existing models. Analysis of three NG911 implementations was used to inform heuristics, associated key data requirements, and assumptions needed to capture NG911 architectures in the existing models. Modeling of NG911 necessitates careful consideration of its complexity and the diversity of implementations. Draft heuristics for constructing NG911 models are presented based on the analysis, along with a summary of current challenges and ways to improve future NG911 modeling efforts. We found that NG911 relies on Enhanced 911 (E911) assets such as 911 selective routers to route calls originating from traditional telephony service, which are a majority of 911 calls. We also found that the diversity and transitional nature of NG911 implementations necessitates significant and frequent data collection to ensure that adequate models are available for crisis action support.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes with the fast development of data center networks. Pbit/s level all optical switching networks need to be deployed soon, which will greatly increase the complexity of node architecture. How to control the future network and node equipment together will become a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in the paper, which consists of Provider NOX (P-NOX) and Node NOX (N-NOX). With the cooperation of P-NOX and N-NOX, the flexible control of the entire network can be achieved. An all optical switching network testbed has been experimentally demonstrated with efficient control of enhanced Software Defined Networking (eSDN). Pbit/s level all optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is only equipped with four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.
Wright, Adam; Sittig, Dean F.
2008-01-01
In this paper we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. PMID:18434256
The TJO-OAdM robotic observatory: OpenROCS and dome control
NASA Astrophysics Data System (ADS)
Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan
2010-07-01
The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory working in completely unattended control. There are key problems to solve when a robotic control is envisaged, both on hardware and software issues. We present the OpenROCS (ROCS stands for Robotic Observatory Control System), an open source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure used is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides object-oriented remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of the dome control: several hardware and software elements developed to specially protect the system at this identified single point of failure. It integrates a redundant control and a rain detector signal for alarm triggering and it responds autonomously in case communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and dome control system have proven to be highly reliable.
A system architecture for a planetary rover
NASA Technical Reports Server (NTRS)
Smith, D. B.; Matijevic, J. R.
1989-01-01
Each planetary mission requires a complex space vehicle which integrates several functions to accomplish the mission and science objectives. A Mars Rover is one of these vehicles, and extends the normal spacecraft functionality with two additional functions: surface mobility and sample acquisition. All functions are assembled into a hierarchical and structured format to understand the complexities of interactions between functions during different mission times. It can graphically show data flow between functions, and most importantly, the necessary control flow to avoid ambiguous results. Diagrams are presented organizing the functions into a structured, block format where each block represents a major function at the system level. As such, there are six blocks representing telecomm, power, thermal, science, mobility and sampling under a supervisory block called Data Management/Executive. Each block is a simple collection of state machines arranged into a hierarchical order very close to the NASREM model for Telerobotics. Each layer within a block represents a level of control for a set of state machines that do the three primary interface functions: command, telemetry, and fault protection. This latter function is expanded to include automatic reactions to the environment as well as internal faults. Lastly, diagrams are presented that trace the system operations involved in moving from site to site after site selection. The diagrams clearly illustrate both the data and control flows. They also illustrate inter-block data transfers and a hierarchical approach to fault protection. This systems architecture can be used to determine functional requirements and interface specifications, and as a mechanism for grouping subsystems (i.e., collecting groups of machines, or blocks, consistent with good and testable implementations).
A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
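A minimal sketch of a stigmergic building rule on a grid (the sensing radius, deposit rule, and seed structure are assumptions for illustration, not the prototype's actual algorithm) shows how structure can emerge without messages or a blueprint:

import random

GRID = 11
structure = {(4, 5), (5, 5), (6, 5)}   # a short seed wall

def wants_to_build(cell, structure):
    # Local rule: deposit a block only where at least two of the eight
    # neighbouring cells are already occupied, so each decision depends
    # purely on the sensed state of the structure.
    x, y = cell
    neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    return sum(n in structure for n in neighbours) >= 2

def agent_step(structure):
    # One robot wanders to a random cell, senses it, and maybe builds.
    # There is no inter-agent communication and no global plan.
    cell = (random.randrange(GRID), random.randrange(GRID))
    if cell not in structure and wants_to_build(cell, structure):
        structure.add(cell)

for _ in range(5000):   # many asynchronous agent actions
    agent_step(structure)
print(len(structure), "blocks placed")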
NASA Astrophysics Data System (ADS)
Eguiraun, M.; Jugo, J.; Arredondo, I.; del Campo, M.; Feuchtwanger, J.; Etxebarria, V.; Bermejo, F. J.
2013-04-01
ISHN (Ion Source Hydrogen Negative) consists of a Penning type ion source in operation at ESS-Bilbao facilities. From the control point of view, this source is representative of the first steps and decisions taken towards the general control architecture of the whole accelerator to be built. The ISHN main control system is based on a PXI architecture, under a real-time controller which is programmed using LabVIEW. This system, with additional elements, is connected to the general control system. The whole system is based on EPICS for the control network, and the modularization of the communication layers of the accelerator plays an important role in the proposed control architecture.
Implementation of a system to provide mobile satellite services in North America
NASA Technical Reports Server (NTRS)
Johanson, Gary A.; Davies, N. George; Tisdale, William R. H.
1993-01-01
This paper describes the implementation of the ground network to support Mobile Satellite Services (MSS). The system is designed to take advantage of a powerful new satellite series and provides significant improvements in capacity and throughput over systems in service today. The system is described in terms of the services provided and the system architecture being implemented to deliver those services. The system operation is described including examples of a circuit switched and packet switched call placement. The physical architecture is presented showing the major hardware components and software functionality placement within the hardware.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
Future Data Communication Architectures for Safety Critical Aircraft Cabin Systems
NASA Astrophysics Data System (ADS)
Berkhahn, Sven-Olaf
2012-05-01
The cabin of modern aircraft is subject to increasing demands for fast reconfiguration and hence flexibility. These demands require studies of new network architectures and technologies for the electronic cabin systems, which also consider weight and cost reductions as well as safety constraints. Two major approaches are under consideration to reduce the complex and heavy wiring harness: the usage of a so-called hybrid data bus technology, which enables the common usage of the same data bus for several electronic cabin systems with different safety and security requirements, and the application of wireless data transfer technologies for electronic cabin systems.
FPGA implementation of bit controller in double-tick architecture
NASA Astrophysics Data System (ADS)
Kobylecki, Michał; Kania, Dariusz
2017-11-01
This paper presents a comparison of the two original architectures of programmable bit controllers built on FPGAs. Programmable Logic Controllers (which include, among other things, programmable bit controllers) built on FPGAs provide an efficient alternative to microprocessor-based controllers, which are expensive and often too slow. The presented and compared methods allow for the efficient implementation of any bit control algorithm written in the Ladder Diagram language into the programmable logic system in accordance with IEC 61131-3. In both cases, we have compared the effect of the applied architecture on the performance of executing the same bit control program in relation to its own size.
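For a concrete, hypothetical example of what such a bit controller evaluates, a Ladder Diagram rung with a normally open contact X0 in series with a normally closed contact X1 driving coil Y0 reduces to the combinational equation Y0 = X0 AND NOT X1; a software reference model of one scan cycle is simply:

def scan_cycle(inputs):
    # Evaluate the rung Y0 = X0 AND (NOT X1) once per scan,
    # as a bit controller would on every cycle.
    return {"Y0": inputs["X0"] and not inputs["X1"]}

print(scan_cycle({"X0": True, "X1": False}))   # {'Y0': True}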
Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Bisson, M.; Salvadore, F.
2014-10-01
We present and compare the performance of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in a cluster configuration for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture: the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of an Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performance of the single system. As for the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good by using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.
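The benchmark kernel itself is compact; a scalar Python reference of the over-relaxation move for one classical Heisenberg spin (reflection about the local molecular field, which preserves the energy and the unit spin length) looks roughly as follows, independent of the accelerator-specific tuning the paper is about. The lattice layout and coupling storage here are assumptions for illustration:

import numpy as np

def local_field(spins, couplings, neighbours, i):
    # Molecular field h_i = sum_j J_ij * s_j over the neighbours of site i.
    return sum(J * spins[j] for J, j in zip(couplings[i], neighbours[i]))

def overrelax(spins, couplings, neighbours, i):
    # Reflect spin i about its local field: s' = 2 (s.h / h.h) h - s.
    # The move is microcanonical (energy preserving) and keeps |s| = 1.
    h = local_field(spins, couplings, neighbours, i)
    s = spins[i]
    spins[i] = 2.0 * (s @ h) / (h @ h) * h - s

# Tiny toy usage on a 4-site chain with random unit spins.
rng = np.random.default_rng(0)
spins = rng.normal(size=(4, 3))
spins /= np.linalg.norm(spins, axis=1, keepdims=True)
neighbours = [[(i + 1) % 4, (i - 1) % 4] for i in range(4)]
couplings = [[1.0, -1.0] for _ in range(4)]
overrelax(spins, couplings, neighbours, 0)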
NASA Astrophysics Data System (ADS)
Baik, A.; Yaagoubi, R.; Boehm, J.
2015-08-01
This work outlines a new approach for the integration of 3D Building Information Modelling and the 3D Geographic Information System (GIS) to provide semantically rich models, and to get the benefits from both systems to help document and analyse cultural heritage sites. Our proposed framework is based on the Jeddah Historical Building Information Modelling process (JHBIM). This JHBIM consists of a Hijazi Architectural Objects Library (HAOL) that supports a higher level of detail (LoD) while decreasing the time of modelling. The Hijazi Architectural Objects Library has been modelled based on the Islamic historical manuscripts and Hijazi architectural pattern books. Moreover, the HAOL is implemented using BIM software called Autodesk Revit. However, it is known that this BIM environment still has some limitations with non-standard architectural objects. Hence, we propose to integrate the developed 3D JHBIM with 3D GIS for more advanced analysis. To do so, the JHBIM database is exported and semantically enriched with non-architectural information that is necessary for restoration and preservation of historical monuments. After that, this database is integrated with the 3D Model in the 3D GIS solution. At the end of this paper, we illustrate our proposed framework by applying it to a historical building called Nasif Historical House in Jeddah. First of all, this building is scanned by the use of a Terrestrial Laser Scanner (TLS) and Close Range Photogrammetry. Then, the 3D JHBIM based on the HAOL is designed on the Revit Platform. Finally, this model is integrated into a 3D GIS solution through Autodesk InfraWorks. The analysis presented in this research highlights the importance of such integration especially for operational decisions and sharing the historical knowledge about Jeddah Historical City. Furthermore, one of the historical buildings in Old Jeddah, Nasif Historical House, was chosen as a test case for the project.
Active Fault Tolerant Control for Ultrasonic Piezoelectric Motor
NASA Astrophysics Data System (ADS)
Boukhnifer, Moussa
2012-07-01
Ultrasonic piezoelectric motor technology is an important system component in integrated mechatronics devices working under extreme operating conditions. Due to these constraints, robustness and performance of the control interfaces should be taken into account in the motor design. In this paper, we apply a new architecture for fault tolerant control using Youla parameterization to an ultrasonic piezoelectric motor. The distinguishing feature of the proposed controller architecture is that it shows structurally how the controller design for performance and robustness may be done separately, which has the potential to overcome the conflict between performance and robustness in the traditional feedback framework. A fault tolerant control architecture includes two parts: one part for performance and the other part for robustness. The controller design works in such a way that the feedback control system will be solely controlled by the proportional plus double-integral controller.
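The abstract stops short of the parameterization itself; as background (a standard result, not the paper's specific construction), for a stable plant G every internally stabilizing controller can be written as

K \;=\; Q\,\bigl(I - G\,Q\bigr)^{-1}, \qquad Q \in \mathcal{RH}_{\infty},

so the complementary sensitivity becomes T = G Q, i.e. affine in the free stable parameter Q. This is what allows one block of the controller to be tuned for nominal performance while the Q block is reshaped for robustness or fault accommodation without redoing the stability argument.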
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
A comparative analysis of loop heat pipe based thermal architectures for spacecraft thermal control
NASA Technical Reports Server (NTRS)
Pauken, Mike; Birur, Gaj
2004-01-01
Loop Heat Pipes (LHP) have gained acceptance as a viable means of heat transport in many spacecraft in recent years. However, applications using LHP technology tend to only remove waste heat from a single component to an external radiator. Removing heat from multiple components has been done by using multiple LHPs. This paper discusses the development and implementation of a Loop Heat Pipe based thermal architecture for spacecraft. In this architecture, a Loop Heat Pipe with multiple evaporators and condensers is described in which heat load sharing and thermal control of multiple components can be achieved. A key element in using a LHP thermal architecture is defining the need for such an architecture early in the spacecraft design process. This paper describes an example in which a LHP based thermal architecture can be used and how such a system can have advantages in weight, cost and reliability over other kinds of distributed thermal control systems. The example used in this paper focuses on a Mars Rover Thermal Architecture. However, the principles described here are applicable to Earth orbiting spacecraft as well.
HELPR: Hybrid Evolutionary Learning for Pattern Recognition
2005-12-01
to a new approach called memetic algorithms that combines machine learning systems with human expertise to create new tools that have the advantage...architecture could form the foundation for a memetic system capable of solving ATR problems faster and more accurately than possible using pure human expertise
Code of Federal Regulations, 2012 CFR
2012-10-01
... Government, in solicitations and contracts. (c) The Government shall obtain unlimited rights in shop drawings for construction. In solicitations and contracts calling for delivery of shop drawings, include the clause at 252.227-7033, Rights in Shop Drawings. ...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Government, in solicitations and contracts. (c) The Government shall obtain unlimited rights in shop drawings for construction. In solicitations and contracts calling for delivery of shop drawings, include the clause at 252.227-7033, Rights in Shop Drawings. ...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Government, in solicitations and contracts. (c) The Government shall obtain unlimited rights in shop drawings for construction. In solicitations and contracts calling for delivery of shop drawings, include the clause at 252.227-7033, Rights in Shop Drawings. ...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Government, in solicitations and contracts. (c) The Government shall obtain unlimited rights in shop drawings for construction. In solicitations and contracts calling for delivery of shop drawings, include the clause at 252.227-7033, Rights in Shop Drawings. ...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Government, in solicitations and contracts. (c) The Government shall obtain unlimited rights in shop drawings for construction. In solicitations and contracts calling for delivery of shop drawings, include the clause at 252.227-7033, Rights in Shop Drawings. ...
SALT: The Simulator for the Analysis of LWP Timing
NASA Technical Reports Server (NTRS)
Springer, Paul L.; Rodrigues, Arun; Brockman, Jay
2006-01-01
With the emergence of new processor architectures that are highly multithreaded, and support features such as full/empty memory semantics and split-phase memory transactions, the need for a processor simulator to handle these features becomes apparent. This paper describes such a simulator, called SALT.
Building Entrepreneurial Architectures: A Conceptual Interpretation of the Third Mission
ERIC Educational Resources Information Center
Vorley, Tim; Nelles, Jen
2009-01-01
Universities are increasingly being challenged to become more socially and economically relevant institutions under the guise of the so-called "Third Mission". This phenomenon, articulated in policy, has prompted the emergence of a growing literature documenting the evolution of the contemporary university, and specifically addressing…
Communication Needs Assessment for Distributed Turbine Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Behbahani, Alireza R.
2008-01-01
Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and describes how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed and fully distributed control systems.
Advanced control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Maurer, Markus; Dickmanns, Ernst D.
1997-06-01
An advanced control architecture for autonomous vehicles is presented. The hierarchical architecture consists of four levels: a vehicle level, a control level, a rule-based level and a knowledge-based level. A special focus is on forms of internal representation, which have to be chosen adequately for each level. The control scheme is applied to VaMP, a Mercedes passenger car which autonomously performs missions on German freeways. VaMP perceives the environment with its sense of vision and conventional sensors. It controls its actuators for locomotion and attention focusing. Modules for perception, cognition and action are discussed.
NASA Astrophysics Data System (ADS)
Haener, Rainer; Waechter, Joachim; Fleischer, Jens; Herrnkind, Stefan; Schwarting, Herrmann
2010-05-01
The German Indonesian Tsunami Early Warning System (GITEWS) is a multifaceted system consisting of various sensor types, such as seismometers, sea level sensors or GPS stations, and processing components, each with its own system behavior and proprietary data structure. To operate a warning chain, beginning from measurements and scaling up to warning products, all components have to interact in a correct way, both syntactically and semantically. In designing the system, great emphasis was laid on conformity to the Sensor Web Enablement (SWE) specification by the Open Geospatial Consortium (OGC). The technical infrastructure, the so-called Tsunami Service Bus (TSB), follows the blueprint of Service Oriented Architectures (SOA). The TSB is an integration concept (SWE) where functionality (observe, task, notify, alert, and process) is grouped around business processes (Monitoring, Decision Support, Sensor Management) and packaged as interoperable services (SAS, SOS, SPS, WNS). The benefits of using a flexible architecture together with SWE lead to an open integration platform that provides:
• accessing and controlling heterogeneous sensors in a uniform way (Functional Integration)
• assigning functionality to distinct services (Separation of Concerns)
• allowing resilient relationships between systems (Loose Coupling)
• integrating services so that they can be accessed from everywhere (Location Transparency)
• enabling infrastructures which integrate heterogeneous applications (Encapsulation)
• allowing combination of services (Orchestration) and data exchange within business processes
Warning systems will evolve over time: new sensor types might be added, old sensors will be replaced and processing components will be improved. From a collection of a few basic services it shall be possible to compose more complex functionality essential for specific warning systems. Given these requirements, a flexible infrastructure is a prerequisite for sustainable systems, and their architecture must be tailored for evolution. The use of well-known techniques and widely used open source software implementing industrial standards reduces the impact of service modifications, allowing the evolution of a system as a whole. GITEWS implemented a solution to feed sensor raw data from any (remote) system into the infrastructure. Specific dispatchers enable plugging in sensor-type-specific processing without changing the architecture. Client components do not need to be adjusted if new sensor types or individual sensors are added to the system, because they access them via standardized services. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones. This so-called orchestration allows the definition of new warning processes which can be adapted easily to new requirements. This approach has the following advantages:
• By implementing SWE it is possible to establish the "detection" and integration of sensors via the internet. Thus a system of systems combining early warning functionality at different levels of detail is feasible.
• Any institution could add both its own components and components from third parties if they are developed in conformance with SOA principles. In a federation, an institution keeps the ownership of its data and decides which data are provided by a service and when.
• A system can be deployed at minor cost as a core for further development at any institution, thus enabling autonomous early warning or monitoring systems.
The presentation covers both the design and various instantiations (live demonstration) of the GITEWS architecture. Experiences concerning the design and complexity of SWE will be addressed in detail. A substantial amount of attention is paid to the techniques and methods of extending the architecture, adapting proprietary components to SWE services and encodings, and their orchestration in high-level workflows and processes. Furthermore, the potential of the architecture concerning adaptive behavior, collaboration across boundaries and semantic interoperability will be addressed.
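For readers unfamiliar with the SWE service interfaces mentioned above, a minimal client-side sketch of pulling observations from a Sensor Observation Service over plain HTTP is shown below. The endpoint URL, offering identifier, and property URN are hypothetical; the key/value parameter names follow the OGC SOS 1.0 convention:

import requests

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "SEA_LEVEL_STATION_42",                     # hypothetical offering id
    "observedProperty": "urn:ogc:def:property:sea_level",   # hypothetical property URN
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}
response = requests.get("https://example.org/tsb/sos", params=params, timeout=30)
print(response.status_code, len(response.text), "bytes of O&M observations")

Because every sensor type is exposed through the same service interface, a client like this does not have to change when new sensors are plugged into the Tsunami Service Bus.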
Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.
2009-01-01
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830
Chen, Elizabeth S; Maloney, Francine L; Shilmayster, Eugene; Goldberg, Howard S
2009-11-14
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs.
Performance prediction: A case study using a multi-ring KSR-1 machine
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhu, Jianping
1995-01-01
While computers with tens of thousands of processors have successfully delivered high performance power for solving some of the so-called 'grand-challenge' applications, the notion of scalability is becoming an important metric in the evaluation of parallel machine architectures and algorithms. In this study, the prediction of scalability and its application are carefully investigated. A simple formula is presented to show the relation between scalability, single processor computing power, and degradation of parallelism. A case study is conducted on a multi-ring KSR-1 shared virtual memory machine. Experimental and theoretical results show that the influence of topology variation of an architecture is predictable. Therefore, the performance of an algorithm on a sophisticated, hierarchical architecture can be predicted and the best algorithm-machine combination can be selected for a given application.
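The formula itself is not quoted in the abstract; one widely cited formulation associated with the first author's work is the isospeed scalability metric, which compares the work W solved with p processors to the work W' needed on p' processors to keep the average per-processor speed constant:

\psi(p, p') \;=\; \frac{p'\,W}{p\,W'},

so that \psi = 1 indicates ideal scalability and smaller values indicate growing degradation of parallelism. Whether this is exactly the "simple formula" of the paper is an assumption here.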
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek
Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.
Execution environment for intelligent real-time control systems
NASA Technical Reports Server (NTRS)
Sztipanovits, Janos
1987-01-01
Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.
Sawmill: A Logging File System for a High-Performance RAID Disk Array
1995-01-01
from limiting disk performance, new controller architectures connect the disks directly to the network so that data movement bypasses the file server...These developments raise two questions for file systems: how to get the best performance from a RAID, and how to use such a controller architecture ...the RAID-II storage system; this architecture provides a fast data path that moves data rapidly among the disks, high-speed controller memory, and the
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Turso, James A.; Shah, Neerav; Sowers, T. Shane; Owen, A. Karl
2005-01-01
A retrofit architecture for intelligent turbofan engine control and diagnostics that changes the fan speed command to maintain thrust is proposed and its demonstration in a piloted flight simulator is described. The objective of the implementation is to increase the level of autonomy of the propulsion system, thereby reducing pilot workload in the presence of anomalies and engine degradation due to wear. The main functions of the architecture are to diagnose the cause of changes in the engine's operation, warning the pilot if necessary, and to adjust the outer loop control reference signal in response to the changes. This requires that the retrofit control architecture contain the capability to determine the changed relationship between fan speed and thrust, and the intelligence to recognize the cause of the change in order to correct it or warn the pilot. The proposed retrofit architecture is able to determine the fan speed setting through recognition of the degradation level of the engine, and it is able to identify specific faults and warn the pilot. In the flight simulator it was demonstrated that when degradation is introduced into an engine with standard fan speed control, the pilot needs to take corrective action to maintain heading. Utilizing the intelligent retrofit control architecture, the engine thrust is automatically adjusted to its expected value, eliminating yaw without pilot intervention.
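A minimal sketch of the outer-loop idea (the thrust map, gain, and signal names are assumptions for illustration, not the NASA implementation): the retrofit layer estimates thrust from the commanded fan speed and the identified degradation level, then trims the fan-speed command until the estimated thrust matches what the throttle position demands.

def estimated_thrust(fan_speed, degradation):
    # Toy thrust map: nominal thrust scales with fan speed and is eroded
    # by the identified degradation level (0.0 = new engine).
    return 10.0 * fan_speed * (1.0 - 0.3 * degradation)

def retrofit_fan_speed_cmd(thrust_cmd, fan_speed_ref, degradation, ki=0.02, steps=200):
    # Integral trim on the fan-speed reference so estimated thrust tracks
    # the commanded thrust despite degradation.
    trim = 0.0
    for _ in range(steps):
        err = thrust_cmd - estimated_thrust(fan_speed_ref + trim, degradation)
        trim += ki * err
    return fan_speed_ref + trim

print(retrofit_fan_speed_cmd(800.0, 80.0, degradation=0.0))  # ~80.0: no adjustment needed
print(retrofit_fan_speed_cmd(800.0, 80.0, degradation=0.2))  # ~85.1: command raised to hold thrust

Faults that change the fan speed-thrust relationship in ways the architecture recognizes but cannot simply correct are instead flagged to the pilot.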
Dynamic malware analysis using IntroVirt: a modified hypervisor-based system
NASA Astrophysics Data System (ADS)
White, Joshua S.; Pape, Stephen R.; Meily, Adam T.; Gloo, Richard M.
2013-05-01
In this paper, we present a system for Dynamic Malware Analysis which incorporates the use of IntroVirt™. IntroVirt is an introspective hypervisor architecture and infrastructure that supports advanced analysis techniques for stealth-malware analysis. This system allows for complete guest monitoring and interaction, including the manipulation and blocking of system calls. IntroVirt is capable of bypassing the virtual machine detection capabilities of even the most sophisticated malware by spoofing returns to system call responses. Additional fuzzing capabilities can be employed to detect both malware vulnerabilities and polymorphism.
Planning assistance for the NASA 30/20 GHz program. Network control architecture study.
NASA Technical Reports Server (NTRS)
Inukai, T.; Bonnelycke, B.; Strickland, S.
1982-01-01
Network Control Architecture for a 30/20 GHz flight experiment system operating in the Time Division Multiple Access (TDMA) mode was studied. Architecture development, identification of processing functions, and performance requirements for the Master Control Station (MCS), diversity trunking stations, and Customer Premises Service (CPS) stations are covered. Preliminary hardware and software processing requirements as well as budgetary cost estimates for the network control system are given. For the trunking system control, areas covered include on board SS-TDMA switch organization, frame structure, acquisition and synchronization, channel assignment, fade detection and adaptive power control, on board oscillator control, and terrestrial network timing. For the CPS control, they include on board processing and adaptive forward error correction control.
Ho, Fui Li; Salowi, Mohamad Aziz; Bastion, Mae-Lynn Catherine
2017-01-01
To investigate the effects of postoperative eye patching on clear corneal incision architecture in phacoemulsification. A single-center, randomized controlled trial. A total of 132 patients with uncomplicated phacoemulsification were randomly allocated to the intervention or control group. The intervention group received postoperative eye patching for approximately 18 hours, whereas the control group received an eye shield. The clear corneal incision architecture was examined postoperatively at 2 hours, 1 day, and 7 days after surgery using optical coherence tomography. Epithelial gaping was significantly reduced on postoperative day 1 in the intervention group (52.4%) compared with control (74.2%) (P = 0.01). No differences were found for other architectural defects. Descemet membrane detachment was associated with lower intraocular pressure on postoperative day 7 (P = 0.02). Presence of underlying diabetes mellitus did not seem to influence architectural defects. Postoperative eye patching facilitated epithelial healing and reduced the occurrence of epithelial gaping on postoperative day 1. It may play a role in protecting and improving corneal wounds during the critical immediate postoperative period. Copyright 2017 Asia-Pacific Academy of Ophthalmology.
Standardizing the information architecture for spacecraft operations
NASA Technical Reports Server (NTRS)
Easton, C. R.
1994-01-01
This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.
Multi-Agent Diagnosis and Control of an Air Revitalization System for Life Support in Space
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Kowing, Jeffrey; Nieten, Joseph; Graham, Jeffrey S.; Schreckenghost, Debra; Bonasso, Pete; Fleming, Land D.; MacMahon, Matt; Thronesbery, Carroll
2000-01-01
An architecture of interoperating agents has been developed to provide control and fault management for advanced life support systems in space. In this adjustable autonomy architecture, software agents coordinate with human agents and provide support in novel fault management situations. This architecture combines the Livingstone model-based mode identification and reconfiguration (MIR) system with the 3T architecture for autonomous flexible command and control. The MIR software agent performs model-based state identification and diagnosis. MIR identifies novel recovery configurations and the set of commands required for the recovery. The 3T procedural executive and the human operator use the diagnoses and recovery recommendations, and provide command sequencing. User interface extensions have been developed to support human monitoring of both 3T and MIR data and activities. This architecture has been demonstrated performing control and fault management for an oxygen production system for air revitalization in space. The software operates in a dynamic simulation testbed.
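The mode-identification step performed by the MIR agent can be illustrated with a toy diagnosis: enumerate candidate component modes and keep the assignments whose predictions are consistent with the observed sensor values. The component models and observations below are hypothetical and far simpler than Livingstone's propositional models.

```python
# Toy mode identification: pick the component modes whose predictions match observations.
# The valve/pump models and the observations are hypothetical and serve only to
# illustrate the idea of model-based diagnosis.

from itertools import product

# Candidate modes for two components and their predicted sensor readings.
MODES = {
    "valve": {"open": {"flow": "high"}, "stuck_closed": {"flow": "none"}},
    "pump":  {"on": {"pressure": "high"}, "failed_off": {"pressure": "low"}},
}

def consistent(assignment: dict, observations: dict) -> bool:
    """True if every predicted reading agrees with the observed one."""
    for component, mode in assignment.items():
        for sensor, predicted in MODES[component][mode].items():
            if observations.get(sensor, predicted) != predicted:
                return False
    return True

def diagnose(observations: dict) -> list:
    """Enumerate mode assignments consistent with the observations."""
    components = list(MODES)
    candidates = product(*(MODES[c] for c in components))
    return [dict(zip(components, modes)) for modes in candidates
            if consistent(dict(zip(components, modes)), observations)]

if __name__ == "__main__":
    obs = {"flow": "none", "pressure": "high"}  # valve commanded open, but no flow
    print(diagnose(obs))  # -> [{'valve': 'stuck_closed', 'pump': 'on'}]
```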
Microfabrication Technology for Photonics
1990-06-01
specifically addressed by a "folded," parallel architecture currently being proposed by A. Huang (35), who calls it "Computational Origami"... "Computational Origami," U.S. Patent Pending; H.M. Lu, "Computational Origami: A Geometric Approach to Regular Multiprocessing," MIT Master's Thesis in
A new mobile ubiquitous computing application to control obesity: SapoFit.
Rodrigues, Joel J P C; Lopes, Ivo M C; Silva, Bruno M C; Torre, Isabel de La
2013-01-01
The objective of this work was the proposal, design, construction and validation of a mobile health system for dietetic monitoring and assessment, called SapoFit. This application may be personalized to keep a daily personal health record of an individual's food intake and daily exercise and to share this with a social network. The initiative is a partnership with SAPO - Portugal Telecom. SapoFit uses a Web services architecture, a relatively new model for distributed computing and application integration. SapoFit runs on a range of mobile platforms, has been implemented successfully on a range of mobile devices, and has been evaluated by over 100 users. Most users strongly agree that SapoFit has an attractive design, the environment is user-friendly and intuitive, and the navigation options are clear.
Two-dimensional quantum repeaters
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Zwerger, M.; Muschik, C.; Sangouard, N.; Dür, W.
2016-11-01
The endeavor to develop quantum networks gave rise to a rapidly developing field with far-reaching applications such as secure communication and the realization of distributed computing tasks. This ultimately calls for the creation of flexible multiuser structures that allow for quantum communication between arbitrary pairs of parties in the network and facilitate also multiuser applications. To address this challenge, we propose a two-dimensional quantum repeater architecture to establish long-distance entanglement shared between multiple communication partners in the presence of channel noise and imperfect local control operations. The scheme is based on the creation of self-similar multiqubit entanglement structures at growing scale, where variants of entanglement swapping and multiparty entanglement purification are combined to create high-fidelity entangled states. We show how such networks can be implemented using trapped ions in cavities.
Controllable Modular Growth of Hierarchical MOF-on-MOF Architectures.
Gu, Yifan; Wu, Yi-Nan; Li, Liangchun; Chen, Wei; Li, Fengting; Kitagawa, Susumu
2017-12-04
Fabrication of hybrid MOF-on-MOF heteroarchitectures can create novel and multifunctional platforms to achieve desired properties. However, only MOFs with similar crystallographic parameters can be hybridized by the classical epitaxial growth method (EGM), which largely limits its applications. A general strategy, called the internal extended growth method (IEGM), is demonstrated for the feasible assembly of MOFs with distinct crystallographic parameters in an MOF matrix. Various MOFs with diverse functions could be introduced into a modular MOF matrix to form 3D core-satellite pluralistic hybrid systems. The number of different MOF crystals interspersed could be varied on demand. More importantly, the different MOF crystals distributed in individual domains could be used to further incorporate functional units or enhance target functions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A limit-cycle self-organizing map architecture for stable arm control.
Huang, Di-Wei; Gentili, Rodolphe J; Katz, Garrett E; Reggia, James A
2017-01-01
Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Technology architecture guidelines for a health care system.
Jones, D T; Duncan, R; Langberg, M L; Shabot, M M
2000-01-01
Although the demand for use of information technology within the healthcare industry is intensifying, relatively little has been written about guidelines to optimize IT investments. A technology architecture is a set of guidelines for technology integration within an enterprise. The architecture is a critical tool in the effort to control information technology (IT) operating costs by constraining the number of technologies supported. A well-designed architecture is also an important aid to integrating disparate applications, data stores and networks. The authors led the development of a thorough, carefully designed technology architecture for a large and rapidly growing health care system. The purpose and design criteria are described, as well as the process for gaining consensus and disseminating the architecture. In addition, the processes for using, maintaining, and handling exceptions are described. The technology architecture is extremely valuable to health care organizations both in controlling costs and promoting integration.
Technology architecture guidelines for a health care system.
Jones, D. T.; Duncan, R.; Langberg, M. L.; Shabot, M. M.
2000-01-01
Although the demand for use of information technology within the healthcare industry is intensifying, relatively little has been written about guidelines to optimize IT investments. A technology architecture is a set of guidelines for technology integration within an enterprise. The architecture is a critical tool in the effort to control information technology (IT) operating costs by constraining the number of technologies supported. A well-designed architecture is also an important aid to integrating disparate applications, data stores and networks. The authors led the development of a thorough, carefully designed technology architecture for a large and rapidly growing health care system. The purpose and design criteria are described, as well as the process for gaining consensus and disseminating the architecture. In addition, the processes for using, maintaining, and handling exceptions are described. The technology architecture is extremely valuable to health care organizations both in controlling costs and promoting integration. PMID:11079913
Critical Branches and Lucky Loads in Control-Independence Architectures
ERIC Educational Resources Information Center
Malik, Kshitiz
2009-01-01
Branch mispredicts have a first-order impact on the performance of integer applications. Control Independence (CI) architectures aim to overlap the penalties of mispredicted branches with useful execution by spawning control-independent work as separate threads. Although control independent, such threads may consume register and memory values…
A Ground Systems Architecture Transition for a Distributed Operations System
NASA Technical Reports Server (NTRS)
Sellers, Donna; Pitts, Lee; Bryant, Barry
2003-01-01
The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users versus those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program, and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as a migration of products caused the need to reevaluate the allocation of functions across the architecture. The result is a set of innovative ground system solutions that is scalable, so it can support facilities of wide-ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, data transmission, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible remote ground system options that can be tailored to the needs of each user. This paper offers a description of the efficiency and effectiveness of the ground systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.
jqcML: an open-source java API for mass spectrometry quality control data in the qcML format.
Bittremieux, Wout; Kelchtermans, Pieter; Valkenborg, Dirk; Martens, Lennart; Laukens, Kris
2014-07-03
The awareness that systematic quality control is an essential factor to enable the growth of proteomics into a mature analytical discipline has increased over the past few years. To this end, a controlled vocabulary and document structure, called qcML, have recently been proposed by Walzer et al. to store and disseminate quality-control metrics for mass-spectrometry-based proteomics experiments. To facilitate the adoption of this standardized quality control routine, we introduce jqcML, a Java application programming interface (API) for the qcML data format. First, jqcML provides a complete object model to represent qcML data. Second, jqcML provides the ability to read, write, and work in a uniform manner with qcML data from different sources, including the XML-based qcML file format and the relational database qcDB. Interaction with the XML-based file format is obtained through the Java Architecture for XML Binding (JAXB), while generic database functionality is obtained by the Java Persistence API (JPA). jqcML is released as open-source software under the permissive Apache 2.0 license and can be downloaded from https://bitbucket.org/proteinspector/jqcml.
A Review of Enterprise Architecture Use in Defence
2014-09-01
dictionary of terms; architecture description language; architectural information (pertaining both to specific projects and higher level)...
Mars Science Laboratory thermal control architecture
NASA Technical Reports Server (NTRS)
Bhandari, Pradeep; Birur, Gajanana; Pauken, Michael; Paris, Anthony; Novak, Keith; Prina, Mauro; Ramirez, Brenda; Bame, David
2005-01-01
The Mars Science Laboratory (MSL) mission to land a large rover on Mars is being planned for launch in 2009. This paper describes the basic architecture of the thermal control system, the challenges involved, and the methods used to overcome them through an innovative architecture that maximizes the use of heritage from past projects while meeting the requirements of the design.
Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J
2011-01-01
The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Its importance increases further when the system involves a group of heterogeneous robots that should cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of performing maintenance tasks in agricultural environments. Some important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is allowed in the proposed control system. These concepts have been realized in the new architecture presented in this paper by integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA). HLA can be considered the most important part because it not only allows the data distribution and implicit communication among the parts of the system but also allows simultaneous operation with simulated and real entities, thus allowing the use of hybrid systems in the development of applications.
Evaluation of an Atmosphere Revitalization Subsystem for Deep Space Exploration Missions
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Abney, Morgan B.; Conrad, Ruth E.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Knox, James C.; Newton, Robert L.; Parrish, Keith J.; Takada, Kevin C.;
2015-01-01
An Atmosphere Revitalization Subsystem (ARS) suitable for deployment aboard deep space exploration mission vehicles has been developed and functionally demonstrated. This modified ARS process design architecture was derived from the International Space Station's (ISS) basic ARS. Primary functions considered in the architecture include trace contaminant control, carbon dioxide removal, carbon dioxide reduction, and oxygen generation. Candidate environmental monitoring instruments were also evaluated. The process architecture rearranges unit operations and employs equipment operational changes to reduce mass, simplify, and improve the functional performance for trace contaminant control, carbon dioxide removal, and oxygen generation. Results from integrated functional demonstration are summarized and compared to the performance observed during previous testing conducted on an ISS-like subsystem architecture and a similarly evolved process architecture. Considerations for further subsystem architecture and process technology development are discussed.
Autonomous Distributed Congestion Control Scheme in WCDMA Network
NASA Astrophysics Data System (ADS)
Ahmad, Hafiz Farooq; Suguri, Hiroki; Choudhary, Muhammad Qaisar; Hassan, Ammar; Liaqat, Ali; Khan, Muhammad Umer
Wireless technology has become widely popular and an important means of communication. A key issue in delivering wireless services is the problem of congestion, which has an adverse impact on the Quality of Service (QoS), especially timeliness. Although a lot of work has been done in the context of RRM (Radio Resource Management), the delivery of quality service to the end user still remains a challenge. Therefore, there is a need for a system that provides real-time services to the users through high assurance. We propose an intelligent agent-based approach to guarantee a predefined Service Level Agreement (SLA) with heterogeneous user requirements for appropriate bandwidth allocation in QoS-sensitive cellular networks. The proposed system architecture exploits the Case Based Reasoning (CBR) technique to handle the RRM process of congestion management. The system accomplishes the predefined SLA through the use of a Retrieval and Adaptation Algorithm based on the CBR case library. The proposed intelligent agent architecture gives autonomy to the Radio Network Controller (RNC) or Base Station (BS) in accepting, rejecting or buffering a connection request to manage system bandwidth. Instead of simply blocking the connection request as congestion hits the system, different buffering durations are allocated to diverse classes of users based on their SLA. This increases the opportunity of connection establishment and reduces the call blocking rate extensively in a changing environment. We carry out simulations of the proposed system that verify efficient performance for congestion handling. The results also show the built-in dynamism of our system to cater for a variety of SLA requirements.
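The buffering policy described, holding a request for an SLA-dependent duration under congestion rather than blocking it immediately, can be sketched as a small admission routine. The SLA classes, hold times, and bandwidth figures below are hypothetical, and the case-based retrieval and adaptation steps are not modeled.

```python
# Minimal sketch of SLA-aware admission with class-dependent buffering under congestion.
# SLA classes, buffering durations, and bandwidth numbers are hypothetical; the
# case-based reasoning component of the referenced architecture is not modeled here.

from collections import deque

BUFFER_SECONDS = {"gold": 8.0, "silver": 4.0, "bronze": 1.0}  # per-class hold time

class AdmissionController:
    def __init__(self, capacity_kbps: float):
        self.capacity = capacity_kbps
        self.load = 0.0
        self.waiting = deque()  # entries of (request_id, sla_class, kbps, deadline)

    def request(self, request_id: str, sla_class: str, kbps: float, now: float) -> str:
        if self.load + kbps <= self.capacity:
            self.load += kbps
            return "accepted"
        deadline = now + BUFFER_SECONDS.get(sla_class, 0.0)
        self.waiting.append((request_id, sla_class, kbps, deadline))
        return "buffered"

    def release(self, kbps: float, now: float) -> list:
        """Free bandwidth, then admit buffered requests whose deadline has not passed."""
        self.load -= kbps
        admitted, still_waiting = [], deque()
        while self.waiting:
            rid, cls, need, deadline = self.waiting.popleft()
            if now > deadline:
                continue  # deadline expired: the request is finally blocked
            if self.load + need <= self.capacity:
                self.load += need
                admitted.append(rid)
            else:
                still_waiting.append((rid, cls, need, deadline))
        self.waiting = still_waiting
        return admitted

if __name__ == "__main__":
    ctrl = AdmissionController(capacity_kbps=384.0)
    print(ctrl.request("u1", "gold", 256.0, now=0.0))    # accepted
    print(ctrl.request("u2", "bronze", 256.0, now=1.0))  # buffered (cell congested)
    print(ctrl.release(256.0, now=2.5))                  # bronze deadline expired -> []
```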
Assuring SS7 dependability: A robustness characterization of signaling network elements
NASA Astrophysics Data System (ADS)
Karmarkar, Vikram V.
1994-04-01
Current and evolving telecommunication services will rely on signaling network performance and reliability properties to build competitive call and connection control mechanisms under increasing demands on flexibility without compromising on quality. The dimensions of signaling dependability most often evaluated are the Rate of Call Loss and End-to-End Route Unavailability. A third dimension of dependability that captures the concern about large or catastrophic failures can be termed Network Robustness. This paper is concerned with the dependability aspects of the evolving Signaling System No. 7 (SS7) networks and attempts to strike a balance between the probabilistic and deterministic measures that must be evaluated to accomplish a risk-trend assessment to drive architecture decisions. Starting with high-level network dependability objectives and field experience with SS7 in the U.S., potential areas of growing stringency in network element (NE) dependability are identified to improve against current measures of SS7 network quality, as per-call signaling interactions increase. A sensitivity analysis is presented to highlight the impact due to imperfect coverage of duplex network component or element failures (i.e., correlated failures), to assist in the setting of requirements on NE robustness. A benefit analysis, covering several dimensions of dependability, is used to generate the domain of solutions available to the network architect in terms of network and network element fault tolerance that may be specified to meet the desired signaling quality goals.
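The sensitivity to imperfect coverage can be illustrated with a back-of-the-envelope unavailability estimate for a duplex (mated-pair) element: uncovered single failures take the pair down immediately, while covered failures cause an outage only if the survivor fails before repair completes. The approximation and rates below are a textbook-style simplification with made-up numbers, not the model used in the paper.

```python
# Back-of-the-envelope unavailability of a duplex (mated-pair) network element with
# imperfect failure coverage. A standard simplification valid when failure rates are
# much smaller than repair rates; the rates below are hypothetical.

def duplex_unavailability(mtbf_hours: float, mttr_hours: float, coverage: float) -> float:
    lam = 1.0 / mtbf_hours   # per-unit failure rate
    mu = 1.0 / mttr_hours    # repair rate
    # Uncovered single failures take the pair down immediately; covered failures
    # only cause an outage if the surviving unit fails before repair completes.
    return (2.0 * lam / mu) * ((1.0 - coverage) + coverage * lam / mu)

if __name__ == "__main__":
    for c in (0.99, 0.999, 0.9999):
        u = duplex_unavailability(mtbf_hours=10_000.0, mttr_hours=4.0, coverage=c)
        print(f"coverage={c:.4f}  unavailability~{u:.2e}  (~{u * 525_960:.1f} min/year)")
```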
Yang, Yunpeng; Zhang, Lu; Huang, He; Yang, Chen; Yang, Sheng; Gu, Yang; Jiang, Weihong
2017-01-24
Catabolite control protein A (CcpA) is the master regulator in Gram-positive bacteria that mediates carbon catabolite repression (CCR) and carbon catabolite activation (CCA), two fundamental regulatory mechanisms that enable competitive advantages in carbon catabolism. It is generally regarded that CcpA exerts its regulatory role by binding to a typical 14- to 16-nucleotide (nt) consensus site that is called a catabolite response element (cre) within the target regions. However, here we report a previously unknown noncanonical flexible architecture of the CcpA-binding site in solventogenic clostridia, providing new mechanistic insights into catabolite regulation. This novel CcpA-binding site, named cre-var, has a unique architecture that consists of two inverted repeats and an intervening spacer, all of which are variable in nucleotide composition and length, except for a 6-bp core palindromic sequence (TGTAAA/TTTACA). It was found that the length of the intervening spacer of cre-var can affect CcpA binding affinity, and moreover, the core palindromic sequence of cre-var is the key structure for regulation. Such a variable architecture of cre-var shows potential importance for CcpA's diverse and fine regulation. A total of 103 potential cre-var sites were discovered in solventogenic Clostridium acetobutylicum, of which 42 sites were picked out for electrophoretic mobility shift assays (EMSAs), and 30 sites were confirmed to be bound by CcpA. These 30 cre-var sites are associated with 27 genes involved in many important pathways. Also of significance, the cre-var sites are found to be widespread and function in a great number of taxonomically different Gram-positive bacteria, including pathogens, suggesting their global role in Gram-positive bacteria. In Gram-positive bacteria, the global regulator CcpA controls a large number of important physiological and metabolic processes. Although a typical consensus CcpA-binding site, cre, has been identified, it remains poorly explored for the diversity of CcpA-mediated catabolite regulation. Here, we discovered a novel flexible CcpA-binding site architecture (cre-var) that is highly variable in both length and base composition but follows certain principles, providing new insights into how CcpA can differentially recognize a variety of target genes to form a complicated regulatory network. A comprehensive search further revealed the wide distribution of cre-var sites in Gram-positive bacteria, indicating it may have a universal function. This finding is the first to characterize such a highly flexible transcription factor-binding site architecture, which would be valuable for deeper understanding of CcpA-mediated global catabolite regulation in bacteria. Copyright © 2017 Yang et al.
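A naive way to scan a sequence for candidate sites of this shape, the 6-bp core TGTAAA and its reverse complement TTTACA separated by a variable spacer, is a regular-expression search, sketched below. The spacer bounds are invented placeholders; the actual site definition in the study also involves the flanking inverted repeats and experimentally determined length constraints.

```python
# Naive scan for candidate cre-var-like sites: the 6-bp core TGTAAA and its reverse
# complement TTTACA separated by a variable spacer. Spacer bounds are hypothetical;
# the real site definition involves flanking inverted repeats and length constraints
# established experimentally in the referenced study.

import re

CORE = "TGTAAA"
CORE_RC = "TTTACA"              # reverse complement of the core
MIN_SPACER, MAX_SPACER = 0, 20  # hypothetical spacer-length window

pattern = re.compile(f"{CORE}[ACGT]{{{MIN_SPACER},{MAX_SPACER}}}{CORE_RC}")

def find_candidate_sites(sequence: str):
    """Return (start, end, matched_site) tuples for every candidate site."""
    sequence = sequence.upper()
    return [(m.start(), m.end(), m.group()) for m in pattern.finditer(sequence)]

if __name__ == "__main__":
    demo = "ccgTGTAAAacgtacgtTTTACAggttTGTAAATTTACAaa"
    for start, end, site in find_candidate_sites(demo):
        print(f"{start:>3}-{end:<3} {site}")
```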
A run-time control architecture for the JPL telerobot
NASA Technical Reports Server (NTRS)
Balaram, J.; Lokshin, A.; Kreutz, K.; Beahan, J.
1987-01-01
An architecture for implementing the process-level decision making for a hierarchically structured telerobot currently being implemented at the Jet Propulsion Laboratory (JPL) is described. Constraints on the architecture design, architecture partitioning concepts, and a detailed description of the existing and proposed implementations are provided.
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architecture paradigm supports building flexible distributed service systems. In the paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built on the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
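Fuzzy request distribution of this general kind can be illustrated by fuzzifying each server's load and response time, evaluating a couple of rules, and routing to the highest-scoring server. The membership functions, rules, and figures below are hypothetical; the broker's actual fuzzy-neural network is not reproduced.

```python
# Tiny illustration of fuzzy request routing: fuzzify load and response time, apply
# simple min/max rule evaluation, and pick the most suitable server. All membership
# functions, rules, and figures are hypothetical.

def low(x: float, full: float, zero: float) -> float:
    """Membership in 'low': 1 below `full`, 0 above `zero`, linear in between."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def suitability(load_pct: float, resp_ms: float) -> float:
    load_is_low = low(load_pct, full=20.0, zero=90.0)
    resp_is_low = low(resp_ms, full=50.0, zero=500.0)
    # Rule 1: IF load is low AND response time is low THEN highly suitable (min = AND).
    # Rule 2: IF load is low THEN somewhat suitable (scaled membership).
    return max(min(load_is_low, resp_is_low), 0.5 * load_is_low)

if __name__ == "__main__":
    servers = {"s1": (35.0, 120.0), "s2": (70.0, 60.0), "s3": (15.0, 400.0)}
    scores = {name: suitability(*metrics) for name, metrics in servers.items()}
    best = max(scores, key=scores.get)
    print(scores, "->", best)
```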
Ground support system methodology and architecture
NASA Technical Reports Server (NTRS)
Schoen, P. D.
1991-01-01
A synergistic approach to systems test and support is explored. A building-block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. The determination of design errors at the earliest phase reduces the cost of vehicle ownership. A distributed, scalable architecture based on industry standards maximizes transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. An intelligent data management architecture also reduces analysis time and cost through automation.
Storage system architectures and their characteristics
NASA Technical Reports Server (NTRS)
Sarandrea, Bryan M.
1993-01-01
Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a simpler solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus around specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture
NASA Technical Reports Server (NTRS)
Behbahani, Alireza; Culley, Dennis; Garg, Sanjay; Millar, Richard; Smith, Bert; Wood, Jim; Mahoney, Tim; Quinn, Ronald; Carpenter, Sheldon; Mailander, Bill;
2007-01-01
A Distributed Engine Control Working Group (DECWG) consisting of the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) and industry has been formed to examine the current and future requirements of propulsion engine systems. The scope of this study will include an assessment of the paradigm shift from centralized engine control architecture to an architecture based on distributed control utilizing open system standards. Included will be a description of the work begun in the 1990s, which continues today, followed by the identification of the remaining technical challenges which present barriers to on-engine distributed control.
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
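The messaging layer described, separate controllers exchanging a predefined set of TCP/IP socket messages, can be sketched with the standard library alone. The message names and JSON line framing below are invented for illustration; the real system's message set and the SECS-II link to the etcher are not shown.

```python
# Minimal sketch of a cell-controller-style message exchange over TCP sockets.
# Message names and the JSON line framing are invented; the real system uses its own
# predefined message set (and SECS-II toward the etcher).

import json
import socket
import threading

HOST, PORT = "127.0.0.1", 50007
ready = threading.Event()

def equipment_controller():
    """Pretend equipment/sensor controller: answers status queries from the cell controller."""
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            request = json.loads(stream.readline())
            reply = {"msg": "STATUS_REPLY", "etch_depth_nm": 412.7,
                     "in_reply_to": request["msg"]}
            stream.write(json.dumps(reply) + "\n")
            stream.flush()

def cell_controller():
    """Cell controller side: send a status query and read the reply."""
    with socket.create_connection((HOST, PORT)) as sock, sock.makefile("rw") as stream:
        stream.write(json.dumps({"msg": "STATUS_QUERY", "sensor": "ccd_interferometer"}) + "\n")
        stream.flush()
        print("cell controller received:", json.loads(stream.readline()))

if __name__ == "__main__":
    server = threading.Thread(target=equipment_controller, daemon=True)
    server.start()
    ready.wait()
    cell_controller()
    server.join()
```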
Qubit Architecture with High Coherence and Fast Tunable Coupling.
Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M
2014-11-28
We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.
The Health Service Bus: an architecture and case study in achieving interoperability in healthcare.
Ryan, Amanda; Eklund, Peter
2010-01-01
Interoperability in healthcare is a requirement for effective communication between entities, to ensure timely access to up-to-date patient information and medical knowledge, and thus facilitate consistent patient care. An interoperability framework called the Health Service Bus (HSB), based on the Enterprise Service Bus (ESB) middleware software architecture, is presented here as a solution to all three levels of interoperability as defined by the HL7 EHR Interoperability Workgroup in their definitive white paper "Coming to Terms". A prototype HSB system was implemented based on the Mule open-source ESB and is outlined and discussed, followed by a clinically-based example.
A multitasking finite state architecture for computer control of an electric powertrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burba, J.C.
1984-01-01
Finite state techniques provide a common design language between the control engineer and the computer engineer for event driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
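A finite state representation of a control strategy of this general sort reduces to a transition table plus per-state actions. The states, events, and actions below are hypothetical placeholders for an electric powertrain; the time-sliced multitasking scheduler of the paper is not modeled.

```python
# Minimal finite-state sketch of an event-driven powertrain control strategy.
# States, events, and actions are hypothetical placeholders.

TRANSITIONS = {
    ("idle", "accel_pedal_pressed"): "drive",
    ("drive", "brake_pedal_pressed"): "regen_braking",
    ("regen_braking", "vehicle_stopped"): "idle",
    ("drive", "fault_detected"): "limp_home",
    ("regen_braking", "fault_detected"): "limp_home",
}

ACTIONS = {
    "idle": "motor torque command = 0",
    "drive": "map pedal position to motor torque",
    "regen_braking": "command negative torque, route energy to battery",
    "limp_home": "limit torque and speed, alert driver",
}

class PowertrainFSM:
    def __init__(self, state: str = "idle"):
        self.state = state

    def on_event(self, event: str) -> str:
        """Advance the state machine and return the action for the new state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return ACTIONS[self.state]

if __name__ == "__main__":
    fsm = PowertrainFSM()
    for event in ["accel_pedal_pressed", "brake_pedal_pressed", "fault_detected"]:
        action = fsm.on_event(event)
        print(f"{event:>22} -> state={fsm.state}: {action}")
```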
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom
2014-01-01
Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. Some also have an engineering support center, which may be co-located with either the launch or flight control center, or in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles, including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure, while differences in functional distribution also exist. The driving functions which lead to these factors are considered, and a model of control center architectures is proposed which supports these commonalities and variations.
Fault tolerant architectures for integrated aircraft electronics systems
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.
1983-01-01
Work into possible architectures for future flight control computer systems is described. Ada for Fault-Tolerant Systems, the NETS Network Error-Tolerant System architecture, and voting in asynchronous systems are covered.
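One building block mentioned above, voting among redundant channels, reduces to majority selection for discrete outputs and median (mid-value) selection for analog signals. The sketch below is a generic illustration with made-up values; it does not reproduce the asynchronous voting schemes analyzed in the report.

```python
# Simple voting among redundant channels: exact majority for discrete outputs and
# median selection for analog values. Channel counts and tolerance are illustrative.

from collections import Counter
from statistics import median

def majority_vote(values):
    """Return the majority value, or None if no value has a strict majority."""
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) // 2 else None

def mid_value_select(values, tolerance):
    """Median of redundant analog channels, flagging channels far from the median."""
    m = median(values)
    suspects = [i for i, v in enumerate(values) if abs(v - m) > tolerance]
    return m, suspects

if __name__ == "__main__":
    print(majority_vote(["climb", "climb", "descend"]))          # -> climb
    print(mid_value_select([101.2, 100.9, 250.0], tolerance=5))  # -> (101.2, [2])
```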
NASA Technical Reports Server (NTRS)
Boulanger, Richard; Overland, David
2004-01-01
Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies, and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates of as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware to software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
This paper describes a project to evaluate the feasibility of combining Grid and Numerical Propulsion System Simulation (NPSS) technologies, with a view to leveraging the numerous advantages of commodity technologies in a high-performance Grid environment. A team from the NASA Glenn Research Center and Argonne National Laboratory has been studying three problems: a desktop-controlled parameter study using Excel (Microsoft Corporation); a multicomponent application using ADPAC, NPSS, and a controller program; and an aviation safety application running about 100 jobs in near real time. The team has successfully demonstrated (1) a Common Object Request Broker Architecture (CORBA)-to-Globus resource manager gateway that allows CORBA remote procedure calls to be used to control the submission and execution of programs on workstations and massively parallel computers, (2) a gateway from the CORBA Trader service to the Grid information service, and (3) a preliminary integration of CORBA and Grid security mechanisms. We have applied these technologies to two applications related to NPSS, namely a parameter study and a multicomponent simulation.
NASA Astrophysics Data System (ADS)
Peach, Nicholas
2011-06-01
In this paper, we present a method for a highly decentralized yet structured and flexible approach to achieve systems interoperability by orchestrating data and behavior across distributed military systems and assets, with security considerations addressed from the beginning. We describe an architecture of a tool-based design of business processes called Decentralized Operating Procedures (DOP) and the deployment of DOPs onto run-time nodes, supporting the parallel execution of each DOP at multiple implementation nodes (fixed locations, vehicles, sensors and soldiers) throughout a battlefield to achieve flexible and reliable interoperability. The described method allows the architecture to: a) provide fine-grain control of the collection and delivery of data between systems; b) allow the definition of a DOP at a strategic (or doctrine) level by defining required system behavior through process syntax at an abstract level, agnostic of implementation details; c) deploy a DOP into heterogeneous environments by the nomination of actual system interfaces and roles at a tactical level; d) rapidly deploy new DOPs in support of new tactics and systems; e) support multiple instances of a DOP in support of multiple missions; f) dynamically add or remove run-time nodes from a specific DOP instance as mission requirements change; g) model the passage of, and business reasons for, the transmission of each data message to a specific DOP instance to support accreditation; h) run on low-powered computers with lightweight tactical messaging. This approach is designed to extend the capabilities of existing standards, such as the Generic Vehicle Architecture (GVA).
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions
NASA Astrophysics Data System (ADS)
Buddala, Santhoshi Snigdha
Since the industrial revolution, fossil fuels like petroleum, coal, oil, and natural gas and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts which are hazardous in nature, tend to deplete the protective layers, and affect the overall environmental balance. Fossil fuels are also bounded resources of energy, and the rapid depletion of these sources has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. By retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and modeled to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency by comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
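Parallel-connected cells share a common voltage and sum their currents, with shaded cells contributing less photocurrent, so the array behavior can be illustrated with an ideal single-diode model. The parameters below are made up, and this explicit model is not the small-signal model or SPICE setup used in the thesis.

```python
# Illustrative parallel PV model: ideal single-diode cells summed at a common voltage,
# with partial shading modeled as reduced photocurrent. Parameters are made up.

import math

Q = 1.602e-19    # electron charge (C)
K = 1.381e-23    # Boltzmann constant (J/K)
T = 298.0        # cell temperature (K)
VT = K * T / Q   # thermal voltage (~25.7 mV)

def cell_current(v: float, irradiance: float, i_ph_stc: float = 3.0,
                 i_sat: float = 1e-9, ideality: float = 1.0) -> float:
    """Ideal single-diode cell: I = Iph - I0*(exp(V/(n*Vt)) - 1)."""
    i_ph = i_ph_stc * irradiance  # photocurrent scales with relative irradiance (0..1)
    return i_ph - i_sat * (math.exp(v / (ideality * VT)) - 1.0)

def parallel_array_current(v: float, irradiances) -> float:
    """Cells in parallel share the voltage; their currents add."""
    return sum(cell_current(v, g) for g in irradiances)

if __name__ == "__main__":
    shading = [1.0, 1.0, 0.4, 0.2]  # two cells partially shaded
    for v in (0.0, 0.3, 0.5, 0.55):
        i = parallel_array_current(v, shading)
        print(f"V = {v:.2f} V  I = {i:6.3f} A  P = {v * i:6.3f} W")
```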
NASA Astrophysics Data System (ADS)
Zahari, R.; Ariffin, M. H.; Othman, N.
2018-02-01
Free Trade Agreements, as implemented by the Malaysian government, call on local businesses such as landscape architecture consultancy firms to expand internationally and to strengthen their performance to compete locally. The performance of a landscape architecture firm, as a design firm, depends entirely on the creativity of its subordinates. Past research has neglected studying the influence of a leader's capitals on subordinates' creativity, especially in Malaysian landscape architecture firms. The aim of this research is to investigate the influence of subordinates' perceptions of the leader's Bourdieu capitals on promoting subordinates' creative behaviours in Malaysian landscape architecture firms. The sample chosen for this research consists of subordinates in registered landscape architecture firms. Data was collected using qualitative semi-structured interviews with 13 respondents and analysed using qualitative category coding. Aspects of the leader's social capital (i.e. knowledge acquisition, problem solving, motivation boosting), human capital (guidance, demotivating leadership, experiential knowledge, knowledge acquisition), and emotional capital (chemistry with the leader, respect, knowledge acquisition, trust, understanding, self-inflicted demotivation) that influence subordinates' creativity were uncovered from the data. The main finding is that the leader's capitals promote creativity among subordinate landscape architects and assistant landscape architects through three main factors: knowledge acquisition, motivation, and the leader's ability to influence through positive relationships. The finding contributes to a new way of understanding the leader's characteristics that influence subordinates' creativity.
WebGIS based community services architecture by griddization managements and crowdsourcing services
NASA Astrophysics Data System (ADS)
Wang, Haiyin; Wan, Jianhua; Zeng, Zhe; Zhou, Shengchuan
2016-11-01
Along with the fast economic development of cities, rapid urbanization, and population surges in China, social community service mechanisms need to be rationalized and policy standards need to be unified, which results in various types of conflicts and challenges for government community services. Based on WebGIS technology, this article provides a community service architecture built on gridded management and crowdsourcing services. The WebGIS service architecture includes two parts: the cloud part and the mobile part. The cloud part refers to community service centres, which can instantaneously respond to an emergency, visualize the scene of the emergency, and analyse the data from the emergency. The mobile part refers to the mobile terminal, which can call the centre, report an event, collect data and verify the feedback. This WebGIS-based community service system, built for Huangdao District of Qingdao, was awarded as one of the 2015 national typical cases of innovation in social governance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar
2014-01-01
In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times over PCA and Parallel PCA, respectively.
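The EM approach to PCA replaces the covariance eigendecomposition with a two-step iteration: project the data onto the current subspace estimate (E-step), then re-estimate the subspace from those projections (M-step). The NumPy sketch below shows a generic, serial version of this iteration (in the style of Roweis's EM for PCA); it is not the parallel PEM-PCA architecture of the paper.

```python
# Generic EM-for-PCA iteration: avoids building the full covariance matrix and its
# eigendecomposition. Plain, serial NumPy illustration only.

import numpy as np

def em_pca(data: np.ndarray, n_components: int, n_iter: int = 50, seed: int = 0):
    """data: (n_samples, n_features). Returns an orthonormal basis (n_features, n_components)."""
    rng = np.random.default_rng(seed)
    y = (data - data.mean(axis=0)).T                  # centered data, shape (d, n)
    w = rng.standard_normal((y.shape[0], n_components))
    for _ in range(n_iter):
        x = np.linalg.solve(w.T @ w, w.T @ y)         # E-step: project data onto subspace
        w = y @ x.T @ np.linalg.inv(x @ x.T)          # M-step: re-estimate the subspace
    q, _ = np.linalg.qr(w)                            # orthonormalize the converged basis
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    latent = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 20))
    samples = latent + 0.05 * rng.standard_normal((500, 20))
    basis = em_pca(samples, n_components=2)
    # Compare against the two leading eigenvectors of the sample covariance matrix:
    # the singular values below are the cosines of the principal angles (should be ~1).
    _, vecs = np.linalg.eigh(np.cov(samples, rowvar=False))
    overlap = np.linalg.svd(basis.T @ vecs[:, -2:], compute_uv=False)
    print("principal-angle cosines vs top-2 eigenvectors:", overlap.round(4))
```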
A Method to Categorize 2-Dimensional Patterns Using Statistics of Spatial Organization.
López-Sauceda, Juan; Rueda-Contreras, Mara D
2017-01-01
We developed a measurement framework of spatial organization to categorize 2-dimensional patterns from 2 multiscalar biological architectures. We propose that underlying shapes of biological entities can be approached using the statistical concept of degrees of freedom, defining it through expansion of area variability in a pattern. To help scope this suggestion, we developed a mathematical argument recognizing the deep foundations of area variability in a polygonal pattern (spatial heterogeneity). This measure uses a parameter called eutacticity. Our measuring platform of spatial heterogeneity can assign particular ranges of distribution of spatial areas for 2 biological architectures: ecological patterns of Namibia fairy circles and epithelial sheets. The spatial organizations of our 2 analyzed biological architectures are demarcated by being in a particular position among spatial order and disorder. We suggest that this theoretical platform can give us some insights about the nature of shapes in biological systems to understand organizational constraints.
Web-based training: a new paradigm in computer-assisted instruction in medicine.
Haag, M; Maylein, L; Leven, F J; Tönshoff, B; Haux, R
1999-01-01
Computer-assisted instruction (CAI) programs based on internet technologies, especially on the world wide web (WWW), provide new opportunities in medical education. The aim of this paper is to examine different aspects of such programs, which we call 'web-based training (WBT) programs', and to differentiate them from conventional CAI programs. First, we will distinguish five different interaction types: presentation; browsing; tutorial dialogue; drill and practice; and simulation. In contrast to conventional CAI, there are four architectural types of WBT programs: client-based; remote data and knowledge; distributed teaching; and server-based. We will discuss the implications of the different architectures for developing WBT software. WBT programs have to meet other requirements than conventional CAI programs. The most important tools and programming languages for developing WBT programs will be listed and assigned to the architecture types. For the future, we expect a trend from conventional CAI towards WBT programs.
Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering
NASA Astrophysics Data System (ADS)
Atkinson, Colin
The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational), and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as the UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.
High Dynamic Range Cognitive Radio Front Ends: Architecture to Evaluation
NASA Astrophysics Data System (ADS)
Ashok, Arun; Subbiah, Iyappan; Varga, Gabor; Schrey, Moritz; Heinen, Stefan
2016-07-01
The advent of TV white space digitization has released frequencies from 470 MHz to 790 MHz to be utilized opportunistically. The secondary user can utilize these so-called TV white spaces in the absence of primary users. The most important challenge for this coexistence is mutual interference. While strong TV stations can completely saturate the receiver of the cognitive radio (CR), the cognitive radio's spurious tones can disturb other primary users and white space devices. The aim of this paper is to address the challenges for enabling cognitive radio applications in WLAN and LTE. In this process, architectural considerations for the design of cognitive radio front ends are discussed. With high-IF converters, faster and more flexible implementations of CR-enabled WLAN and LTE are shown. The effectiveness of the architecture is shown by evaluating the CR front ends for compliance with standards, namely 802.11b/g (WLAN) and 3GPP TS 36.101 (LTE).
NASA Technical Reports Server (NTRS)
Goldstein, David
1991-01-01
Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.
Performance of Optimized Actuator and Sensor Arrays in an Active Noise Control System
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.; Lyle, K. H.; Cline, J. H.; Cabell, R. H.
1996-01-01
Experiments have been conducted in NASA Langley's Acoustics and Dynamics Laboratory to determine the effectiveness of optimized actuator/sensor architectures and controller algorithms for active control of harmonic interior noise. Tests were conducted in a large scale fuselage model - a composite cylinder which simulates a commuter class aircraft fuselage with three sections of trim panel and a floor. Using an optimization technique based on the component transfer functions, combinations of 4 out of 8 piezoceramic actuators and 8 out of 462 microphone locations were evaluated against predicted performance. A combinatorial optimization technique called tabu search was employed to select the optimum transducer arrays. Three test frequencies represent the cases of a strong acoustic and strong structural response, a weak acoustic and strong structural response and a strong acoustic and weak structural response. Noise reduction was obtained using a Time Averaged/Gradient Descent (TAGD) controller. Results indicate that the optimization technique successfully predicted best and worst case performance. An enhancement of the TAGD control algorithm was also evaluated. The principal components of the actuator/sensor transfer functions were used in the PC-TAGD controller. The principal components are shown to be independent of each other while providing control as effective as the standard TAGD.
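Tabu search of the kind used to pick actuator and microphone subsets keeps a short memory of recent moves and forbids undoing them, which lets the search escape local optima. The skeleton below selects k locations from N candidates against a toy additive objective; the objective, move set, and tenure are placeholders, not the transfer-function-based cost used in the experiment.

```python
# Skeleton of tabu search for choosing k transducer locations out of N candidates.
# The random objective, swap move, and tabu tenure are placeholders; the experiment's
# actual cost was built from measured component transfer functions.

import random

def tabu_select(n_candidates: int, k: int, score, iterations: int = 200, tenure: int = 7):
    current = set(random.sample(range(n_candidates), k))
    best, best_score = set(current), score(current)
    tabu = {}  # candidate index -> iteration until which it may not be moved again

    for it in range(iterations):
        moves = [(i, j) for i in current for j in range(n_candidates)
                 if j not in current and tabu.get(i, -1) < it and tabu.get(j, -1) < it]
        if not moves:
            continue
        # Take the best admissible swap (remove i, add j), even if it worsens the score.
        i, j = max(moves, key=lambda m: score(current - {m[0]} | {m[1]}))
        current = current - {i} | {j}
        tabu[i] = tabu[j] = it + tenure
        if score(current) > best_score:
            best, best_score = set(current), score(current)
    return best, best_score

if __name__ == "__main__":
    random.seed(2)
    gains = [random.random() for _ in range(40)]   # toy per-location merit
    objective = lambda subset: sum(gains[i] for i in subset)
    chosen, value = tabu_select(40, k=4, score=objective)
    print(sorted(chosen), round(value, 3))
```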
NASA Astrophysics Data System (ADS)
Bernardet, Ulysses; Bermúdez I Badia, Sergi; Duff, Armin; Inderbitzin, Martin; Le Groux, Sylvain; Manzolli, Jônatas; Mathews, Zenon; Mura, Anna; Väljamäe, Aleksander; Verschure, Paul F. M. J.
The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and is conceptualized as a general-purpose infrastructure for research in the field of psychology and human-artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, the application in research on the relationship between human experience and multi-modal stimulation, and an investigation of a mixed-reality social interaction paradigm.
Modelling the control of interceptive actions.
Beek, P J; Dessing, J C; Peper, C E; Bullock, D
2003-01-01
In recent years, several phenomenological dynamical models have been formulated that describe how perceptual variables are incorporated in the control of motor variables. We call these short-route models as they do not address how perception-action patterns might be constrained by the dynamical properties of the sensory, neural and musculoskeletal subsystems of the human action system. As an alternative, we advocate a long-route modelling approach in which the dynamics of these subsystems are explicitly addressed and integrated to reproduce interceptive actions. The approach is exemplified through a discussion of a recently developed model for interceptive actions consisting of a neural network architecture for the online generation of motor outflow commands, based on time-to-contact information and information about the relative positions and velocities of hand and ball. This network is shown to be consistent with both behavioural and neurophysiological data. Finally, some problems are discussed with regard to the question of how the motor outflow commands (i.e. the intended movement) might be modulated in view of the musculoskeletal dynamics. PMID:14561342
WARP: Weight Associative Rule Processor. A dedicated VLSI fuzzy logic megacell
NASA Technical Reports Server (NTRS)
Pagni, A.; Poluzzi, R.; Rizzotto, G. G.
1992-01-01
During the last five years Fuzzy Logic has gained enormous popularity in the academic and industrial worlds. The success of this new methodology has led the microelectronics industry to create a new class of machines, called Fuzzy Machines, to overcome the limitations of traditional computing systems when utilized as fuzzy systems. This paper gives an overview of the methods by which Fuzzy Logic data structures are represented in these machines (each with its own advantages and inefficiencies). Next, the paper introduces WARP (Weight Associative Rule Processor), a dedicated VLSI megacell allowing the realization of a fuzzy controller suitable for a wide range of applications. WARP represents an innovative approach to VLSI fuzzy controllers by utilizing different types of data structures for characterizing the membership functions during the various stages of fuzzy processing. WARP's dedicated architecture has been designed to achieve high performance by exploiting the computational advantages offered by the different data representations.
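As a rough illustration of the kind of rule evaluation such a fuzzy controller performs, the sketch below evaluates triangular membership functions and defuzzifies two rules by a weighted average. The breakpoints, rules, and temperature/power scenario are invented for illustration; WARP's silicon data structures are of course very different from these Python objects.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative temperature controller: two rules over one input and one output.
def infer(temp):
    # Fuzzification: rule activation strengths.
    cold = tri(temp, 0, 10, 20)
    hot  = tri(temp, 15, 30, 45)
    # Output singletons: heater power suggested by each rule.
    rules = [(cold, 80.0),   # IF temp is cold THEN power is high
             (hot,  10.0)]   # IF temp is hot  THEN power is low
    num = sum(w * p for w, p in rules)
    den = sum(w for w, p in rules)
    return num / den if den > 0 else 0.0  # weighted-average defuzzification

for t in (5, 18, 35):
    print(t, round(infer(t), 1))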
Novel x-ray silicon detector for 2D imaging and high-resolution spectroscopy
NASA Astrophysics Data System (ADS)
Castoldi, Andrea; Gatti, Emilio; Guazzoni, Chiara; Longoni, Antonio; Rehak, Pavel; Strueder, Lothar
1999-10-01
A novel x-ray silicon detector for 2D imaging has been recently proposed. The detector, called Controlled-Drift Detector, is operated in integrate-readout mode. Its basic feature is the fast transport of the integrated charge to the output electrode by means of a uniform drift field. The drift time of the charge packet identifies the pixel of incidence. A new architecture to implement the Controlled-Drift Detector concept will be presented. The potential wells for the integration of the signal charge are obtained by means of a suitable pattern of deep n-implants and deep p-implants. During the readout mode, the signal electrons are transferred into the drift channel that flanks each column of potential wells, where they drift towards the collecting electrode at constant velocity. The first experimental measurements demonstrate the successful integration, transfer and drift of the signal electrons. The low output capacitance of the readout electrode together with the on-chip front-end electronics allows high resolution spectroscopy of the detected photons.
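The drift-time-to-pixel mapping at the heart of this readout can be stated in a couple of lines. In the sketch below the drift velocity and pixel pitch are placeholder values, not figures from the paper.

DRIFT_VELOCITY_UM_PER_NS = 5.0   # assumed constant drift velocity (illustrative)
PIXEL_PITCH_UM = 50.0            # assumed pixel pitch along the drift direction

def pixel_of_incidence(drift_time_ns):
    """Map the measured drift time of a charge packet to a pixel index.

    With a uniform drift field the packet moves at constant velocity, so the
    distance from the collecting electrode, and hence the pixel row, is
    proportional to the drift time.
    """
    distance_um = DRIFT_VELOCITY_UM_PER_NS * drift_time_ns
    return int(distance_um // PIXEL_PITCH_UM)

print(pixel_of_incidence(125.0))  # -> pixel 12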
Traffic handling capability of a broadband indoor wireless network using CDMA multiple access
NASA Astrophysics Data System (ADS)
Zhang, Chang G.; Hafez, H. M.; Falconer, David D.
1994-05-01
CDMA (code division multiple access) may be an attractive technique for wireless access to broadband services because of its multiple access simplicity and other appealing features. In order to investigate the traffic handling capabilities of a future network providing a variety of integrated services, this paper presents a study of a broadband indoor wireless network supporting high-speed traffic using CDMA multiple access. The results are obtained through simulation of an indoor environment and of the traffic capabilities of wireless access to broadband 155.5 Mb/s ATM-SONET networks using the mm-wave band. A distributed system architecture is employed and the system performance is measured in terms of call blocking probability and dropping probability. The impacts of base station density, traffic load, average holding time, and variable traffic sources on the system performance are examined. The improvement of system performance by implementing various techniques such as handoff, admission control, power control and sectorization is also investigated.
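For reference, call blocking probability in a loss system with a fixed number of channels can be computed with the classical Erlang B recursion. This is only a baseline; the paper's CDMA system is interference-limited and was evaluated by simulation, and the channel count and offered load below are illustrative.

def erlang_b(servers, offered_load_erlangs):
    """Blocking probability for an M/M/c/c loss system (Erlang B recursion)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load_erlangs * b / (n + offered_load_erlangs * b)
    return b

# Example: 20 simultaneous-call channels per base station, 15 Erlangs of offered traffic.
print(round(erlang_b(20, 15.0), 4))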
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture (Postprint)
2007-09-18
Subject terms: turbine engine control, engine health management, FADEC, Universal FADEC, Distributed Controls, UF, UF Platform, common FADEC, Generic FADEC, Modular FADEC, Adaptive Control. Eventually the Full Authority Digital Electronic Control (FADEC) became the norm. Presently, this control system architecture accounts for 15 to 20% of
NASA Technical Reports Server (NTRS)
Chien, E. S. K.; Marinho, J. A.; Russell, J. E., Sr.
1988-01-01
The Cellular Access Digital Network (CADN) is the access vehicle through which cellular technology is brought into the mainstream of the evolving integrated telecommunications network. Beyond the integrated end-to-end digital access and per call network services provisioning of the Integrated Services Digital Network (ISDN), the CADN engenders the added capability of mobility freedom via wireless access. One key element of the CADN network architecture is the standard user to network interface that is independent of RF transmission technology. Since the Mobile Satellite System (MSS) is envisioned to not only complement but also enhance the capabilities of the terrestrial cellular telecommunications network, compatibility and interoperability between terrestrial cellular and mobile satellite systems are vitally important to provide an integrated moving telecommunications network of the future. From a network standpoint, there exist very strong commonalities between the terrestrial cellular system and the mobile satellite system. Therefore, the MSS architecture should be designed as an integral part of the CADN. This paper describes the concept of the CADN, the functional architecture of the MSS, and the user-network interface signaling protocols.
Identifying the architecture of a supracellular actomyosin network that induces tissue folding
NASA Astrophysics Data System (ADS)
Yevick, Hannah; Stoop, Norbert; Dunkel, Jorn; Martin, Adam
During embryonic development, the establishment of correct tissue form ensures proper tissue function. Yet, how the thousands of cells within a tissue coordinate force production to sculpt tissue shape is poorly understood. One important tissue shape change is tissue folding where a cell sheet bends to form a closed tube. Drosophila (fruit fly) embryos undergo such a folding event, called ventral furrow formation. The ventral furrow is associated with a supracellular network of actin and myosin, where actin-myosin fibers assemble and connect between cells. It is not known how this tissue-wide network grows and connects over time, how reproducible it is between embryos, and what determines its architecture. Here, we used topological feature analysis to quantitatively and dynamically map the connections and architecture of this supracellular network across hundreds of cells in the folding tissue. We identified the importance of the cell unit in setting up the tissue-scale architecture of the network. Our mathematical framework allows us to explore stereotypic properties of the myosin network such that we can investigate the reproducibility of mechanical connections for a morphogenetic process. NIH F32.
Health Information Research Platform (HIReP)--an architecture pattern.
Schreiweis, Björn; Schneider, Gerd; Eichner, Theresia; Bergh, Björn; Heinze, Oliver
2014-01-01
Secondary use or single source is still far from routine in healthcare, although lots of data are available, either structured or unstructured. As data are stored in multiple systems, using them for biomedical research is difficult. Clinical data warehouses already help to overcome this issue, but currently they are only used for certain parts of biomedical research. A comprehensive research platform based on a generic architecture pattern could increase the benefits of existing data warehouses for both patient care and research by meeting two objectives: serving as a so-called single point of truth and acting as a mediator between patient care and research, strengthening interaction and close collaboration. Another effect is to reduce the boundaries for the implementation of data warehouses. Taking further settings into account, the architecture of a clinical data warehouse supporting patient care and biomedical research needs to be integrated with biomaterial banks and other sources. This work provides a solution by conceptualizing a comprehensive architecture pattern for a Health Information Research Platform (HIReP) derived from use cases of the patient care and biomedical research domains. It serves as a single IT infrastructure providing solutions for any type of use case.
Space Generic Open Avionics Architecture (SGOAA): Overview
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1992-01-01
A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.
Time domain passivity controller for 4-channel time-delay bilateral teleoperation.
Rebelo, Joao; Schiele, Andre
2015-01-01
This paper presents an extension of the time-domain passivity control approach to a four-channel bilateral controller under the effects of time delays. Time-domain passivity control has been used successfully to stabilize teleoperation systems with position-force and position-position controllers; however, the performance with such control architectures is sub-optimal both with and without time delays. This work extends the network representation of the time-domain passivity controller to the four-channel architecture, which provides perfect transparency to the user without time delay. The proposed architecture is based on modelling the controllers as dependent voltage sources and using only series passivity controllers. The obtained results are shown on a one degree-of-freedom setup and illustrate the stabilization behaviour of the proposed controller when time delay is present in the communication channel.
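The time-domain passivity idea itself is compact enough to sketch: observe the energy flowing through a network port and inject damping only when the observed energy goes negative. The snippet below shows the scalar, single-port version with a series passivity controller; the sample time, the toy signals, and the port/sign conventions are simplifications for illustration, not the paper's four-channel implementation.

import math

DT = 0.001  # controller sample time [s]

class PassivityObserverController:
    """Minimal series time-domain passivity observer/controller for one port.

    Tracks the energy flowing through the port and, whenever the observed
    energy goes negative (the port is generating energy), adds just enough
    damping to dissipate the deficit within one sample.
    """
    def __init__(self):
        self.energy = 0.0

    def step(self, force, velocity):
        self.energy += force * velocity * DT
        alpha = 0.0
        if self.energy < 0.0 and abs(velocity) > 1e-9:
            # Damping gain that cancels the energy deficit in one sample.
            alpha = -self.energy / (velocity ** 2 * DT)
            self.energy = 0.0
        return force + alpha * velocity  # modified force command

# Toy run: a force acting in the direction of motion (an active, energy-generating port).
poc = PassivityObserverController()
for k in range(5):
    t = k * DT
    v = math.sin(200 * t)
    f = -0.5 * v
    print(round(poc.step(f, v), 4))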
On-board processing satellite network architecture and control study
NASA Technical Reports Server (NTRS)
Campanella, S. Joseph; Pontano, Benjamin A.; Chalmers, Harvey
1987-01-01
The market for telecommunications services needs to be segmented into user classes having similar transmission requirements and hence similar network architectures. Use of the following transmission architecture was considered: satellite switched TDMA; TDMA up, TDM down; scanning (hopping) beam TDMA; FDMA up, TDM down; satellite switched MF/TDMA; and switching Hub earth stations with double hop transmission. A candidate network architecture will be selected that: comprises multiple access subnetworks optimized for each user; interconnects the subnetworks by means of a baseband processor; and optimizes the marriage of interconnection and access techniques. An overall network control architecture will be provided that will serve the needs of the baseband and satellite switched RF interconnected subnetworks. The results of the studies shall be used to identify elements of network architecture and control that require the greatest degree of technology development to realize an operational system. This will be specified in terms of: requirements of the enabling technology; difference from the current available technology; and estimate of the development requirements needed to achieve an operational system. The results obtained for each of these tasks are presented.
Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes
NASA Astrophysics Data System (ADS)
Huang, Shaoming
2003-06-01
An effective way to fabricate large area three-dimensional (3D) aligned CNTs pattern based on pyrolysis of iron(II) phthalocyanine (FePc) by two-step processes is reported. The controllable generation of different lengths and selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrate are the bases for generating such 3D aligned CNTs architectures. By controlling experimental conditions 3D aligned CNT arrays with different lengths/densities and morphologies/structures as well as multi-layered architectures can be fabricated in large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied for developing novel nanotube-based devices.
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component, and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture will be described comprehensively, its main building blocks will be discussed, and one realization built on an open-source real-time operating system will be presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications will be explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is just one example that will be described.
Control and Communication for a Secure and Reconfigurable Power Distribution System
NASA Astrophysics Data System (ADS)
Giacomoni, Anthony Michael
A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure, and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.
Abyssal BEnthic Laboratory (ABEL): a novel approach for long-term investigation at abyssal depths
NASA Astrophysics Data System (ADS)
Berta, M.; Gasparoni, F.; Capobianco, M.
1995-03-01
This study assesses the feasibility of a configuration for a benthic underwater system, called ABEL (Abyssal BEnthic Laboratory), capable of operating both under controlled and autonomous modes for periods of several months to over one year at abyssal depths of up to 6000 m. A network of stations, capable of different configurations, has been identified as satisfying the widest range of scientific expectations while at the same time addressing the technological challenge of increasing the feasibility of scientific investigations, even when the need is not yet well specified. The overall system consists of a central Benthic Investigation Laboratory, devoted to the execution of the most complex scientific activities, with fixed Satellite Stations acting as nodes of a measuring network and a Mobile Station extending ABEL capabilities with the possibility to carry out surveys over the investigation area and interventions on the fixed stations. The ABEL architecture also includes a dedicated deployment and recovery module, as well as sea-surface and land-based facilities. Such an installation constitutes the sea-floor equivalent of a meteorological or geophysical laboratory. Attention has been paid to selecting investigation tools that support the ABEL system in carrying out its mission with high operativity and minimal risk and environmental impact. This demands technologies that enable presence and operation at abyssal depths for the required period of time. Presence can be guaranteed by proper choice of power supply and communication systems. Operations require visual and manipulative capabilities, as well as deployment and retrieval capabilities. Advanced control system architectures must be considered, along with knowledge-based approaches, to comply with the requirements for autonomous control. The results of this investigation demonstrate the feasibility of the ABEL concept and the pre-dimensioning of its main components.
Optimal Propellant Maneuver Flight Demonstrations on ISS
NASA Technical Reports Server (NTRS)
Bhatt, Sagar; Bedrossian, Nazareth; Longacre, Kenneth; Nguyen, Louis
2013-01-01
In this paper, the first ever flight demonstrations of the Optimal Propellant Maneuver (OPM), a method of propulsive rotational state transition for spacecraft controlled using thrusters, are presented for the International Space Station (ISS). On August 1, 2012, two ISS reorientations of about 180 deg each were performed using OPMs. These maneuvers were in preparation for the same-day launch and rendezvous of a Progress vehicle, also a first for ISS visiting vehicles. The first maneuver used 9.7 kg of propellant, whereas the second used 10.2 kg. Identical maneuvers performed without using OPMs would have used approximately 151.1 kg and 150.9 kg respectively. The OPM method uses a pre-planned attitude command trajectory to accomplish a rotational state transition. The trajectory is designed to take advantage of the complete nonlinear system dynamics. The trajectory choice directly influences the cost of the maneuver, in this case, propellant. For example, while an eigenaxis maneuver is kinematically the shortest path between two orientations, following that path requires overcoming the nonlinear system dynamics, thereby increasing the cost of the maneuver. The eigenaxis path is used for ISS maneuvers using thrusters. By considering a longer angular path, the path dependence of the system dynamics can be exploited to reduce the cost. The benefits of OPM for the ISS include not only reduced lifetime propellant use, but also reduced loads, erosion, and contamination from thrusters due to fewer firings. Another advantage of the OPM is that it does not require ISS flight software modifications since it is a set of commands tailored to the specific attitude control architecture. The OPM takes advantage of the existing ISS control system architecture for propulsive rotation called USTO control mode. USTO was originally developed to provide ISS-Orbiter stack attitude control capability for a contingency tile-repair scenario, where the Orbiter is maneuvered using its robotic manipulator relative to the ISS. Since 2005 USTO has been used for nominal ISS operations.
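To make the eigenaxis notion concrete, the sketch below computes the single-axis rotation angle between two attitude quaternions; an eigenaxis slew rotates through exactly this angle, whereas an OPM deliberately follows a longer angular path to exploit the dynamics. The quaternion values are illustrative, not ISS attitude data.

import math

def quat_mul(q, r):
    # Hamilton product of two unit quaternions (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def eigenaxis_angle(q_from, q_to):
    """Rotation angle of the single-axis (eigenaxis) slew between two attitudes."""
    dq = quat_mul(quat_conj(q_from), q_to)
    w = max(-1.0, min(1.0, dq[0]))
    return 2.0 * math.acos(abs(w))   # abs() picks the short way around

# A 180 deg yaw relative to the starting attitude, roughly the size of the ISS slews above.
q_start = (1.0, 0.0, 0.0, 0.0)
q_end = (0.0, 0.0, 0.0, 1.0)
print(math.degrees(eigenaxis_angle(q_start, q_end)))  # -> 180.0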
2010-06-01
Corporation has conducted several comparative studies for SSTO and TSTO system options using Rockets and Airbreather cycles, for both Horizontal...compared as they have less impact on size due to generic uncertainties and tend to be more robust compared to SSTO options. As called for by and in
Expert Systems on Multiprocessor Architectures. Volume 4. Technical Reports
1991-06-01
Floated-Current-Time0 -> The time that this function is called in user time units, expressed as a floating point number. Halt-Poligono Arrests the...default a statistics file will be printed out, if it can be. To prevent this make No-Statistics true. Unhalt-Poligono Unarrests the process in which the
ERIC Educational Resources Information Center
Nunez Esquer, Gustavo; Sheremetov, Leonid
This paper reports on the results and future research work within the paradigm of Configurable Collaborative Distance Learning, called Espacios Virtuales de Aprendizaje (EVA). The paper focuses on: (1) description of the main concepts, including virtual learning spaces for knowledge, collaboration, consulting, and experimentation, a…
ERIC Educational Resources Information Center
Phung, Dan; Valetto, Giuseppe; Kaiser, Gail E.; Liu, Tiecheng; Kender, John R.
2007-01-01
The increasing popularity of online courses has highlighted the need for collaborative learning tools for student groups. In this article, we present an e-Learning architecture and adaptation model called AI2TV (Adaptive Interactive Internet Team Video), which allows groups of students to collaboratively view instructional videos in synchrony.…
Genetic Characterization of Allium Tuncelianum: An Endemic Edible Allium Species With Garlic Odor
USDA-ARS?s Scientific Manuscript database
A. tuncelianum is a species native to Eastern Anatolia. Its plant architecture resembles garlic (A. sativum), and it has a mild garlic odor and flavor. Because of these similarities, it has been locally called “garlic”. In addition, it has 16 chromosomes in its diploid genome, like garlic. ...
Books Matter: The Place of Traditional Books in Tomorrow's Library
ERIC Educational Resources Information Center
Megarrity, Lyndon
2010-01-01
People who love books can find entering an Australian library in the so-called "cyber-age" to be an unsettling experience. The first thing you notice is the reduced emphasis on book shelves in favour of empty but architecturally pleasing "public spaces", comfortable cushions, computer terminals, sometimes even new cafes and…
A Conversational Intelligent Tutoring System to Automatically Predict Learning Styles
ERIC Educational Resources Information Center
Latham, Annabel; Crockett, Keeley; McLean, David; Edmonds, Bruce
2012-01-01
This paper proposes a generic methodology and architecture for developing a novel conversational intelligent tutoring system (CITS) called Oscar that leads a tutoring conversation and dynamically predicts and adapts to a student's learning style. Oscar aims to mimic a human tutor by implicitly modelling the learning style during tutoring, and…
Integration of Wireless Technologies in Smart University Campus Environment: Framework Architecture
ERIC Educational Resources Information Center
Khamayseh, Yaser; Mardini, Wail; Aljawarneh, Shadi; Yassein, Muneer Bani
2015-01-01
In this paper, the authors are particularly interested in enhancing the education process by integrating new tools into teaching environments. This enhancement is part of an emerging concept called the smart campus. A smart university campus will bring about a new ubiquitous computing and communication field and change people's lives radically by…
Application of real-time cooperative editing in urban planning management system
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liu, Renyi; Liu, Nan; Bao, Weizheng
2007-06-01
With the increasing business requirements of urban planning bureaus, a co-editing function is urgently needed; however, conventional GIS do not support this. To overcome this limitation, a new kind of urban planning management system with co-editing capability is needed. Such a system, called PM2006, has been used in the Suzhou Urban Planning Bureau and is introduced in this paper. Four main issues of co-editing systems are discussed: consistency, responsiveness, data recoverability, and unconstrained operation, and resolutions for each are put forward. To resolve these problems, a data model called FGDB (File and ESRI Geodatabase), a mixed architecture of files and the ESRI Geodatabase, is introduced. The main components of the FGDB data model are the ESRI versioned Geodatabase and a replication architecture. With FGDB, the issues of client responsiveness, spatial data recoverability, and unconstrained operation are addressed. The last part of the paper presents MapServer, the co-editing map server module, whose main functions are operation serialization and spatial data replication between files and versioned data.
Design and evaluation of a trilateral shared-control architecture for teleoperated training robots.
Shamaei, Kamran; Kim, Lawrence H; Okamura, Allison M
2015-08-01
Multilateral teleoperated robots can be used to train humans to perform complex tasks that require collaborative interaction and expert supervision, such as laparoscopic surgical procedures. In this paper, we explain the design and performance evaluation of a shared-control architecture that can be used in trilateral teleoperated training robots. The architecture includes dominance and observation factors inspired by the determinants of motor learning in humans, including observational practice, focus of attention, feedback and augmented feedback, and self-controlled practice. Toward the validation of such an architecture, we (1) verify the stability of a trilateral system by applying Llewellyn's criterion on a two-port equivalent architecture, and (2) demonstrate that system transparency remains generally invariant across relevant observation factors and movement frequencies. In a preliminary experimental study, a dyad of two human users (one novice, one expert) collaborated on the control of a robot to follow a trajectory. The experiment showed that the framework can be used to modulate the efforts of the users and adjust the source and level of haptic feedback to the novice user.
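Step (1) of the validation relies on Llewellyn's absolute stability criterion, which can be checked directly from the two-port immittance parameters at each frequency. The helper below states the standard inequalities; the example parameter values are placeholders, not the trilateral system's actual parameters.

def llewellyn_absolutely_stable(p11, p12, p21, p22):
    """Llewellyn's absolute stability criterion for a two-port at one frequency.

    p_ij are the immittance (e.g., hybrid) parameters evaluated at that
    frequency. The two-port is absolutely stable if the port immittances have
    non-negative real parts and the coupling term satisfies the inequality below.
    """
    c1 = p11.real >= 0.0
    c2 = p22.real >= 0.0
    c3 = 2.0 * p11.real * p22.real >= abs(p12 * p21) + (p12 * p21).real
    return c1 and c2 and c3

# Placeholder parameters for a single frequency point (illustrative values only).
print(llewellyn_absolutely_stable(1.0 + 0.2j, 0.5j, -0.5j, 0.8 + 0.1j))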
An Architecture for SCADA Network Forensics
NASA Astrophysics Data System (ADS)
Kilpatrick, Tim; Gonzalez, Jesus; Chandia, Rodrigo; Papa, Mauricio; Shenoi, Sujeet
Supervisory control and data acquisition (SCADA) systems are widely used in industrial control and automation. Modern SCADA protocols often employ TCP/IP to transport sensor data and control signals. Meanwhile, corporate IT infrastructures are interconnecting with previously isolated SCADA networks. The use of TCP/IP as a carrier protocol and the interconnection of IT and SCADA networks raise serious security issues. This paper describes an architecture for SCADA network forensics. In addition to supporting forensic investigations of SCADA network incidents, the architecture incorporates mechanisms for monitoring process behavior, analyzing trends and optimizing plant performance.
Thermal Control System Automation Project (TCSAP)
NASA Technical Reports Server (NTRS)
Boyer, Roger L.
1991-01-01
Information is given in viewgraph form on the Space Station Freedom (SSF) Thermal Control System Automation Project (TCSAP). Topics covered include the assembly of the External Thermal Control System (ETCS); the ETCS functional schematic; the baseline Fault Detection, Isolation, and Recovery (FDIR), including the development of a knowledge based system (KBS) for application of rule based reasoning to the SSF ETCS; TCSAP software architecture; the High Fidelity Simulator architecture; the TCSAP Runtime Object Database (RODB) data flow; KBS functional architecture and logic flow; TCSAP growth and evolution; and TCSAP relationships.
An open architecture motion controller
NASA Technical Reports Server (NTRS)
Rossol, Lothar
1994-01-01
Nomad, an open architecture motion controller, is described. It is formed by a combination of TMOS, C-WORKS, and other utilities. Nomad software runs in a UNIX environment and provides for sensor-controlled robotic motions, with user replaceable kinematics. It can also be tailored for highly specialized applications. Open controllers such as Nomad should have a major impact on the robotics industry.
NASA Astrophysics Data System (ADS)
Saponara, M.; Tramutola, A.; Creten, P.; Hardy, J.; Philippe, C.
2013-08-01
Optimization-based control techniques such as Model Predictive Control (MPC) are considered extremely attractive for space rendezvous, proximity operations and capture applications that require high level of autonomy, optimal path planning and dynamic safety margins. Such control techniques require high-performance computational needs for solving large optimization problems. The development and implementation in a flight representative avionic architecture of a MPC based Guidance, Navigation and Control system has been investigated in the ESA R&T study “On-line Reconfiguration Control System and Avionics Architecture” (ORCSAT) of the Aurora programme. The paper presents the baseline HW and SW avionic architectures, and verification test results obtained with a customised RASTA spacecraft avionics development platform from Aeroflex Gaisler.
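As a minimal illustration of the receding-horizon idea behind MPC, the sketch below solves an unconstrained finite-horizon tracking problem for a one-axis double integrator by batch least squares and applies only the first control move. The model, weights, and horizon are illustrative; the ORCSAT GNC solves a far larger constrained optimization on flight-representative hardware.

import numpy as np

DT = 1.0        # [s] discretisation step
HORIZON = 10    # prediction horizon
R_WEIGHT = 0.1  # control-effort weight

# Double integrator along one axis: state x = [position, velocity], input u = acceleration.
A = np.array([[1.0, DT], [0.0, 1.0]])
B = np.array([[0.5 * DT**2], [DT]])

def mpc_step(x0, target):
    """Return the first control move of the horizon (receding-horizon principle)."""
    n, m = 2, 1
    # Batch prediction matrices: stacked predicted states X = F x0 + G U.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(HORIZON)])
    G = np.zeros((n * HORIZON, m * HORIZON))
    for i in range(HORIZON):
        for j in range(i + 1):
            G[n*i:n*(i+1), m*j:m*(j+1)] = np.linalg.matrix_power(A, i - j) @ B
    ref = np.tile(target, HORIZON)
    # Least-squares trade-off between tracking error and control effort.
    H = G.T @ G + R_WEIGHT * np.eye(m * HORIZON)
    U = np.linalg.solve(H, G.T @ (ref - F @ x0))
    return U[0]

x = np.array([100.0, 0.0])      # start 100 m from the target, at rest
target = np.array([0.0, 0.0])
for _ in range(5):
    u = mpc_step(x, target)
    x = A @ x + B.flatten() * u
    print(np.round(x, 2), round(float(u), 3))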
An Architecture for Controlling Multiple Robots
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance
2004-01-01
The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In the CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. The CAMPOUT provides basic mechanistic concepts for representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.
Parameter Estimation for a Hybrid Adaptive Flight Controller
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Nguyen, Nhan T.; Kaneshige, John; Krishnakumar, Kalmanje
2009-01-01
This paper expands on the hybrid control architecture developed at the NASA Ames Research Center by addressing issues related to indirect adaptation using the recursive least squares (RLS) algorithm. Specifically, the hybrid control architecture is an adaptive flight controller that features both direct and indirect adaptation techniques. This paper will focus almost exclusively on the modifications necessary to achieve quality indirect adaptive control. Additionally, this paper will present results that, using a full non-linear aircraft model, demonstrate the effectiveness of the hybrid control architecture given drastic changes in an aircraft's dynamics. Throughout the development of this topic, a thorough discussion of the RLS algorithm as a system identification technique will be provided along with results from seven well-known modifications to the popular RLS algorithm.
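As background for the indirect-adaptation discussion, here is a minimal recursive least squares estimator with a forgetting factor applied to a toy first-order model whose parameters change abruptly mid-run. The forgetting factor, noise level, and plant are illustrative choices, not values from the paper.

import numpy as np

class RecursiveLeastSquares:
    """RLS estimator with exponential forgetting (a common textbook form).

    Estimates theta in y = phi^T theta from streaming regressor/output pairs.
    """
    def __init__(self, n_params, forgetting=0.98, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = p0 * np.eye(n_params)
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        error = y - phi @ self.theta
        self.theta = self.theta + gain * error
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Identify y[k+1] = a*y[k] + b*u[k] while the true parameters change at k = 200.
rng = np.random.default_rng(1)
rls = RecursiveLeastSquares(2)
a, b, y = 0.9, 0.5, 0.0
for k in range(400):
    if k == 200:          # "drastic change in dynamics"
        a, b = 0.6, 1.2
    u = rng.standard_normal()
    y_next = a * y + b * u + 0.01 * rng.standard_normal()
    rls.update([y, u], y_next)
    y = y_next
print(np.round(rls.theta, 3))   # should be close to the post-change [a, b]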
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
The Integrated Airframe/Propulsion Control System Architecture program (IAPSA)
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Cohen, Gerald C.; Meissner, Charles W.
1990-01-01
The Integrated Airframe/Propulsion Control System Architecture program (IAPSA) is a two-phase program which was initiated by NASA in the early 80s. The first phase, IAPSA 1, studied different architectural approaches to the problem of integrating engine control systems with airframe control systems in an advanced tactical fighter. One of the conclusions of IAPSA 1 was that the technology to construct a suitable system was available, yet the ability to create these complex computer architectures has outpaced the ability to analyze the resulting system's performance. With this in mind, the second phase of IAPSA approached the same problem with the added constraint that the system be designed for validation. The intent of the design for validation requirement is that validation requirements should be shown to be achievable early in the design process. IAPSA 2 has demonstrated that despite diligent efforts, integrated systems can retain characteristics which are difficult to model and, therefore, difficult to validate.
Optical Multi-Gas Monitor Technology Demonstration on the International Space Station
NASA Technical Reports Server (NTRS)
Pilgrim, Jeffrey S.; Wood, William R.; Casias, Miguel E.; Vakhtin, Andrei B.; Johnson, Michael D.; Mudgett, Paul D.
2014-01-01
The International Space Station (ISS) employs a suite of portable and permanently located gas monitors to ensure crew health and safety. These sensors are tasked with functions ranging from fixed mass-spectrometer-based major constituents analysis to portable electrochemical-sensor-based combustion product monitoring. An all-optical multigas sensor is being developed that can provide the specificity of a mass spectrometer with the portability of an electrochemical cell. The technology, developed under the Small Business Innovation Research program, allows for an architecture that is rugged, compact and low power. A four-gas version called the Multi-Gas Monitor was launched to ISS in November 2013 aboard Soyuz and activated in February 2014. The portable instrument comprises a major constituents analyzer (water vapor, carbon dioxide, oxygen) and a high dynamic range real-time ammonia sensor. All species are sensed inside the same enhanced path length optical cell with a separate vertical cavity surface emitting laser (VCSEL) targeted at each species. The prototype is controlled digitally with a field-programmable gate array/microcontroller architecture. The optical and electronic approaches are designed for scalability, and future versions could add three important acid gases and carbon monoxide combustion product gases to the four species already sensed. Results obtained to date from the technology demonstration on ISS are presented and discussed.
Lesne, Annick; Bécavin, Christophe; Victor, Jean-Marc
2012-02-01
Allostery is a key concept of molecular biology which refers to the control of an enzyme activity by an effector molecule binding the enzyme at another site rather than the active site (allos = other in Greek). We revisit here allostery in the context of chromatin and argue that allosteric principles underlie and explain the functional architecture required for spacetime coordination of gene expression at all scales from DNA to the whole chromosome. We further suggest that this functional architecture is provided by the chromatin fiber itself. The structural, mechanical and topological features of the chromatin fiber endow chromosomes with a tunable signal transduction from specific (or nonspecific) effectors to specific (or nonspecific) active sites. Mechanical constraints can travel along the fiber all the better since the fiber is more compact and regular, which speaks in favor of the actual existence of the (so-called 30 nm) chromatin fiber. Chromatin fiber allostery reconciles both the physical and biochemical approaches of chromatin. We illustrate this view with two supporting specific examples. Moreover, from a methodological point of view, we suggest that the notion of chromatin fiber allostery is particularly relevant for systemic approaches. Finally we discuss the evolutionary power of allostery in the context of chromatin and its relation to modularity.
NASA Astrophysics Data System (ADS)
Lesne, Annick; Bécavin, Christophe; Victor, Jean–Marc
2012-02-01
Allostery is a key concept of molecular biology which refers to the control of an enzyme activity by an effector molecule binding the enzyme at another site rather than the active site (allos = other in Greek). We revisit here allostery in the context of chromatin and argue that allosteric principles underlie and explain the functional architecture required for spacetime coordination of gene expression at all scales from DNA to the whole chromosome. We further suggest that this functional architecture is provided by the chromatin fiber itself. The structural, mechanical and topological features of the chromatin fiber endow chromosomes with a tunable signal transduction from specific (or nonspecific) effectors to specific (or nonspecific) active sites. Mechanical constraints can travel along the fiber all the better since the fiber is more compact and regular, which speaks in favor of the actual existence of the (so-called 30 nm) chromatin fiber. Chromatin fiber allostery reconciles both the physical and biochemical approaches of chromatin. We illustrate this view with two supporting specific examples. Moreover, from a methodological point of view, we suggest that the notion of chromatin fiber allostery is particularly relevant for systemic approaches. Finally we discuss the evolutionary power of allostery in the context of chromatin and its relation to modularity.
Yue, Xiao; Wang, Huiju; Jin, Dawei; Li, Mingqiang; Jiang, Wei
2016-10-01
Healthcare data are a valuable source of healthcare intelligence. Sharing of healthcare data is one essential step toward making healthcare systems smarter and improving the quality of healthcare service. Healthcare data, a personal asset of the patient, should be owned and controlled by the patient, instead of being scattered across different healthcare systems, which prevents data sharing and puts patient privacy at risk. Blockchain has demonstrated in the financial field that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we propose an app architecture (called Healthcare Data Gateway (HGD)) based on blockchain to enable patients to own, control and share their own data easily and securely without violating privacy, which provides a new potential way to improve the intelligence of healthcare systems while keeping patient data private. Our proposed purpose-centric access model ensures patients own and control their healthcare data; a simple unified Indicator-Centric Schema (ICS) makes it possible to organize all kinds of personal healthcare data practically and easily. We also point out that secure multi-party computing (MPC) is one promising solution to enable an untrusted third party to conduct computation over patient data without violating privacy.
Rodríguez-Lera, Francisco J; Matellán-Olivera, Vicente; Conde-González, Miguel Á; Martín-Rico, Francisco
2018-05-01
Generation of autonomous behavior for robots is a general unsolved problem. Users perceive robots as repetitive tools that do not respond to dynamic situations. This research deals with the generation of natural behaviors in assistive service robots for dynamic domestic environments, in particular a motivation-oriented cognitive architecture for generating more natural behaviors in autonomous robots. The proposed architecture, called HiMoP, is based on three elements: a Hierarchy of needs to define robot drives; a set of Motivational variables connected to robot needs; and a Pool of finite-state machines to run robot behaviors. The first element is inspired by Alderfer's hierarchy of needs, which specifies the variables defined in the motivational component. The pool of finite-state machines implements the available robot actions, and those actions are dynamically selected taking into account the motivational variables and the external stimuli. Thus, the robot is able to exhibit different behaviors even under similar conditions. A customized version of the "Speech Recognition and Audio Detection Test," proposed by the RoboCup Federation, has been used to illustrate how the architecture works and how it dynamically adapts and activates robot behaviors taking into account internal variables and external stimuli.
NASA Astrophysics Data System (ADS)
Duda, James L.; Mulligan, Joseph; Valenti, James; Wenkel, Michael
2005-01-01
A key feature of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) is the Northrop Grumman Space Technology patent-pending innovative data routing and retrieval architecture called SafetyNet™. The SafetyNet™ ground system architecture for the National Polar-orbiting Operational Environmental Satellite System (NPOESS), combined with the Interface Data Processing Segment (IDPS), will together provide low data latency and high data availability to its customers. The NPOESS will cut the time between observation and delivery by a factor of four when compared with today's space-based weather systems, the Defense Meteorological Satellite Program (DMSP) and NOAA's Polar-orbiting Operational Environmental Satellites (POES). SafetyNet™ will be a key element of the NPOESS architecture, delivering near real-time data over commercial telecommunications networks. Scattered around the globe, the 15 unmanned ground receptors are linked by fiber-optic systems to four central data processing centers in the U.S. known as Weather Centrals. The National Environmental Satellite, Data and Information Service; the Air Force Weather Agency; the Fleet Numerical Meteorology and Oceanography Center; and the Naval Oceanographic Office operate the Centrals. In addition, this ground system architecture will have unused capacity attendant with an infrastructure that can accommodate additional users.
Efficient system interrupt concept design at the microprogramming level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fakharzadeh, M.M.
1989-01-01
Over the past decade the demand for high speed super microcomputers has tremendously increased. To satisfy this demand many high speed 32-bit microcomputers have been designed. However, the currently available 32-bit systems do not provide an adequate solution to many highly demanding problems, such as multitasking and interrupt-driven applications, which both require context switching. Systems for these purposes usually incorporate sophisticated software. In order to be efficient, a high-end microprocessor based system must satisfy stringent software demands. Although these microprocessors use the latest technology in fabrication design and run at a very high speed, they still suffer from insufficient hardware support for such applications. All too often, this lack is also the premier cause of execution inefficiency. In this dissertation a micro-programmable control unit and operation unit are considered in an advanced design. An automaton controller is designed for high speed micro-level interrupt handling. Different stack models are designed for the single-task and multitasking environments. The stacks are used for storage of various components of the processor during interrupt calls, procedure calls, and task switching. A universal (for example, seven-port) register file is designed for high speed parameter passing and intertask communication in the multitasking environment. In addition, the register file provides a direct path between the ALU and the peripheral data, which is important in real-time control applications. The overall system is a highly parallel architecture, with no pipeline and internal cache memory, which allows the designer to predict the processor's behavior during critical times.
Architecture and Implementation of OpenPET Firmware and Embedded Software
Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng
2016-01-01
OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules, not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method is adopted for the platform's hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data are formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from the eight Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or the runtime mode selected. They are then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034
Design of virtual display and testing system for moving mass electromechanical actuator
NASA Astrophysics Data System (ADS)
Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng
2015-12-01
To address the control, measurement, and virtual display of the movement of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on a PC-DAQ architecture and the LabVIEW software platform. It accomplishes comprehensive test tasks such as drive control of the MMEA, measurement of kinematic parameters, measurement of centroid position, and virtual display of movement. The system solves the alignment of acquisition times between measurement channels on different DAQ cards. On this basis, the research focused on dynamic 3D virtual display in LabVIEW, realized both by calling a DLL and by using 3D graph drawing controls. Considering the collaboration with the virtual testing system, including the hardware driver and the data acquisition software, the 3D graph drawing controls method was selected, which achieves synchronized measurement, control, and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface has a realistic effect and smooth motion, which solves the problem of displaying and playing back the behavior of the MMEA inside its closed shell.
Applications of Payload Directed Flight
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Fladeland, Matthew M.; Yeh, Yoo Hsiu
2009-01-01
Next generation aviation flight control concepts require autonomous and intelligent control system architectures that close control loops directly around payload sensors in a manner more integrated and cohesive than in traditional autopilot designs. Research into payload directed flight control at NASA Ames Research Center is investigating new and novel architectures that can satisfy the requirements for next generation control and automation concepts for aviation. Tighter integration between sensor and machine requires definition of specific sensor-directed control modes to tie the sensor data directly into the vehicle control structures throughout the entire control architecture, from low-level stability and control loops to higher level mission planning and scheduling reasoning systems. Payload directed flight systems can thus provide guidance, navigation, and control for vehicle platforms hosting a suite of onboard payload sensors. This paper outlines related research into the field of payload directed flight, and outlines requirements and operating concepts for payload directed flight systems based on identified needs from the scientific literature.
van der Wal, Jaap
2009-01-01
The architecture of the connective tissue, including structures such as fasciae, sheaths, and membranes, is more important for understanding functional meaning than is more traditional anatomy, whose anatomical dissection method neglects and denies the continuity of the connective tissue as integrating matrix of the body. The connective tissue anatomy and architecture exhibits two functional tendencies that are present in all areas of the body in different ways and relationships. In body cavities, the “disconnecting” quality of shaping space enables mobility; between organs and body parts, the “connecting” dimension enables functional mechanical interactions. In the musculoskeletal system, those two features of the connective tissue are also present. They cannot be found by the usual analytic dissection procedures. An architectural description is necessary. This article uses such a methodologic approach and gives such a description for the lateral elbow region. The result is an alternative architectural view of the anatomic substrate involved in the transmission and conveyance of forces over synovial joints. An architectural description of the muscular and connective tissue organized in series with each other to enable the transmission of forces over these dynamic entities is more appropriate than is the classical concept of “passive” force-guiding structures such as ligaments organized in parallel to actively force-transmitting structures such as muscles with tendons. The discrimination between so-called joint receptors and muscle receptors is an artificial distinction when function is considered. Mechanoreceptors, also the so-called muscle receptors, are arranged in the context of force circumstances—that is, of the architecture of muscle and connective tissue rather than of the classical anatomic structures such as muscle, capsules, and ligaments. In the lateral cubital region of the rat, a spectrum of mechanosensitive substrate occurs at the transitional areas between regular dense connective tissue layers and the muscle fascicles organized in series with them. This substrate exhibits features of type and location of the mechanosensitive nerve terminals that usually are considered characteristic for “joint receptors” as well as for “muscle receptors.” The receptors for proprioception are concentrated in those areas where tensile stresses are conveyed over the elbow joint. Structures cannot be divided into either joint receptors or muscle receptors when muscular and collagenous connective tissue structures function in series to maintain joint integrity and stability. In vivo, those connective tissue structures are strained during movements of the skeletal parts, those movements in turn being induced and led by tension in muscular tissue. In principle, because of the architecture, receptors can also be stimulated by changes in muscle tension without skeletal movement, or by skeletal movement without change in muscle tension. A mutual relationship exists between structure (and function) of the mechanoreceptors and the architecture of the muscular and regular dense connective tissue. Both are instrumental in the coding of proprioceptive information to the central nervous system. PMID:21589740
Integration of Sensors, Controllers and Instruments Using a Novel OPC Architecture
2017-01-01
The interconnection between sensors, controllers and instruments through a communication network plays a vital role in the performance and effectiveness of a control system. Since its inception in the 90s, the Object Linking and Embedding for Process Control (OPC) protocol has provided open connectivity for monitoring and automation systems. It has been widely used in several environments such as industrial facilities, building and energy automation, engineering education and many others. This paper presents a novel OPC-based architecture to implement automation systems devoted to R&D and educational activities. The proposal is a novel conceptual framework, structured into four functional layers where the diverse components are categorized aiming to foster the systematic design and implementation of automation systems involving OPC communication. Due to the benefits of OPC, the proposed architecture provides features like open connectivity, reliability, scalability, and flexibility. Furthermore, four successful experimental applications of such an architecture, developed at the University of Extremadura (UEX), are reported. These cases are a proof of concept of the ability of this architecture to support interoperability for different domains. Namely, the automation of energy systems like a smart microgrid and photobioreactor facilities, the implementation of a network-accessible industrial laboratory and the development of an educational hardware-in-the-loop platform are described. All cases include a Programmable Logic Controller (PLC) to automate and control the plant behavior, which exchanges operative data (measurements and signals) with a multiplicity of sensors, instruments and supervisory systems under the structure of the novel OPC architecture. Finally, the main conclusions and open research directions are highlighted. PMID:28654002
Integration of Sensors, Controllers and Instruments Using a Novel OPC Architecture.
González, Isaías; Calderón, Antonio José; Barragán, Antonio Javier; Andújar, José Manuel
2017-06-27
The interconnection between sensors, controllers and instruments through a communication network plays a vital role in the performance and effectiveness of a control system. Since its inception in the 90s, the Object Linking and Embedding for Process Control (OPC) protocol has provided open connectivity for monitoring and automation systems. It has been widely used in several environments such as industrial facilities, building and energy automation, engineering education and many others. This paper presents a novel OPC-based architecture to implement automation systems devoted to R&D and educational activities. The proposal is a novel conceptual framework, structured into four functional layers where the diverse components are categorized aiming to foster the systematic design and implementation of automation systems involving OPC communication. Due to the benefits of OPC, the proposed architecture provides features like open connectivity, reliability, scalability, and flexibility. Furthermore, four successful experimental applications of such an architecture, developed at the University of Extremadura (UEX), are reported. These cases are a proof of concept of the ability of this architecture to support interoperability for different domains. Namely, the automation of energy systems like a smart microgrid and photobioreactor facilities, the implementation of a network-accessible industrial laboratory and the development of an educational hardware-in-the-loop platform are described. All cases include a Programmable Logic Controller (PLC) to automate and control the plant behavior, which exchanges operative data (measurements and signals) with a multiplicity of sensors, instruments and supervisory systems under the structure of the novel OPC architecture. Finally, the main conclusions and open research directions are highlighted.
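At the device boundary, the layered framework ultimately reduces to clients reading and writing tags exposed by an OPC server. As a hedged illustration only: the paper concerns classic OPC, whereas the sketch below uses OPC UA through the python-opcua package as a stand-in, and the endpoint URL and node identifier are invented, not taken from the paper.

```python
# Minimal sketch of polling one PLC tag through an OPC UA server.
# Assumptions: a reachable server at opc.tcp://localhost:4840 and a node id
# "ns=2;s=Plant.Temperature"; both are illustrative, not from the paper.
import time
from opcua import Client  # python-opcua; classic OPC DA would need a COM bridge instead

def poll_tag(endpoint="opc.tcp://localhost:4840", node_id="ns=2;s=Plant.Temperature"):
    client = Client(endpoint)
    client.connect()
    try:
        node = client.get_node(node_id)
        for _ in range(5):
            print("value:", node.get_value())  # read the current tag value
            time.sleep(1.0)                    # simple 1 Hz polling loop
    finally:
        client.disconnect()

if __name__ == "__main__":
    poll_tag()
```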
Lewis, Richard L; Shvartsman, Michael; Singh, Satinder
2013-07-01
We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy, ; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.
1994-01-01
Expanding man's presence in space requires capable, dexterous robots that can be controlled from Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control where the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operation, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
A flexible architecture for advanced process control solutions
NASA Astrophysics Data System (ADS)
Faron, Kamyar; Iourovitski, Ilia
2005-05-01
Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and a "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single variable solutions to sophisticated and cutting edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost-conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost-prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy the wide array of requirements from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions and most in-house developed solutions lack important attributes of scalability, flexibility, and adaptability and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned and delayed due to legacy systems and inadequate architectural design. Recent advancements (Service Oriented Architectures) in the software industry have delivered ideal technologies for delivering scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we shall evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture to satisfy these requirements. Blue Control Technologies has developed an advanced service-oriented-architecture run-to-run control system which addresses these requirements.
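Run-to-run control is one of the simplest APC algorithms such an architecture must host: an exponentially weighted moving average (EWMA) of the observed process offset trims the recipe between runs. The sketch below is a generic illustration under invented targets, gains, and a pretend plant; it is not the controller described in the paper.

```python
# Generic EWMA run-to-run controller sketch (single input, single output).
# The process model y = slope * u + offset and all numbers are illustrative.
class EwmaR2R:
    def __init__(self, target, slope, lam=0.3, offset0=0.0):
        self.target = target   # desired process output (e.g., film thickness)
        self.slope = slope     # assumed process gain
        self.lam = lam         # EWMA forgetting factor, 0 < lam <= 1
        self.offset = offset0  # current estimate of the process offset

    def recipe(self):
        # Choose the input that would hit the target under the current model.
        return (self.target - self.offset) / self.slope

    def update(self, u, y):
        # Blend the newly observed offset into the EWMA estimate.
        observed_offset = y - self.slope * u
        self.offset = self.lam * observed_offset + (1 - self.lam) * self.offset

ctl = EwmaR2R(target=100.0, slope=2.0)
for run in range(5):
    u = ctl.recipe()
    y = 2.0 * u + 3.0          # pretend plant with a +3.0 disturbance
    ctl.update(u, y)
    print(f"run {run}: input={u:.2f} output={y:.2f}")
```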
Benchmarking hardware architecture candidates for the NFIRAOS real-time controller
NASA Astrophysics Data System (ADS)
Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre
2014-07-01
As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
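The dominant cost in such an RTC cycle is a large matrix-vector multiply turning wavefront measurements into actuator commands, so per-cycle timing statistics (mean and worst case) are the figures of merit. A rough way to reproduce this style of benchmark on commodity hardware is sketched below; the matrix dimensions and BLAS backend are placeholders, not the NFIRAOS configuration.

```python
# Rough latency benchmark for one AO reconstruction step: c = R @ s.
# Matrix dimensions are placeholders; NFIRAOS uses different, larger sizes.
import time
import numpy as np

n_slopes, n_actuators = 10000, 5000
R = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # reconstructor
s = np.random.rand(n_slopes).astype(np.float32)               # slope vector

times = []
for _ in range(200):
    t0 = time.perf_counter()
    c = R @ s                   # the per-cycle matrix-vector multiply
    times.append(time.perf_counter() - t0)

times_us = np.array(times) * 1e6
print(f"mean {times_us.mean():.0f} us, worst {times_us.max():.0f} us")
```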
ERIC Educational Resources Information Center
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…
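The core mechanism borrowed from sequential sampling models is a race between noisy evidence accumulators, the first to reach threshold determining both the response and its latency. The sketch below is a minimal, generic accumulator race with arbitrarily chosen drift rates, noise, and threshold; it is not RACE/A itself.

```python
# Minimal race between two noisy evidence accumulators.
# Drift rates, noise level, threshold and time step are illustrative only.
import random

def race_trial(drifts=(0.8, 0.6), noise=0.5, threshold=10.0, dt=0.01):
    evidence = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, v in enumerate(drifts):
            evidence[i] += v * dt + random.gauss(0.0, noise) * dt ** 0.5
            if evidence[i] >= threshold:
                return i, t        # winning accumulator and its latency

winners = [race_trial()[0] for _ in range(1000)]
print("choice proportions:", winners.count(0) / 1000, winners.count(1) / 1000)
```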
Development of Onboard Computer Complex for Russian Segment of ISS
NASA Technical Reports Server (NTRS)
Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.
1998-01-01
This report presents a description of the Onboard Computer Complex (CC) that was developed during the period of 1994-1998 for the Russian Segment of ISS. The system was developed in co-operation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of primary elements of this system are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on software simulators and on verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on ISS.
Structure of the Repulsive Guidance Molecule (RGM)—Neogenin Signaling Hub
Bell, Christian H.; Bishop, Benjamin; Tang, Chenxiang; Gilbert, Robert J.C.; Aricescu, A. Radu; Pasterkamp, R. Jeroen; Siebold, Christian
2016-01-01
Repulsive guidance molecule family members (RGMs) control fundamental and diverse cellular processes, including motility and adhesion, immune cell regulation, and systemic iron metabolism. However, it is not known how RGMs initiate signaling through their common cell-surface receptor, neogenin (NEO1). Here, we present crystal structures of the NEO1 RGM-binding region and its complex with human RGMB (also called dragon). The RGMB structure reveals a previously unknown protein fold and a functionally important autocatalytic cleavage mechanism and provides a framework to explain numerous disease-linked mutations in RGMs. In the complex, two RGMB ectodomains conformationally stabilize the juxtamembrane regions of two NEO1 receptors in a pH-dependent manner. We demonstrate that all RGM-NEO1 complexes share this architecture, which therefore represents the core of multiple signaling pathways. PMID:23744777
Concentric transmon qubit featuring fast tunability and an anisotropic magnetic dipole moment
NASA Astrophysics Data System (ADS)
Braumüller, Jochen; Sandberg, Martin; Vissers, Michael R.; Schneider, Andre; Schlör, Steffen; Grünhaupt, Lukas; Rotzinger, Hannes; Marthaler, Michael; Lukashenko, Alexander; Dieter, Amadeus; Ustinov, Alexey V.; Weides, Martin; Pappas, David P.
2016-01-01
We present a planar qubit design based on a superconducting circuit that we call concentric transmon. While employing a straightforward fabrication process using Al evaporation and lift-off lithography, we observe qubit lifetimes and coherence times on the order of 10 μs. We systematically characterize loss channels such as incoherent dielectric loss, Purcell decay and radiative losses. The implementation of a gradiometric SQUID loop allows for a fast tuning of the qubit transition frequency and therefore for full tomographic control of the quantum circuit. Due to the large loop size, the presented qubit architecture features a strongly increased magnetic dipole moment as compared to conventional transmon designs. This renders the concentric transmon a promising candidate to establish a site-selective passive direct Ẑ coupling between neighboring qubits, which is a pending quest in the field of quantum simulation.
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance the Autonomous Deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly in a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) Autonomous Rendezvous and Docking (AR&D) scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use this technique, which is flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
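"Optical flow" here means tracking image features from frame to frame and inferring motion from their displacement. As a hedged illustration of the feature-tracking front end only (using OpenCV's pyramidal Lucas-Kanade tracker and made-up file names; the project's actual pipeline is not described at this level of detail in the abstract):

```python
# Sparse optical flow between two frames with pyramidal Lucas-Kanade (OpenCV).
# File names are placeholders; this is only the feature-tracking front end,
# not the full visual-odometry pipeline referenced in the abstract.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Pick corners worth tracking in the first frame.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Track them into the second frame.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

good_old = p0[status.ravel() == 1].reshape(-1, 2)
good_new = p1[status.ravel() == 1].reshape(-1, 2)
flow = good_new - good_old
print("median flow (pixels):", np.median(flow, axis=0))
```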
Nanoscale Engineering of Designer Cellulosomes.
Gunnoo, Melissabye; Cazade, Pierre-André; Galera-Prat, Albert; Nash, Michael A; Czjzek, Mirjam; Cieplak, Marek; Alvarez, Beatriz; Aguilar, Marina; Karpol, Alon; Gaub, Hermann; Carrión-Vázquez, Mariano; Bayer, Edward A; Thompson, Damien
2016-07-01
Biocatalysts showcase the upper limit obtainable for high-speed molecular processing and transformation. Efforts to engineer functionality in synthetic nanostructured materials are guided by the increasing knowledge of evolving architectures, which enable controlled molecular motion and precise molecular recognition. The cellulosome is a biological nanomachine, which, as a fundamental component of the plant-digestion machinery from bacterial cells, has a key potential role in the successful development of environmentally-friendly processes to produce biofuels and fine chemicals from the breakdown of biomass waste. Here, the progress toward so-called "designer cellulosomes", which provide an elegant alternative to enzyme cocktails for lignocellulose breakdown, is reviewed. Particular attention is paid to rational design via computational modeling coupled with nanoscale characterization and engineering tools. Remaining challenges and potential routes to industrial application are put forward. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies or cities in Central Java Province are very vulnerable to landslide disasters. This is also supported by data from the Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia) 2013, which suggest that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and to map rainfall distribution. We propose our GIS architectural application of “Multiplatform Architectural Spatiotemporal” and data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” as our significant scientific contribution. This research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conduct our research in 19 subdistricts in the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methodologies of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed as a GIS application framework of rainfall distribution for various applied fields. The comparison between the TES and IDW methods shows that, relative to time series prediction, spatial interpolation yields values that are closer to the actual data. Spatial interpolation is closer to the actual data because the computed values are the rainfall data of the nearest locations, that is, the neighbours of the sample values. However, the IDW method's main weakness is that some areas might exhibit a rainfall value of 0. A value of 0 in the spatial interpolation is mainly caused by the absence of rainfall data at the nearest sample point, or by a distance so large that it produces only a very small weight.
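Both methods in the pipeline are standard. As a generic illustration of the spatial half, inverse distance weighting estimates rainfall at an unsampled point as a distance-weighted average of the station values; the coordinates and values below are invented, and the power parameter 2 is a common default rather than necessarily the paper's choice.

```python
# Generic inverse distance weighting (IDW) sketch for one prediction point.
# Station coordinates, rainfall values, and the power parameter are illustrative.
def idw(stations, values, x, y, power=2.0):
    num, den = 0.0, 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return v                  # point coincides with a station
        w = 1.0 / d2 ** (power / 2)   # weight decays with distance**power
        num += w * v
        den += w
    return num / den

stations = [(110.6, -7.5), (110.7, -7.4), (110.8, -7.6)]
rain_mm = [120.0, 95.0, 140.0]
print(round(idw(stations, rain_mm, 110.72, -7.48), 1))
```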
Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi
2016-08-24
Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing.
Evaluation of an Outer Loop Retrofit Architecture for Intelligent Turbofan Engine Thrust Control
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Sowers, T. Shane
2006-01-01
The thrust control capability of a retrofit architecture for intelligent turbofan engine control and diagnostics is evaluated. The focus of the study is on the portion of the hierarchical architecture that performs thrust estimation and outer loop thrust control. The inner loop controls fan speed so the outer loop automatically adjusts the engine's fan speed command to maintain thrust at the desired level, based on pilot input, even as the engine deteriorates with use. The thrust estimation accuracy is assessed under nominal and deteriorated conditions at multiple operating points, and the closed loop thrust control performance is studied, all in a complex real-time nonlinear turbofan engine simulation test bed. The estimation capability, thrust response, and robustness to uncertainty in the form of engine degradation are evaluated.
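The outer loop described here is essentially a slow trim: estimated thrust is compared with the demanded thrust, and the fan-speed command sent to the existing inner loop is nudged to close the gap. Below is a minimal integral-trim sketch under invented gains and an invented static "engine"; the real architecture uses a model-based thrust estimator and a full nonlinear engine simulation.

```python
# Outer-loop integral trim of the fan-speed command to hold thrust.
# Gains, the static "engine" map, and the deterioration factor are invented.
def engine_thrust(fan_speed_cmd, deterioration=-0.05):
    # Pretend engine: thrust roughly proportional to fan speed, degraded by wear.
    return 1.0 * fan_speed_cmd * (1.0 + deterioration)

def outer_loop(thrust_demand, cycles=50, ki=0.5, dt=0.1):
    fan_cmd = thrust_demand / 1.0          # nominal command from a static map
    for _ in range(cycles):
        thrust_est = engine_thrust(fan_cmd)  # stands in for the thrust estimator
        error = thrust_demand - thrust_est
        fan_cmd += ki * error * dt           # integral action removes steady error
    return fan_cmd, thrust_est

cmd, thrust = outer_loop(thrust_demand=100.0)
print(f"final fan-speed command {cmd:.1f}, thrust estimate {thrust:.1f}")
```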
Software control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Nelson, Michael L.; DeAnda, Juan R.; Fox, Richard K.; Meng, Xiannong
1999-07-01
The Strategic-Tactical-Execution Software Control Architecture (STESCA) is a tri-level approach to controlling autonomous vehicles. Using an object-oriented approach, STESCA has been developed as a generalization of the Rational Behavior Model (RBM). STESCA was initially implemented for the Phoenix Autonomous Underwater Vehicle (Naval Postgraduate School -- Monterey, CA), and is currently being implemented for the Pioneer AT land-based wheeled vehicle. The goals of STESCA are twofold. First is to create a generic framework to simplify the process of creating a software control architecture for autonomous vehicles of any type. Second is to allow mission specification by 'anyone' with minimal training to control the overall vehicle functionality. This paper describes the prototype implementation of STESCA for the Pioneer AT.
VASSAR: Value assessment of system architectures using rules
NASA Astrophysics Data System (ADS)
Selva, D.; Crawley, E. F.
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best fit for the evaluation of the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of quantitative and semi-qualitative data, and of objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures under the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern matching process where capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
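The pattern-matching view of merit can be made concrete with a toy rule base: each requirement is a predicate over the capabilities an enumerated architecture provides, and the score is the weighted fraction of satisfied requirements. The rules, weights, and capability names below are invented for illustration; VASSAR itself runs on a rule-based expert system engine rather than plain Python.

```python
# Toy capability-vs-requirement matching in the spirit of a rule-based scorer.
# Requirement predicates, weights, and capability names are all illustrative.
requirements = [
    ("global coverage", 2.0, lambda c: c.get("revisit_days", 99) <= 3),
    ("fine resolution", 1.0, lambda c: c.get("resolution_m", 1e9) <= 30),
    ("data latency",    1.0, lambda c: c.get("downlink_hours", 1e9) <= 6),
]

def score(capabilities):
    total = sum(w for _, w, _ in requirements)
    earned = sum(w for _, w, rule in requirements if rule(capabilities))
    return earned / total

arch_a = {"revisit_days": 2, "resolution_m": 50, "downlink_hours": 4}
arch_b = {"revisit_days": 5, "resolution_m": 10, "downlink_hours": 2}
print("A:", score(arch_a), "B:", score(arch_b))
```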
Robust Software Architecture for Robots
NASA Technical Reports Server (NTRS)
Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael
2009-01-01
Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.
Centralized and distributed control architectures under Foundation Fieldbus network.
Persechini, Maria Auxiliadora Muanis; Jota, Fábio Gonçalves
2013-01-01
This paper aims at discussing possible automation and control system architectures based on fieldbus networks in which the controllers can be implemented either in a centralized or in a distributed form. An experimental setup is used to demonstrate some of the addressed issues. The control and automation architecture is composed of a supervisory system, a programmable logic controller and various other devices connected to a Foundation Fieldbus H1 network. The procedures used in the network configuration, in the process modelling and in the design and implementation of controllers are described. The specificities of each one of the considered logical organizations are also discussed. Finally, experimental results are analysed using an algorithm for the assessment of control loops to compare the performances between the centralized and the distributed implementations. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
CPG-inspired workspace trajectory generation and adaptive locomotion control for quadruped robots.
Liu, Chengju; Chen, Qijun; Wang, Danwei
2011-06-01
This paper deals with the locomotion control of quadruped robots inspired by the biological concept of central pattern generator (CPG). A control architecture is proposed with a 3-D workspace trajectory generator and a motion engine. The workspace trajectory generator generates adaptive workspace trajectories based on CPGs, and the motion engine realizes joint motion inputs. The proposed architecture is able to generate adaptive workspace trajectories online by tuning the parameters of the CPG network to adapt to various terrains. With feedback information, a quadruped robot can walk through various terrains with adaptive joint control signals. A quadruped platform AIBO is used to validate the proposed locomotion control system. The experimental results confirm the effectiveness of the proposed control architecture. A comparison by experiments shows the superiority of the proposed method against the traditional CPG-joint-space control method.
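A common way to realize the CPG primitive behind such trajectory generators is a limit-cycle oscillator, for example a Hopf oscillator, whose amplitude and frequency parameters can be retuned online to reshape the workspace trajectory. The sketch below is one generic oscillator with invented parameters, not the paper's coupled CPG network.

```python
# One Hopf oscillator integrated with explicit Euler; its limit cycle is a
# circle of radius sqrt(mu) traversed at angular frequency omega.
# Parameters and step size are illustrative, not taken from the paper.
import math

def hopf_step(x, y, mu=1.0, omega=2.0 * math.pi, dt=0.001):
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

x, y = 0.8, 0.0
trajectory = []
for _ in range(3000):               # three seconds of a 1 Hz gait signal
    x, y = hopf_step(x, y)
    trajectory.append(x)
# Peak over the last second should approach sqrt(mu) = 1.
print("peak amplitude after convergence:", round(max(trajectory[2000:]), 3))
```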
Layered Architectures for Quantum Computers and Quantum Repeaters
NASA Astrophysics Data System (ADS)
Jones, Nathan C.
This chapter examines how to organize quantum computers and repeaters using a systematic framework known as layered architecture, where machine control is organized in layers associated with specialized tasks. The framework is flexible and could be used for analysis and comparison of quantum information systems. To demonstrate the design principles in practice, we develop architectures for quantum computers and quantum repeaters based on optically controlled quantum dots, showing how a myriad of technologies must operate synchronously to achieve fault-tolerance. Optical control makes information processing in this system very fast, scalable to large problem sizes, and extendable to quantum communication.
A candidate architecture for monitoring and control in chemical transfer propulsion systems
NASA Technical Reports Server (NTRS)
Binder, Michael P.; Millis, Marc G.
1990-01-01
To support the exploration of space, a reusable space-based rocket engine must be developed. This engine must sustain superior operability and man-rated levels of reliability over several missions with limited maintenance or inspection between flights. To meet these requirements, an expander cycle engine incorporating a highly capable control and health monitoring system is planned. Alternatives for the functional organization and the implementation architecture of the engine's monitoring and control system are discussed. On the basis of this discussion, a decentralized architecture is favored. The trade-offs between several implementation options are outlined and future work is proposed.
A knowledge-base generating hierarchical fuzzy-neural controller.
Kandadai, R M; Tien, J M
1997-01-01
We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
Cognitive architectures and autonomy: Commentary and Response
NASA Astrophysics Data System (ADS)
2012-11-01
Editors: Włodzisław Duch / Ah-Hwee Tan / Stan Franklin
Autonomy for AGI (Cristiano Castelfranchi, p. 31)
Are Disembodied Agents Really Autonomous? (Antonio Chella, p. 33)
The Perception-…-Action Cycle Cognitive Architecture and Autonomy: the View from the Brain (Vassilis Cutsuridis, p. 36)
Autonomy Requires Creativity and Meta-Learning (Włodzisław Duch, p. 39)
Meta Learning, Change of Internal Workings, and LIDA (Ryan McCall / Stan Franklin, p. 42)
An Appeal for Declaring Research Goals (Brandon Rohrer, p. 45)
The Development of Cognition as the Basis for Autonomy (Frank van der Velde, p. 47)
Autonomy and Intelligence (Pei Wang, p. 49)
Autonomy, Isolation, and Collective Intelligence (Nikolaos Mavridis, p. 51)
Response to Comments (Kristinn R. Thórisson / Helgi Páll Helgasson, p. 56)
Timeline-Based Mission Operations Architecture: An Overview
NASA Technical Reports Server (NTRS)
Chung, Seung H.; Bindschadler, Duane L.
2012-01-01
Some of the challenges in developing a mission operations system and operating a mission can be traced back to the challenge of integrating a mission operations system from its many components and to the challenge of maintaining consistent and accountable information throughout the operations processes. An important contributing factor to both of these challenges is the file-centric nature of today's systems. In this paper, we provide an overview of these challenges and argue the need to move toward an information-centric mission operations system. We propose an information representation called Timeline as an approach to enable such a move, and we provide an overview of a Timeline-based Mission Operations System architecture.
An evolutionary algorithm that constructs recurrent neural networks.
Angeline, P J; Saunders, G M; Pollack, J B
1994-01-01
Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.
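Evolutionary programming of this kind relies on mutation alone: a parent network is copied, its weights are perturbed, and occasionally links are added or removed, with selection on task fitness. The sketch below is a tiny, generic mutation step over a weight-dictionary representation; the representation and rates are invented, and GNARL's actual operators are temperature-scaled and more elaborate.

```python
# Generic structural + parametric mutation of a small recurrent network,
# represented as a dict of (source, target) -> weight. Rates are illustrative.
import random

def mutate(connections, n_nodes, weight_sigma=0.1, p_add=0.1, p_del=0.1):
    child = dict(connections)
    for edge in child:                            # perturb every weight
        child[edge] += random.gauss(0.0, weight_sigma)
    if random.random() < p_add:                   # occasionally add a link
        edge = (random.randrange(n_nodes), random.randrange(n_nodes))
        child.setdefault(edge, random.uniform(-1, 1))
    if child and random.random() < p_del:         # occasionally drop a link
        del child[random.choice(list(child))]
    return child

parent = {(0, 1): 0.5, (1, 2): -0.3, (2, 1): 0.8}  # includes a recurrent link
print(mutate(parent, n_nodes=3))
```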
Global Location-Based Access to Web Applications Using Atom-Based Automatic Update
NASA Astrophysics Data System (ADS)
Singh, Kulwinder; Park, Dong-Won
We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food / coffee, banks/ATM etc. and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll free DID (Direct Inward Dialing) number using a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth’s surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format protocol for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing Atom feeds which provide data to the system. Moreover, we describe a cost-effective way for providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user coordinates and therefore efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates phone numbers and other information, such as the daily price of gas or a motel, automatically using an Atom-based feed. Currently, commercial directory services (for example, 411) do not have facilities to update listings in the database automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily with an existing web infrastructure, thereby making the wealth of Web information easily available to the user by phone. This kind of system can be deployed as an extension to 911 and 411 services to share the workload with human operators. This paper presents all the underlying principles, architecture, features, and an example of the real world deployment of our proposed system. The source code and documentation are available for commercial production.
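The distance computation mentioned (a straight-line chord through the Earth) can be written directly from latitude and longitude by converting to 3-D Cartesian coordinates on a sphere. In the sketch below, the Earth radius is the usual mean value and the two test points are arbitrary examples, not data from the paper.

```python
# Euclidean (chord) distance between two lat/lon points through the Earth.
# Uses a spherical Earth with mean radius 6371 km; test coordinates are arbitrary.
import math

def chord_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    def to_xyz(lat, lon):
        lat, lon = math.radians(lat), math.radians(lon)
        return (radius_km * math.cos(lat) * math.cos(lon),
                radius_km * math.cos(lat) * math.sin(lon),
                radius_km * math.sin(lat))
    return math.dist(to_xyz(lat1, lon1), to_xyz(lat2, lon2))

print(round(chord_km(36.35, 127.38, 37.57, 126.98), 1))  # e.g. Daejeon to Seoul
```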
Use of a New "Moodle" Module for Improving the Teaching of a Basic Course on Computer Architecture
ERIC Educational Resources Information Center
Trenas, M. A.; Ramos, J.; Gutierrez, E. D.; Romero, S.; Corbera, F.
2011-01-01
This paper describes how a new "Moodle" module, called "CTPracticals", is applied to the teaching of the practical content of a basic computer organization course. In the core of the module, an automatic verification engine enables it to process the VHDL designs automatically as they are submitted. Moreover, a straightforward…
Natural Interaction in Intelligent Spaces: Designing for Architecture and Entertainment
NASA Astrophysics Data System (ADS)
Sparacino, Flavia
Designing responsive environments for various venues has become trendy today. Museums wish to create attractive "hands-on" exhibits that can engage and interest their visitors. Several research groups are building an "aware home" that can assist elderly people or chronic patients to have an autonomous life, while still calling for or providing immediate assistance when needed.
TangoLab-2 Card Troubleshooting
2017-10-17
iss053e105442 (Oct. 17, 2017) --- Flight Engineer Mark Vande Hei swaps out a payload card from the TangoLab-1 facility and places into the TangoLab-2 facility. TangoLab provides a standardized platform and open architecture for experimental modules called CubeLabs. CubeLab modules may be developed for use in 3-dimensional tissue and cell cultures.
Transmission control unit drive based on the AUTOSAR standard
NASA Astrophysics Data System (ADS)
Guo, Xiucai; Qin, Zhen
2018-03-01
Development of automotive embedded systems based on the AUTOSAR standard is a growing trend in the automotive electronics industry. The AUTOSAR automotive architecture standard has proposed a transmission control unit (TCU) development architecture and has designed its interfaces and configurations in detail. This paper discusses how to drive the TCU based on the AUTOSAR standard architecture. The results show that driving the TCU with the AUTOSAR system improves reliability and shortens development cycles.
Active Control of Cryogenic Propellants in Space
NASA Technical Reports Server (NTRS)
Notardonato, William
2011-01-01
A new era of space exploration is being planned. Exploration architectures under consideration require the long term storage of cryogenic propellants in space. This requires development of active control systems to mitigate the effect of heat leak. This work summarizes current state of the art, proposes operational design strategies and presents options for future architectures. Scaling and integration of active systems will be estimated. Ideal long range spacecraft systems will be proposed with Exploration architecture benefits considered.
NASA Technical Reports Server (NTRS)
Bhandari, Pradeep; Birur, Gajanana; Prina, Mauro; Ramirez, Brenda; Paris, Anthony; Novak, Keith; Pauken, Michael
2006-01-01
This viewgraph presentation reviews the heat rejection and heat recovery system for thermal control of the Mars Science Laboratory (MSL). The MSL mission will use a mechanically pumped fluid loop based architecture for thermal control of the spacecraft and rover. The architecture is designed to harness waste heat from a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) during Mars surface operations for thermal control during cold conditions and also to reject heat during the cruise phase of the mission. Several tests are being conducted to ensure the safety of this concept. This architecture can be used in any future interplanetary missions utilizing radioisotope power systems for power generation.
On-Line Tracking Controller for Brushless DC Motor Drives Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rubaai, Ahmed
1996-01-01
A real-time control architecture is developed for time-varying nonlinear brushless dc motors operating in a high performance drives environment. The developed control architecture possesses the capabilities of simultaneous on-line identification and control. The dynamics of the motor are modeled on-line and controlled using an artificial neural network, as the system runs. The control architecture combines the experience and dependability of adaptive tracking systems with the potential and promise of neural computing technology. The sensitivity of the real-time controller to parametric changes that occur during training is investigated. Such changes are usually manifested by rapid changes in the load of the brushless motor drives. This sudden change in the external load is simulated for the sigmoidal and sinusoidal reference tracks. The ability of the neuro-controller to maintain reasonable tracking accuracy in the presence of external noise is also verified for a number of desired reference trajectories.
NASA Astrophysics Data System (ADS)
Acernese, Fausto; Barone, Fabrizio; De Rosa, Rosario; Eleuteri, Antonio; Milano, Leopoldo; Pardi, Silvio; Ricciardi, Iolanda; Russo, Guido
2004-09-01
One of the main requirements of a digital system for the control of interferometric detectors of gravitational waves is the computing power, which is a direct consequence of the increasing complexity of the digital algorithms necessary for control signal generation. For this specific task many specialized non-standard real-time architectures have been developed, often very expensive and difficult to upgrade. On the other hand, such computing power is generally fully available for off-line applications on standard PC-based systems. Therefore, a possible and obvious solution may be provided by the integration of the real-time and off-line architectures, resulting in a hybrid control system architecture based on standard, available components, combining the advantages of the perfect data synchronization provided by real-time systems with the large computing power available on PC-based systems. Such integration may be provided by implementing the link between the two different architectures through the standard Ethernet network, whose data transfer speed has been increasing rapidly in recent years, using the TCP/IP, UDP and raw Ethernet protocols. In this paper we describe the architecture of a hybrid Ethernet-based real-time control system prototype we implemented in Napoli, discussing its characteristics and performances. Finally we discuss a possible application to the real-time control of a suspended mass of the mode cleaner of the 3m prototype optical interferometer for gravitational wave detection (IDGW-3P) operational in Napoli.
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery. However, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the network, providing the best time-critical communication. In this work Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
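The distinction being evaluated is architectural: in publish-and-subscribe, producers push named topics and every subscribed node receives the update directly, with no per-request round trip to a server. Below is a minimal in-process sketch of that pattern; topic names, the alarm limit, and payloads are invented, and real middleware such as NDDS adds delivery guarantees, deadlines, and network transport on top.

```python
# Minimal in-process publish-and-subscribe broker to illustrate the pattern.
# Topic names and values are invented; real middleware adds QoS, deadlines, etc.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, value):
        for callback in self.subscribers[topic]:
            callback(topic, value)     # data goes straight to every subscriber

def alarm_if_hot(topic, value):
    if value > 900:                    # invented alarm limit
        print("ALARM: incinerator over temperature")

bus = Bus()
bus.subscribe("incinerator/temperature", lambda t, v: print(f"{t} = {v} C"))
bus.subscribe("incinerator/temperature", alarm_if_hot)
bus.publish("incinerator/temperature", 955)
```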
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel
2016-08-16
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.
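The mechanism that saves wireless bandwidth is the triggering rule itself: a module transmits its state, and the controller updates, only when the deviation from the last transmitted value exceeds a user-defined threshold. The sketch below shows that rule on a generic scalar plant with invented gains and threshold; the paper's adaptive laws and stability analysis are not reproduced here.

```python
# Event-triggered state feedback on a scalar plant x' = a*x + b*u.
# Plant, gain, threshold, and step size are illustrative values only.
a, b, k = 1.0, 1.0, 2.0            # unstable plant, feedback u = -k * x_sent
threshold, dt = 0.05, 0.001

x, x_sent, transmissions = 1.0, 1.0, 0
for step in range(5000):           # five seconds of simulation
    if abs(x - x_sent) > threshold:    # event-triggering condition
        x_sent = x                     # send the state over the network
        transmissions += 1
    u = -k * x_sent                    # controller acts on last received state
    x += (a * x + b * u) * dt

print(f"final state {x:.3f}, network transmissions {transmissions} of 5000 steps")
```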
FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.
Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora
2013-09-01
In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on a programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a host PC. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
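The benchmark problem is the KdV equation u_t + 6 u u_x + u_xxx = 0, which on a periodic 1-D grid reduces to one coupled ODE per cell, exactly the locally interconnected structure a CNN-style processor array exploits. Below is a small serial reference sketch using central differences; the grid size, step sizes, and soliton initial condition are illustrative choices, not the 200-cell configuration benchmarked in the paper.

```python
# Serial reference solver for the KdV equation u_t + 6*u*u_x + u_xxx = 0 on a
# periodic grid, using central differences in space and explicit Euler in time.
# Grid size, steps, and the initial soliton are illustrative choices.
import numpy as np

n, dx, dt = 256, 0.1, 1e-5
x = np.arange(n) * dx
u = 0.5 / np.cosh(0.5 * (x - 10.0)) ** 2      # single-soliton initial condition

for _ in range(20000):
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    u_xxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
             + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx ** 3)
    u = u - dt * (6.0 * u * u_x + u_xxx)       # one explicit Euler step per cell

print("mass (should be conserved):", round(u.sum() * dx, 4))
```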
Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu
2017-01-01
In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous devices connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation to design and implement this architecture which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices. PMID:28926957
Zheng, Song; Zhang, Qi; Zheng, Rong; Huang, Bi-Qin; Song, Yi-Lin; Chen, Xin-Chu
2017-09-16
In recent years, the smart home field has gained wide attention for its broad application prospects. However, families using smart home systems must usually adopt various heterogeneous smart devices, including sensors and devices, which makes it more difficult to manage and control their home system. How to design a unified control platform to deal with the collaborative control problem of heterogeneous smart devices is one of the greatest challenges in the current smart home field. The main contribution of this paper is to propose a universal smart home control platform architecture (IAPhome) based on a multi-agent system and communication middleware, which shows significant adaptability and advantages in many aspects, including heterogeneous devices connectivity, collaborative control, human-computer interaction and user self-management. The communication middleware is an important foundation to design and implement this architecture which makes it possible to integrate heterogeneous smart devices in a flexible way. A concrete method of applying the multi-agent software technique to solve the integrated control problem of the smart home system is also presented. The proposed platform architecture has been tested in a real smart home environment, and the results indicate the effectiveness of our approach in solving the collaborative control problem of different smart devices.
Domain specific software architectures: Command and control
NASA Technical Reports Server (NTRS)
Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave
1992-01-01
GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.
Nuclear propulsion control and health monitoring
NASA Technical Reports Server (NTRS)
Walter, P. B.; Edwards, R. M.
1993-01-01
An integrated control and health monitoring architecture is being developed for the Pratt & Whitney XNR2000 nuclear rocket. Current work includes further development of the dynamic simulation modeling and the identification and configuration of low level controllers to give desirable performance for the various operating modes and faulted conditions. Artificial intelligence and knowledge processing technologies need to be investigated and applied in the development of an intelligent supervisory controller module for this control architecture.
Nuclear propulsion control and health monitoring
NASA Astrophysics Data System (ADS)
Walter, P. B.; Edwards, R. M.
1993-11-01
An integrated control and health monitoring architecture is being developed for the Pratt & Whitney XNR2000 nuclear rocket. Current work includes further development of the dynamic simulation modeling and the identification and configuration of low level controllers to give desirable performance for the various operating modes and faulted conditions. Artificial intelligence and knowledge processing technologies need to be investigated and applied in the development of an intelligent supervisory controller module for this control architecture.
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) Management Control System Architecture, which provides scalability, robust maintenance, and a distributed management environment in the optical transport network. The OXC system we are developing, which is divided into hardware and the internal and external software for the OXC system, is made up of the OXC subsystem, comprising the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system; the signaling control protocol subsystem performing the User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control; the Operation Administration Maintenance & Provisioning (OAM&P) subsystem; and the network management subsystem. The OXC management control system has features that support the flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, allow components to be added or deleted without interrupting OAM&P services, allow remote operation, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA) based open system architecture in which intelligent service networking functions can easily be added or removed in the future. To meet these considerations, we adopt an object-oriented development method throughout the system analysis, design, and implementation steps to build the OXC management control system with scalability, maintainability, and a distributed management environment. As a consequence, the componentization of the OXC operation and management functions of each subsystem enables robust maintenance and increases code reusability. Also, the component-based OXC management control system architecture will be inherently flexible and scalable.
Expert system technologies for Space Shuttle decision support: Two case studies
NASA Technical Reports Server (NTRS)
Ortiz, Christopher J.; Hasan, David A.
1994-01-01
This paper addresses the issue of integrating the C Language Integrated Production System (CLIPS) into distributed data acquisition environments. In particular, it presents preliminary results of some ongoing software development projects aimed at exploiting CLIPS technology in the new mission control center (MCC) being built at NASA Johnson Space Center. One interesting aspect of the control center is its distributed architecture; it consists of networked workstations which acquire and share data through the NASA/JSC-developed information sharing protocol (ISP). This paper outlines some approaches taken to integrate CLIPS and ISP in order to permit the development of intelligent data analysis applications which can be used in the MCC. Three approaches to CLIPS/ISP integration are discussed. The initial approach involves clearly separating CLIPS from ISP using user-defined functions for gathering and sending data to and from a local storage buffer. Memory and performance drawbacks of this design are summarized. The second approach involves taking full advantage of CLIPS and the CLIPS Object-Oriented Language (COOL) by using objects to directly transmit data and state changes from ISP to COOL. Any changes within the object slots eliminate the need for both a data structure and external function call, thus taking advantage of the object matching capabilities within CLIPS 6.0. The final approach is to treat CLIPS and ISP as peer toolkits. Neither is embedded in the other; rather the application interweaves calls to each directly in the application source code.
Reinventing User Applications for Mission Control
NASA Technical Reports Server (NTRS)
Trimble, Jay Phillip; Crocker, Alan R.
2010-01-01
In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications being used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming, or direct support and approval from a platform services organization. Developers must be able to build MCT components using industry standard languages and tools. Each component of MCT must be interoperable with other components, regardless of what organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a meta-data view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks, so users do not have to use multiple applications; the traditional boundaries that exist between multiple heterogeneous applications disappear, leaving open the possibility of new operations concepts that are not constrained by the traditional applications paradigm.
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP
Balduzzi, David; Tononi, Giulio
2012-01-01
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, because spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
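As a toy sketch of the kind of mechanism described (not the authors' algorithm), the following shows a leaky integrate-and-fire neuron paired with a burst-weighted, STDP-flavoured update that keeps synapses binary; all parameters and thresholds are illustrative only:

    def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
        """One Euler step of a leaky integrate-and-fire neuron; returns (v, spiked)."""
        v = v + dt * ((v_rest - v) / tau + i_in)
        return (v_rest, True) if v >= v_thresh else (v, False)

    def burst_weighted_stdp(w, pre_spikes, post_spikes, lr=0.6, burst=3):
        """Toy burst-weighted plasticity: potentiate when pre and post both burst
        (several spikes in the window), depress when pre bursts but post stays
        silent; the weight is re-binarized afterwards, mimicking binary synapses."""
        pre_burst, post_burst = sum(pre_spikes) >= burst, sum(post_spikes) >= burst
        if pre_burst and post_burst:
            w = min(1.0, w + lr)
        elif pre_burst and not any(post_spikes):
            w = max(0.0, w - lr)
        return 1.0 if w >= 0.5 else 0.0

    if __name__ == "__main__":
        v, post = 0.0, []
        for _ in range(10):                      # constant drive produces a post burst
            v, fired = lif_step(v, i_in=0.4)
            post.append(fired)
        print(sum(post), "post spikes")                                            # -> 3
        print(burst_weighted_stdp(0.0, pre_spikes=[True] * 5, post_spikes=post))   # -> 1.0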
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1988-01-01
Research directed at developing a graph theoretical model for describing data and control flow associated with the execution of large grained algorithms in a special distributed computer environment is presented. This model is identified by the acronym ATAMM which represents Algorithms To Architecture Mapping Model. The purpose of such a model is to provide a basis for establishing rules for relating an algorithm to its execution in a multiprocessor environment. Specifications derived from the model lead directly to the description of a data flow architecture which is a consequence of the inherent behavior of the data and control flow described by the model. The purpose of the ATAMM based architecture is to provide an analytical basis for performance evaluation. The ATAMM model and architecture specifications are demonstrated on a prototype system for concept validation.
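A minimal data-flow execution sketch in the spirit of such an algorithm-to-architecture mapping, using an invented two-node graph: a node fires only when every one of its input edges holds a token, which is the basic behaviour such graph models formalize.

    from collections import defaultdict

    class DataFlowGraph:
        def __init__(self):
            self.inputs = defaultdict(list)    # node -> input edge names
            self.outputs = defaultdict(list)   # node -> output edge names
            self.tokens = defaultdict(list)    # edge -> queued data tokens
            self.ops = {}                      # node -> callable

        def add_node(self, name, op, inputs, outputs):
            self.ops[name] = op
            self.inputs[name], self.outputs[name] = inputs, outputs

        def inject(self, edge, value):
            self.tokens[edge].append(value)

        def step(self):
            """One sweep: fire every node whose inputs all held a token at the start
            of the sweep; tokens produced this sweep wait for the next one."""
            ready = [n for n in self.ops
                     if self.inputs[n] and all(self.tokens[e] for e in self.inputs[n])]
            for node in ready:
                args = [self.tokens[e].pop(0) for e in self.inputs[node]]
                result = self.ops[node](*args)
                for e in self.outputs[node]:
                    self.tokens[e].append(result)
            return ready

    if __name__ == "__main__":
        g = DataFlowGraph()
        g.add_node("sum", lambda a, b: a + b, inputs=["x", "y"], outputs=["s"])
        g.add_node("scale", lambda s: 2 * s, inputs=["s"], outputs=["out"])
        g.inject("x", 3); g.inject("y", 4)
        print(g.step())          # ['sum']: both of its inputs held tokens
        print(g.step())          # ['scale']: its input was produced last sweep
        print(g.tokens["out"])   # [14]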
A task control architecture for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid; Mitchell, Tom
1990-01-01
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
Control of Macromolecular Architectures for Renewable Polymers: Case Studies
NASA Astrophysics Data System (ADS)
Tang, Chuanbing
The development of sustainable polymers from natural biomass is growing, but it faces fierce competition from existing petrochemical-based counterparts. Controlling macromolecular architectures to maximize the properties of renewable polymers is a desirable approach to gain advantages. Given the complexity of biomass, design considerations beyond those of traditional polymers are needed. In the presentation, I will talk about a few case studies on how macromolecular architectures could tune the properties of sustainable bioplastics and elastomers from renewable biomass such as resin acids (natural rosin) and plant oils.
An AI Approach to Ground Station Autonomy for Deep Space Communications
NASA Technical Reports Server (NTRS)
Fisher, Forest; Estlin, Tara; Mutz, Darren; Paal, Leslie; Law, Emily; Stockett, Mike; Golshan, Nasser; Chien, Steve
1998-01-01
This paper describes an architecture for an autonomous deep space tracking station (DS-T). The architecture targets fully automated routine operations encompassing scheduling and resource allocation, antenna and receiver predict generation, track procedure generation from service requests, and closed loop control and error recovery for the station subsystems. This architecture has been validated by the construction of a prototype DS-T station, which has performed a series of demonstrations of autonomous ground station control for downlink services with NASA's Mars Global Surveyor (MGS).
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana
2010-09-21
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-06-25
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-12-03
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
NASA Astrophysics Data System (ADS)
Hortos, William S.
1999-03-01
A hybrid neural network approach is presented to estimate radio propagation characteristics and multiuser interference and to evaluate their combined impact on throughput, latency and information loss in third-generation (3G) wireless networks. The latter three performance parameters influence the quality of service (QoS) for multimedia services under consideration for 3G networks. These networks, based on a hierarchical architecture of overlaying macrocells on top of micro- and picocells, are planned to operate in mobile urban and indoor environments with service demands emanating from circuit-switched, packet-switched and satellite-based traffic sources. Candidate radio interfaces for these networks employ a form of wideband CDMA in 5-MHz and wider-bandwidth channels, with possible asynchronous operation of the mobile subscribers. The proposed neural network (NN) architecture allocates network resources to optimize QoS metrics. Parameters of the radio propagation channel are estimated, followed by control of an adaptive antenna array at the base station to minimize interference, and then joint multiuser detection is performed at the base station receiver. These adaptive processing stages are implemented as a sequence of NN techniques that provide their estimates as inputs to a final- stage Kohonen self-organizing feature map (SOFM). The SOFM optimizes the allocation of available network resources to satisfy QoS requirements for variable-rate voice, data and video services. As the first stage of the sequence, a modified feed-forward multilayer perceptron NN is trained on the pilot signals of the mobile subscribers to estimate the parameters of shadowing, multipath fading and delays on the uplinks. A recurrent NN (RNN) forms the second stage to control base stations' adaptive antenna arrays to minimize intra-cell interference. The third stage is based on a Hopfield NN (HNN), modified to detect multiple users on the uplink radio channels to mitigate multiaccess interference, control carrier-sense multiple-access (CSMA) protocols, and refine call handoff procedures. In the final stage, the Kohonen SOFM, operating in a hybrid continuous and discrete space, adaptively allocates the resources of antenna-based cell sectorization, activity monitoring, variable-rate coding, power control, handoff and caller admission to meet user demands for various multimedia services at minimum QoS levels. The performance of the NN cascade is evaluated through simulation of a candidate 3G wireless network using W-CDMA parameters in a small-cell environment. The simulated network consists of a representative number of cells. Mobile users with typical movement patterns are assumed. QoS requirements for different classes of multimedia services are considered. The proposed method is shown to provide relatively low probability of new call blocking and handoff dropping, while maintaining efficient use of the network's radio resources.
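As a hedged illustration of the final stage only, the following is a generic Kohonen self-organizing feature map trained on made-up "QoS demand" vectors; it is not the paper's network, and the feature encoding and parameters are invented:

    import numpy as np

    def train_sofm(data, grid_shape=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
        """Minimal Kohonen SOFM: find the best-matching unit for each sample and
        pull neighbouring units toward it, with learning rate and neighbourhood
        radius decaying over the epochs."""
        rng = np.random.default_rng(seed)
        rows, cols = grid_shape
        weights = rng.random((rows, cols, data.shape[1]))
        coords = np.array([[r, c] for r in range(rows) for c in range(cols)]).reshape(rows, cols, 2)
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 0.1
            for x in data:
                dist = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dist), dist.shape)
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
                h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)
        return weights

    if __name__ == "__main__":
        # Toy demand vectors: (rate demand, delay sensitivity, loss sensitivity)
        demands = np.random.default_rng(1).random((200, 3))
        prototypes = train_sofm(demands)
        print(prototypes.shape)   # (4, 4, 3): a 4x4 map of prototype demand profiles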
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom
2014-01-01
This analysis is a survey of control center architectures of the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities for the phases of operations: (a) Launch vehicles in the international community vary greatly in configuration and process; (b) Each launch site has a unique processing flow based on the specific configurations; (c) Launch and flight operations are managed through a set of control centers associated with each launch site; however, flight operations may be conducted from a different control center than the launch center; and (d) The engineering support centers are primarily located at the design center, with a small engineering support team at the launch site.
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Nicewarner, Keith
2006-01-01
We present a multi-agent model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution as well as control these agents at various levels of autonomy. This interface also permits the agents to query the user or request the user to perform tasks to help achieve the commanded goals. We conclude by describing a scenario where these two agents and a human interact to cooperatively detect, diagnose and recover from a simulated spacecraft fault.
Controlling Styrene Maleic Acid Lipid Particles through RAFT.
Smith, Anton A A; Autzen, Henriette E; Laursen, Tomas; Wu, Vincent; Yen, Max; Hall, Aaron; Hansen, Scott D; Cheng, Yifan; Xu, Ting
2017-11-13
The ability of styrene maleic acid copolymers to dissolve lipid membranes into nanosized lipid particles is a facile method of obtaining membrane proteins in solubilized lipid discs while conserving part of their native lipid environment. While the currently used copolymers can readily extract membrane proteins in native nanodiscs, their highly disperse composition is likely to influence the dispersity of the discs as well as the extraction efficiency. In this study, reversible addition-fragmentation chain transfer was used to control the polymer architecture and the dispersity of molecular weights with high precision. Based on Monte Carlo simulations of the polymerizations, the monomer composition was predicted, which allowed a structure-function analysis of the polymer architecture in relation to the polymers' ability to assemble into lipid nanoparticles. We show that a higher degree of control of the polymer architecture generates more homogeneous samples. We hypothesize that low-dispersity copolymers with controlled architecture are an ideal framework for the rational design of polymers for customized isolation and characterization of integral membrane proteins in native lipid bilayer systems.
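A minimal Monte Carlo sketch of how copolymer composition can be predicted from a terminal (Mayo-Lewis) model; the reactivity ratios and feed fraction below are invented for illustration, not values for styrene and maleic acid:

    import random

    def grow_chain(n_units, f1, r1, r2, seed=0):
        """Grow one chain unit by unit; the probability of adding monomer 1 depends
        on the current chain end and the (assumed constant) feed composition."""
        random.seed(seed)
        chain = [1 if random.random() < f1 else 2]
        f2 = 1.0 - f1
        for _ in range(n_units - 1):
            if chain[-1] == 1:
                p_add_1 = r1 * f1 / (r1 * f1 + f2)
            else:
                p_add_1 = f1 / (f1 + r2 * f2)
            chain.append(1 if random.random() < p_add_1 else 2)
        return chain

    if __name__ == "__main__":
        chain = grow_chain(n_units=500, f1=0.7, r1=0.05, r2=0.02)
        print(f"fraction of monomer 1 in the chain: {chain.count(1) / len(chain):.2f}")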
Scalable service architecture for providing strong service guarantees
NASA Astrophysics Data System (ADS)
Christin, Nicolas; Liebeherr, Joerg
2002-07-01
For the past decade, much Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, the interest has shifted to service architectures with low overhead. However, these newer service architectures only provide weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires only a small amount of state information to be maintained. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network. Instead, we plan on exploiting the feedback mechanisms of TCP congestion control algorithms to regulate the traffic entering the network.
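A toy illustration (not the authors' algorithm) of handling scheduling and buffer management with a single ordering: packets are kept sorted by deadline, transmission always takes the most urgent packet, and when the buffer overflows the least urgent packet is the one dropped.

    import heapq

    class DeadlineQueue:
        def __init__(self, capacity):
            self.capacity = capacity
            self.heap = []          # (deadline, seq, payload)
            self.seq = 0

        def enqueue(self, deadline, payload):
            heapq.heappush(self.heap, (deadline, self.seq, payload))
            self.seq += 1
            if len(self.heap) > self.capacity:
                # Buffer management: drop the packet with the loosest deadline.
                loosest = max(range(len(self.heap)), key=lambda i: self.heap[i][0])
                dropped = self.heap.pop(loosest)
                heapq.heapify(self.heap)
                return dropped[2]
            return None

        def transmit(self):
            # Scheduling: send the packet with the earliest deadline.
            return heapq.heappop(self.heap)[2] if self.heap else None

    if __name__ == "__main__":
        q = DeadlineQueue(capacity=3)
        for deadline, name in [(5, "a"), (2, "b"), (9, "c"), (1, "d")]:
            dropped = q.enqueue(deadline, name)
            if dropped:
                print("dropped", dropped)   # drops "c", the loosest deadline
        print(q.transmit())                 # "d", the most urgent packet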
Renaissance architecture for Ground Data Systems
NASA Technical Reports Server (NTRS)
Perkins, Dorothy C.; Zeigenfuss, Lawrence B.
1994-01-01
The Mission Operations and Data Systems Directorate (MO&DSD) has embarked on a new approach for developing and operating Ground Data Systems (GDS) for flight mission support. This approach is driven by the goals of minimizing cost and maximizing customer satisfaction. Achievement of these goals is realized through the use of a standard set of capabilities which can be modified to meet specific user needs. This approach, which is called the Renaissance architecture, stresses the engineering of integrated systems, based upon workstation/local area network (LAN)/fileserver technology and reusable hardware and software components called 'building blocks.' These building blocks are integrated with mission-specific capabilities to build the GDS for each individual mission. The building block approach is key to the reduction of development costs and schedules. Also, the Renaissance approach allows the integration of GDS functions that were previously provided via separate multi-mission facilities. With the Renaissance architecture, the GDS can be developed by the MO&DSD, or all or part of the GDS can be operated by the user at their own facility. Flexibility in operations configuration allows both the selection of a cost-effective operations approach and the capability for customizing operations to user needs. Thus the focus of the MO&DSD shifts from operating the systems it has built to providing systems and, optionally, operations as separate services. Renaissance is actually a continuous process. Both the building blocks and the system architecture will evolve as user needs and technology change. Providing GDS on a per-user basis enables this continuous refinement of the development process and product and allows the MO&DSD to remain a customer-focused organization. This paper will present the activities and results of the MO&DSD's initial efforts toward the establishment of the Renaissance approach for the development of GDS, with a particular focus on both the technical and process implications posed by Renaissance to the MO&DSD.
An architecture for rapid prototyping of control schemes for artificial ventricles.
Ficola, Antonio; Pagnottelli, Stefano; Valigi, Paolo; Zoppitelli, Maurizio
2004-01-01
This paper presents an experimental system aimed at rapid prototyping of feedback control schemes for ventricular assist devices and artificial ventricles in general. The system comprises a classical mock circulatory system, an actuated bellows-based ventricle chamber, and a software architecture for control scheme implementation and for experimental data acquisition, visualization, and storage. Several experiments have been carried out, showing good performance of ventricular pressure tracking control schemes.
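A minimal sketch of the kind of pressure-tracking loop such a rapid-prototyping rig would host: a PI controller drives a toy first-order chamber model toward a crude reference profile. The plant constants, gains, and profile are illustrative, not values from the paper.

    def simulate(duration=5.0, dt=0.01, kp=2.0, ki=4.0, tau=0.3):
        """PI pressure tracking against a first-order chamber response."""
        pressure, integral, log = 0.0, 0.0, []
        for k in range(int(duration / dt)):
            t = k * dt
            reference = 80.0 if (t % 1.0) < 0.4 else 10.0   # crude systole/diastole profile (mmHg)
            error = reference - pressure
            integral += error * dt
            command = kp * error + ki * integral            # actuator command (e.g. bellows drive)
            pressure += dt * (command - pressure) / tau     # first-order chamber response
            log.append((t, reference, pressure))
        return log

    if __name__ == "__main__":
        t, ref, p = simulate()[-1]
        print(f"t={t:.2f}s  reference={ref:.1f}  pressure={p:.1f}")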
NASA Technical Reports Server (NTRS)
Schoppers, Marcel
1994-01-01
The design of a flexible, real-time software architecture for trajectory planning and automatic control of redundant manipulators is described. Emphasis is placed on a technique of designing control systems that are both flexible and robust yet have good real-time performance. The solution presented involves an artificial intelligence algorithm that dynamically reprograms the real-time control system while planning system behavior.
A Network Scheduling Model for Distributed Control Simulation
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot
2016-01-01
Distributed engine control is a hardware technology that radically alters the architecture for aircraft engine control systems. Of its own accord, it does not change the function of control; rather, it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward, it will be transmitted digitally in serial fashion over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore, it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact in an aero turbine engine system simulation.
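A small sketch of the scheduling issue described above, with invented timing numbers: sensor samples arrive over a network with varying delay, so the control loop may lack fresh data in some intervals and falls back to holding the last value received.

    import random

    def run(loop_dt=0.02, sim_time=1.0, seed=0):
        random.seed(seed)
        in_flight = []          # (arrival_time, value) messages on the network
        held_value = 0.0        # last sensor value received by the controller
        stale_intervals = 0
        t, next_sample = 0.0, 0.0
        while t < sim_time:
            # Sensor node samples every 20 ms and sends over the network.
            if t >= next_sample:
                delay = random.uniform(0.005, 0.035)       # network latency
                in_flight.append((t + delay, 100.0 * t))   # ramp signal as a stand-in
                next_sample += 0.02
            # Deliver any messages that have arrived by now.
            arrived = [m for m in in_flight if m[0] <= t]
            if arrived:
                held_value = max(arrived)[1]               # most recent arrival wins
                in_flight = [m for m in in_flight if m[0] > t]
            else:
                stale_intervals += 1                       # control law runs on held data
            command = 0.5 * held_value                     # stand-in control law
            t += loop_dt
        print(f"final command: {command:.1f}, intervals with no new data: {stale_intervals}")

    if __name__ == "__main__":
        run()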
Integrating Software Modules For Robot Control
NASA Technical Reports Server (NTRS)
Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.
1993-01-01
Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor realtime computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.
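A hedged sketch of the state-variable style of intertask communication the brief describes: control modules never call one another, they only read and write named variables in a shared table, which is what makes them reconfigurable. Module and variable names here are invented, and this is not the Chimera II API.

    import threading

    class StateVariableTable:
        def __init__(self):
            self._data, self._lock = {}, threading.Lock()

        def write(self, name, value):
            with self._lock:
                self._data[name] = value

        def read(self, name, default=None):
            with self._lock:
                return self._data.get(name, default)

    def joint_sensor_module(table):
        table.write("joint_angles", [0.1, 0.5, -0.2])        # pretend sensor readings

    def trajectory_module(table):
        angles = table.read("joint_angles", default=[0.0, 0.0, 0.0])
        table.write("joint_commands", [a + 0.05 for a in angles])

    if __name__ == "__main__":
        svt = StateVariableTable()
        joint_sensor_module(svt)    # modules can be recombined in any order that
        trajectory_module(svt)      # satisfies their state-variable dependencies
        print(svt.read("joint_commands"))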
NASA Technical Reports Server (NTRS)
Swei, Sean
2014-01-01
We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.
Exploration Architecture Options - ECLSS, EVA, TCS Implications
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Henninger, Don; Lawrence, Carl
2010-01-01
Many options for the exploration of space have been identified and evaluated since the Vision for Space Exploration (VSE) was announced in 2004. Lunar architectures have been identified and addressed by the Lunar Surface Systems team to establish options for how to get to, and then inhabit and explore, the moon. The Augustine Commission evaluated human space flight for the Obama administration and identified many options for how to conduct human spaceflight in the future. This paper evaluates the space exploration options for the implications of their architectures on the Environmental Control and Life Support System (ECLSS), ExtraVehicular Activity (EVA), and Thermal Control System (TCS). The advantages and disadvantages of each architecture and option are presented.
NASA Technical Reports Server (NTRS)
Ruiz, B. Ian; Burke, Gary R.; Lung, Gerald; Whitaker, William D.; Nowicki, Robert M.
2004-01-01
This viewgraph presentation reviews the architecture of the CIA-AlA chip-set, a set of mixed-signal ASICs that provides a flexible high-level interface between the spacecraft's command and data handling (C&DH) electronics and lower-level functions in other spacecraft subsystems. Due to the open-systems architecture of the chip-set, including an embedded micro-controller, a variety of applications are possible. The chip-set was developed for missions to the outer planets. The chips were developed to provide a single solution for both the switching and regulation of a spacecraft power bus. The open-systems architecture allows for other powerful applications.
Bannwarth, Markus B; Utech, Stefanie; Ebert, Sandro; Weitz, David A; Crespy, Daniel; Landfester, Katharina
2015-03-24
The assembly of nanoparticles into polymer-like architectures is challenging and usually requires highly defined colloidal building blocks. Here, we show that the broad size-distribution of a simple dispersion of magnetic nanocolloids can be exploited to obtain various polymer-like architectures. The particles are assembled under an external magnetic field and permanently linked by thermal sintering. The remarkable variety of polymer-analogue architectures that arises from this simple process ranges from statistical and block copolymer-like sequencing to branched chains and networks. This library of architectures can be realized by controlling the sequencing of the particles and the junction points via a size-dependent self-assembly of the single building blocks.
Advanced flight control system study
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Wall, J. E., Jr.; Rang, E. R.; Lee, H. P.; Schulte, R. W.; Ng, W. K.
1982-01-01
A fly-by-wire flight control system architecture designed for high reliability includes spare sensor and computer elements to permit safe dispatch with failed elements, thereby reducing unscheduled maintenance. A methodology capable of demonstrating that the architecture does achieve the predicted performance characteristics consists of a hierarchy of activities ranging from analytical calculations of system reliability and formal methods of software verification to iron bird testing followed by flight evaluation. Interfacing this architecture to the Lockheed S-3A aircraft for flight test is discussed. This testbed vehicle can be expanded to support flight experiments in advanced aerodynamics, electromechanical actuators, secondary power systems, flight management, new displays, and air traffic control concepts.
An architecture for rule based system explanation
NASA Technical Reports Server (NTRS)
Fennel, T. R.; Johannes, James D.
1990-01-01
A system architecture is presented that incorporates both graphics and text into explanations provided by rule-based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct user control of explanation depth and duration. User models are suggested to control the type, amount, and order of information presented.
A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture that emulates this biological model has been developed; it may be suitable for the eventual construction of large space structures. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occur indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
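A toy stigmergic building loop consistent with the idea above (the deposition rule and grid are purely illustrative, not the prototyped algorithm): each agent inspects only the structure itself and deposits a block wherever the local rule matches, with no inter-agent messages and no global blueprint.

    import random

    def can_deposit(grid, x, y):
        """Local rule: deposit on an empty site that touches the existing structure."""
        if grid[y][x]:
            return False
        filled_neighbours = sum(grid[ny][nx]
                                for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                                if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid))
        return filled_neighbours >= 1

    def build(n_agents=5, steps=200, size=15, seed=3):
        random.seed(seed)
        grid = [[False] * size for _ in range(size)]
        grid[size // 2][size // 2] = True                 # a single seed block
        for _ in range(steps):
            for _agent in range(n_agents):                # agents act independently;
                x, y = random.randrange(size), random.randrange(size)
                if can_deposit(grid, x, y):               # coordination happens only
                    grid[y][x] = True                     # through the structure itself
        return sum(cell for row in grid for cell in row)

    if __name__ == "__main__":
        print("blocks placed:", build())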
Using manufacturing message specification for monitor and control at Venus
NASA Technical Reports Server (NTRS)
Heuser, W. Randy; Chen, Richard L.; Stockett, Michael H.
1993-01-01
The flexibility and robustness of a monitor and control (M&C) system are a direct result of the underlying interprocessor communications architecture. A new architecture for M&C at the Deep Space Communications Complexes (DSCC's) has been developed based on the Manufacturing Message Specification (MMS) process control standard of the Open System Interconnection (OSI) suite of protocols. This architecture has been tested both in a laboratory environment and under operational conditions at the Deep Space Network (DSN) experimental Venus station (DSS-13). The Venus experience in the application of OSI standards to support M&C has been extremely successful. MMS meets the functional needs of the station and provides a level of flexibility and responsiveness previously unknown in that environment. The architecture is robust enough to meet current operational needs and flexible enough to provide a migration path for new subsystems. This paper will describe the architecture of the Venus M&C system, discuss how MMS was used and the requirements this imposed on other parts of the system, and provide results from systems and operational testing at the Venus site.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot-standby configuration including three identical operator stations and three real-time control computers that connect to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer, and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
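A small illustration of the majority-voting element of such a triple-redundant scheme, with invented thresholds: three channels each propose a command, the voter takes the median, and a channel that miscompares persistently is flagged as failed.

    def vote(commands, tolerance=0.05):
        """Return (voted_command, disagreeing_channel_indices) for three channels."""
        voted = sorted(commands)[1]                        # median of three
        bad = [i for i, c in enumerate(commands) if abs(c - voted) > tolerance]
        return voted, bad

    class ChannelHealth:
        def __init__(self, limit=3):
            self.miscompares, self.limit = [0, 0, 0], limit

        def update(self, bad_channels):
            for i in range(3):
                self.miscompares[i] = self.miscompares[i] + 1 if i in bad_channels else 0
            return [i for i, n in enumerate(self.miscompares) if n >= self.limit]

    if __name__ == "__main__":
        health = ChannelHealth()
        # Channel 2 drifts away from the other two over successive control cycles.
        for cycle, cmds in enumerate([(1.00, 1.01, 1.00), (0.98, 0.99, 1.40),
                                      (1.02, 1.01, 1.55), (1.00, 1.00, 1.60)]):
            voted, bad = vote(list(cmds))
            failed = health.update(bad)
            print(f"cycle {cycle}: voted={voted:.2f} failed_channels={failed}")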
NASA Astrophysics Data System (ADS)
Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David
2015-09-01
The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.