Tank waste remediation system configuration management implementation plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
1998-03-31
The Tank Waste Remediation System (TWRS) Configuration Management Implementation Plan describes the actions that will be taken by the Project Hanford Management Contract Team to implement the TWRS Configuration Management program defined in HNF 1900, TWRS Configuration Management Plan. Over the next 25 years, the TWRS Project will transition from a safe storage mission to an aggressive retrieval, storage, and disposal mission in which substantial Engineering, Construction, and Operations activities must be performed. This mission, as defined, will require a consolidated configuration management approach to engineering, design, construction, as-building, and operating in accordance with the technical baselines that emerge from the life cycles. This Configuration Management Implementation Plan addresses the actions that will be taken to strengthen the TWRS Configuration Management program.
Tank waste remediation system configuration management plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The configuration management program for the Tank Waste Remediation System (TWRS) Project Mission supports management of the project baseline by providing the mechanisms to identify, document, and control the functional and physical characteristics of the products. This document is one of the tools used to develop and control the mission and work. It is an integrated approach for control of technical, cost, schedule, and administrative information necessary to manage the configurations for the TWRS Project Mission. Configuration management focuses on five principal activities: configuration management system management, configuration identification, configuration status accounting, change control, and configuration management assessments. TWRS Project personnel must execute work in a controlled fashion. Work must be performed by verbatim use of authorized and released technical information and documentation. Configuration management will be applied consistently across all TWRS Project activities and assessed accordingly. The Project Hanford Management Contract (PHMC) configuration management requirements are prescribed in HNF-MP-013, Configuration Management Plan (FDH 1997a). This TWRS Configuration Management Plan (CMP) implements those requirements and supersedes the Tank Waste Remediation System Configuration Management Program Plan described in Vann, 1996. HNF-SD-WM-CM-014, Tank Waste Remediation System Configuration Management Implementation Plan (Vann, 1997), will be revised to implement the requirements of this plan. This plan provides the responsibilities, actions, and tools necessary to implement the requirements as defined in the above referenced documents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaus, P.S.
This Configuration Management Implementation Plan (CMIP) was developed to assist in managing systems, structures, and components (SSCs), to facilitate the effective control and statusing of changes to SSCs, and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Privatization Infrastructure will take in implementing a configuration management program, to identify the Program's products that need configuration management control, to determine the rigor of control, and to identify the mechanisms for that control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This standard presents program criteria and implementation guidance for an operational configuration management program for DOE nuclear and non-nuclear facilities. This Part 2 includes chapters on implementation guidance for operational configuration management, implementation guidance for design reconstitution, and implementation guidance for material condition and aging management. Appendices are included on design control, examples of design information, conduct of walkdowns, and content of design information summaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgard, K.G.
This Configuration Management Implementation Plan was developed to assist in the management of systems, structures, and components; to facilitate the effective control and statusing of changes to systems, structures, and components; and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Project W-464 will take in implementing configuration management control, to determine the rigor of control, and to identify the mechanisms for imposing that control.
An Approach for Implementation of Project Management Information Systems
NASA Astrophysics Data System (ADS)
Běrziša, Solvita; Grabis, Jānis
Project management is governed by project management methodologies, standards, and other regulatory requirements. This chapter proposes an approach for implementing and configuring project management information systems according to requirements defined by these methodologies. The approach uses a project management specification framework to describe project management methodologies in a standardized manner. This specification is used to automatically configure the project management information system by applying appropriate transformation mechanisms. Development of the standardized framework is based on analysis of typical project management concepts and processes and on existing XML-based representations of project management. A demonstration example of project management information system configuration is provided.
Comparison of DOE and NIRMA approaches to configuration management programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, E.Y.; Kulzick, K.C.
One of the major management programs used for commercial, laboratory, and defense nuclear facilities is configuration management. The safe and efficient operation of a nuclear facility requires constant vigilance in maintaining the facility's design basis with its as-built condition. Numerous events have occurred that can be attributed (either directly or indirectly) to the extent to which configuration management principles have been applied. The nuclear industry, as a whole, has been addressing this management philosophy with efforts taken on by its constituent professional organizations. The purpose of this paper is to compare and contrast the implementation plans for enhancing a configuration management program as outlined in the U.S. Department of Energy's (DOE's) DOE-STD-1073-93, "Guide for Operational Configuration Management Program," with the following guidelines developed by the Nuclear Information and Records Management Association (NIRMA): 1. PP02-1994, "Position Paper on Configuration Management"; 2. PP03-1992, "Position Paper for Implementing a Configuration Management Enhancement Program for a Nuclear Facility"; 3. PP04-1994, "Position Paper for Configuration Management Information Systems."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This standard presents program criteria and implementation guidance for an operational configuration management program for DOE nuclear and non-nuclear facilities in the operational phase. Portions of this standard are also useful for other DOE processes, activities, and programs. This Part 1 contains foreword, glossary, acronyms, bibliography, and Chapter 1 on operational configuration management program principles. Appendices are included on configuration management program interfaces, and background material and concepts for operational configuration management.
Configuration Management Plan for K Basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, W.R.; Laney, T.
This plan describes a configuration management program for K Basins that establishes the systems, processes, and responsibilities necessary for implementation. The K Basins configuration management plan provides the methodology to establish, upgrade, reconstitute, and maintain the technical consistency among the requirements, physical configuration, and documentation. The technical consistency afforded by this plan ensures accurate technical information necessary to achieve the mission objectives that provide for the safe, economic, and environmentally sound management of K Basins and the stored material. The configuration management program architecture presented in this plan is based on the functional model established in the DOE Standard, DOE-STD-1073-93, "Guide for Operational Configuration Management Program."
Operational concepts and implementation strategies for the design configuration management process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trauth, Sharon Lee
2007-05-01
This report describes operational concepts and implementation strategies for the Design Configuration Management Process (DCMP). It presents a process-based systems engineering model for the successful configuration management of the products generated during the operation of the design organization as a business entity. The DCMP model focuses on Pro/E and associated activities and information. It can serve as the framework for interconnecting all essential aspects of the product design business. A design operation scenario offers a sense of how to do business at a time when DCMP is second nature within the design organization.
Spent Nuclear Fuel Project Configuration Management Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reilly, M.A.
This document is a rewrite of draft "C" that was agreed to "in principle" by SNF Project level 2 managers on EDT 609835, dated March 1995 (not released). The implementation process philosophy was changed in keeping with the ongoing reengineering of the WHC Controlled Manuals to achieve configuration management within the SNF Project.
Software Configuration Management Plan for the B-Plant Canyon Ventilation Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
MCDANIEL, K.S.
1999-08-31
Project W-059 installed a new B Plant Canyon Ventilation System. Monitoring and control of the system is implemented by the Canyon Ventilation Control System (CVCS). This Software Configuration Management Plan provides instructions for change control of the CVCS.
Aeropropulsion facilities configuration control: Procedures manual
NASA Technical Reports Server (NTRS)
Lavelle, James J.
1990-01-01
Lewis Research Center senior management directed that the aeropropulsion facilities be put under configuration control. A Configuration Management (CM) program was established by the Facilities Management Branch of the Aeropropulsion Facilities and Experiments Division. Under the CM program, a support service contractor was engaged to staff and implement the program. The Aeronautics Directorate has over 30 facilities at Lewis of various sizes and complexities. Under the program, a Facility Baseline List (FBL) was established for each facility, listing which systems and their documents were to be placed under configuration control. A Change Control System (CCS) was established requiring that any proposed changes to FBL systems or their documents be processed per the CCS. Limited access control of the FBL master drawings was implemented, and an audit system was established to ensure all facility changes are properly processed. This procedures manual sets forth the policy and responsibilities to ensure all key documents constituting a facility's configuration are kept current, modified as needed, and verified to reflect any proposed change. This is the essence of the CM program.
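As an illustration of the Facility Baseline List and Change Control System concepts described above (not from the cited manual; all class names, states, and the example facility are hypothetical), a minimal sketch of how an FBL entry and a change request could be represented:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ChangeState(Enum):
    """Hypothetical states a change request might pass through under a CCS."""
    PROPOSED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()
    INCORPORATED = auto()
    REJECTED = auto()

@dataclass
class BaselineItem:
    """One system (and its controlling documents) on a Facility Baseline List."""
    system_name: str
    documents: list[str] = field(default_factory=list)

@dataclass
class ChangeRequest:
    """A proposed change to an FBL system or document, processed per the CCS."""
    request_id: str
    item: BaselineItem
    description: str
    state: ChangeState = ChangeState.PROPOSED

    def approve(self) -> None:
        # Only reviewed requests may be approved; the audit trail is omitted for brevity.
        if self.state is not ChangeState.UNDER_REVIEW:
            raise ValueError("change must be under review before approval")
        self.state = ChangeState.APPROVED

# Example: a baseline entry for a hypothetical wind tunnel drive system.
drive = BaselineItem("Tunnel Drive System", ["DWG-001 rev B", "Operating Procedure OP-12"])
cr = ChangeRequest("CR-1990-042", drive, "Replace drive motor controller")
cr.state = ChangeState.UNDER_REVIEW
cr.approve()
```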
Integrated Advanced Microwave Sounding Unit-A (AMSU-A). Configuration Management Plan
NASA Technical Reports Server (NTRS)
Cavanaugh, J.
1996-01-01
The purpose of this plan is to identify the baselines to be established during the development life cycle of the integrated AMSU-A and to define the methods and procedures which Aerojet will follow in the implementation of configuration control for each established baseline. This plan also establishes the Configuration Management process to be used for the deliverable hardware, software, and firmware of the Integrated AMSU-A during development, design, fabrication, test, and delivery.
NASA Technical Reports Server (NTRS)
Nichols, J. D.; Britten, R. A.; Parks, G. S.; Voss, J. M.
1990-01-01
NASA's JPL has completed a feasibility study using infrared technologies for wildland fire suppression and management. The study surveyed user needs, examined available technologies, matched the user needs with technologies, and defined an integrated infrared wildland fire mapping concept system configuration. System component trade-offs were presented for evaluation in the concept system configuration. The economic benefits of using infrared technologies in fire suppression and management were examined. Follow-on concept system configuration development and implementation were proposed.
PTC MathCAD and Workgroup Manager: Implementation in a Multi-Org System
NASA Technical Reports Server (NTRS)
Jones, Corey
2015-01-01
In this presentation, the presenter will review what was done at Kennedy Space Center to deploy and implement PTC MathCAD and PTC Workgroup Manager in a multi-org system. During the presentation the presenter will explain how they configured PTC Windchill to create custom soft-types and object initialization rules for their custom numbering scheme and why they chose these methods. This presentation will also include how to modify the EPM default soft-type file in the PTC Windchill server codebase folder. The presenter will also go over the code used in a startup script to initiate PTC MathCAD and PTC Workgroup Manager in the proper order, and to set up the environment variables when running both PTC Workgroup Manager and PTC Creo. The configuration.ini file the presenter used will also be reviewed to show how PTC Workgroup Manager was set up and customized for their user community. This presentation will be of interest to administrators trying to create a similar set-up in either a single-org or multiple-org system deployment. The big takeaway will be ideas and best practices learned through implementing this system, and lessons learned about what to do and not to do when setting up this configuration. Attendees will be exposed to several different sets of code that worked well and will hear some limitations on what the software can accomplish when configured this way.
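The abstract does not reproduce the startup script itself; as a rough sketch of the ordered-startup idea it describes (executable paths, environment variable names, and the wait strategy below are placeholders, not KSC's actual configuration):

```python
import os
import subprocess
import time

# Placeholder install locations and settings; the real values are site-specific.
ENV_OVERRIDES = {
    "WGM_HOME": r"C:\ptc\workgroup_manager",
    "MATHCAD_LICENSE": "7788@license-server",
}

def launch(executable: str, env: dict) -> subprocess.Popen:
    """Start one tool with the merged environment and return its process handle."""
    merged = {**os.environ, **env}
    return subprocess.Popen([executable], env=merged)

def start_in_order() -> None:
    # Start Workgroup Manager first, then MathCAD, mirroring the "proper order"
    # idea in the presentation; a real script would poll readiness rather than sleep.
    wgm = launch(r"C:\ptc\workgroup_manager\wgm.exe", ENV_OVERRIDES)
    time.sleep(30)
    mathcad = launch(r"C:\ptc\mathcad\mathcad.exe", ENV_OVERRIDES)
    mathcad.wait()
    wgm.terminate()

if __name__ == "__main__":
    start_in_order()
```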
Experiment Management System for the SND Detector
NASA Astrophysics Data System (ADS)
Pugachev, K.
2017-10-01
We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part of the system is access to the experimental databases (configuration, conditions, and metadata). The system is designed in a client-server architecture, and user interaction takes place through a web interface. The server side includes several logical layers: user interface templates; template variables description and initialization; and implementation details. The templates are meant to require as little IT knowledge as possible. Experiment configuration, conditions, and metadata are stored in a database. Node.js, a modern JavaScript framework, has been chosen to implement the server side. A new template engine with an interesting feature has been designed. Part of the system has been put into production. It includes templates for showing and editing the first level trigger configuration and equipment configuration, as well as showing experiment metadata and the experiment conditions data index.
Plan for the Characterization of HIRF Effects on a Fault-Tolerant Computer Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.
2008-01-01
This report presents the plan for the characterization of the effects of high intensity radiated fields on a prototype implementation of a fault-tolerant data communication system. Various configurations of the communication system will be tested. The prototype system is implemented using off-the-shelf devices. The system will be tested in a closed-loop configuration with extensive real-time monitoring. This test is intended to generate data suitable for the design of avionics health management systems, as well as redundancy management mechanisms and policies for robust distributed processing architectures.
NASA Astrophysics Data System (ADS)
Terminanto, A.; Swantoro, H. A.; Hidayanto, A. N.
2017-12-01
Enterprise Resource Planning (ERP) is an integrated information system for managing the business processes of companies of various scales. Because of the high cost of ERP investment, ERP implementation is usually undertaken in large-scale enterprises, and due to the complexity of implementation problems, the success rate of ERP implementation is still low. Open source software (OSS) ERP has become an alternative choice for SME companies in terms of cost and customization. This study aims to identify the characteristics and configuration of an OSS ERP payroll module implementation at KKPS (Employee Cooperative PT SRI) using OSS ERP Odoo and the ASAP method. The study is classified as both case study research and action research. Implementation of the OSS ERP payroll module was undertaken because the HR section of KKPS had not been integrated with other parts of the cooperative. The results of this study are the characteristics and configuration of the OSS ERP payroll module at KKPS.
Understanding managerial behaviour during initial steps of a clinical information system adoption
2011-01-01
Background While the study of the information technology (IT) implementation process and its outcomes has received considerable attention, the examination of the pre-adoption and pre-implementation stages of configurable IT uptake appears largely under-investigated. This paper explores managerial behaviour during the periods prior to the effective implementation of a clinical information system (CIS) by two Canadian university multi-hospital centers. Methods Adopting a structurationist theoretical stance and a case study research design, the processes by which CIS managers' patterns of discourse contribute to the configuration of the new technology in their respective organizational contexts were longitudinally examined over 33 months. Results Although managers seemed to be aware of the risks and organizational impact of the adoption of a new clinical information system, their decisions and actions over the periods examined appeared rather to be driven by financial constraints and power struggles between the different groups involved in the process. Furthermore, they largely emphasized technological aspects of the implementation, with organizational dimensions being put aside. In view of these results, the notion of 'rhetorical ambivalence' is proposed. Results are further discussed in relation to the significance of initial decisions and actions for the subsequent implementation phases of the technology being configured. Conclusions Theoretically and empirically grounded, the paper contributes to the underdeveloped body of literature on information system pre-implementation processes by revealing the crucial role played by managers during the initial phases of a CIS adoption. PMID:21682885
Software control and system configuration management - A process that works
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1983-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Software control and system configuration management: A systems-wide approach
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1984-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to insure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
FY 95 engineering work plan for the design reconstitution implementation action plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bigbee, J.D.
Design reconstitution work is to be performed as part of an overall effort to upgrade Configuration Management (CM) at TWRS. WHC policy is to implement a program that is compliant with DOE-STD-1073-93, Guide for Operational Configuration Management Program. DOE-STD-1073 requires an adjunct program for reconstituting design information. WHC-SD-WM-CM-009, Design Reconstitution Program Plan for Waste Tank Farms and 242-A Evaporator of Tank Waste Remediation System, is the TWRS plan for meeting DOE-STD-1073 design reconstitution requirements. The design reconstitution plan is complex, requiring significant time and effort for implementation. In order to control costs and integrate the work into other TWRS activities, a Design Reconstitution Implementation Action Plan (DR IAP) will be developed and approved by those organizations having ownership or functional interest in this activity.
GI-conf: A configuration tool for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.
2009-04-01
In this work we present a configuration tool for the GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering, and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the status of the queries to the caller by implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. Two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. These include international standards like the OGC Web Services (i.e., OGC CSW, WCS, WFS, and WMS), as well as interoperability arrangements (i.e., community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. It is a GUI application that employs a visual and very simple approach to configure both the GI-cat publishing and distribution capabilities in a dynamic way. The tool allows the user to set up one or more GI-cat configurations. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e., services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; in addition, a new resource can be added, and an already federated resource can be updated or removed. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the current active GI-cat configuration, including the resources that are being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.
An ontology-based semantic configuration approach to constructing Data as a Service for enterprises
NASA Astrophysics Data System (ADS)
Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi
2016-03-01
To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations, to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for the enterprise through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution that facilitates data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.
GSC configuration management plan
NASA Technical Reports Server (NTRS)
Withers, B. Edward
1990-01-01
The tools and methods used for the configuration management of the artifacts (including software and documentation) associated with the Guidance and Control Software (GCS) project are described. The GCS project is part of a software error studies research program. Three implementations of GCS are being produced in order to study the fundamental characteristics of the software failure process. The Code Management System (CMS) is used to track and retrieve versions of the documentation and software. Application of the CMS for this project is described and the numbering scheme is delineated for the versions of the project artifacts.
YAMM - Yet Another Menu Manager
NASA Technical Reports Server (NTRS)
Mazer, Alan S.; Weidner, Richard J.
1991-01-01
Yet Another Menu Manager (YAMM) is an application-independent menuing software package designed to remove much of the difficulty, and save much of the time, inherent in implementing front ends for large software packages. Provides a complete menuing front end for a wide variety of applications, with provisions for independence from specific types of terminals, configurations that meet specific needs of users, and dynamic creation of menu trees. Consists of two parts: a description of the menu configuration and a body of application code. Written in C.
Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R
2006-01-01
In this paper we describe the approach to build a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and directly used to dynamically render frontend forms and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing performance drawbacks of pure EAV implementations to a minimum. The system currently is in use in an electronic patient record with focus on flexibility and a quality management application (www.healthgate.at) with high performance requirements.
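As a minimal, self-contained sketch of the "mixed" EAV idea described above — stable fields in a conventional table, flexible clinical attributes in an entity-attribute-value table — with all table, column, and attribute names invented for illustration (this is not the cited system's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conventional table for rarely changing, frequently queried data.
cur.execute("""CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    birth_year INTEGER)""")

# EAV table for flexible, per-study clinical attributes.
cur.execute("""CREATE TABLE observation (
    patient_id INTEGER REFERENCES patient(patient_id),
    attribute  TEXT,   -- e.g. 'hba1c', 'systolic_bp'; governed by a data dictionary
    value      TEXT)""")

cur.execute("INSERT INTO patient VALUES (1, 'Example Patient', 1960)")
cur.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                [(1, "hba1c", "7.2"), (1, "systolic_bp", "138")])

# A query joining the conventional and EAV parts for one attribute.
cur.execute("""SELECT p.name, o.value
               FROM patient p JOIN observation o USING (patient_id)
               WHERE o.attribute = 'hba1c'""")
print(cur.fetchall())   # [('Example Patient', '7.2')]
conn.close()
```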
Management of the Space Station Freedom onboard local area network
NASA Technical Reports Server (NTRS)
Miller, Frank W.; Mitchell, Randy C.
1991-01-01
An operational approach is proposed to managing the Data Management System Local Area Network (LAN) on Space Station Freedom. An overview of the onboard LAN elements is presented first, followed by a proposal of the operational guidelines by which management of the onboard network may be effected. To implement the guidelines, a recommendation is then presented on a set of network management parameters which should be made available in the onboard Network Operating System Computer Software Configuration Item and Fiber Distributed Data Interface firmware. Finally, some implications for the implementation of the various network management elements are discussed.
NASA Technical Reports Server (NTRS)
Pepe, J. T.
1972-01-01
A functional design of a software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation with the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.
NASA Technical Reports Server (NTRS)
Cavanaugh, J.
1994-01-01
This plan describes the methods and procedures Aerojet will follow in the implementation of configuration control for each established baseline. The plan is written in response to the GSFC EOS CM Plan 420-02-02, dated January 1990, and also meets the requirements specified in DOD-STD-480, DOD-D 1000B, MIL-STD-483A, and MIL-STD-490B. The plan establishes the configuration management process to be used for the deliverable hardware, software, and firmware of the EOS/AMSU-A during development, design, fabrication, test, and delivery. This revision includes minor updates to reflect Aerojet's CM policies.
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
On I/O Virtualization Management
NASA Astrophysics Data System (ADS)
Danciu, Vitalian A.; Metzker, Martin G.
The quick adoption of virtualization technology in general and the advent of the Cloud business model entail new requirements on the structure and the configuration of back-end I/O systems. Several approaches to virtualization of I/O links are being introduced, which aim at implementing a more flexible I/O channel configuration without compromising performance. While previously the management of I/O devices could be limited to basic technical requirements (e.g. the establishment and termination of fixed-point links), the additional flexibility carries in its wake additional management requirements on the representation and control of I/O sub-systems.
Furberg, Robert D; Ortiz, Alexa M; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A
2016-06-27
Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. The objective was to develop three protocols to support tablet configuration, tablet management, and tablet maintenance. The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is a mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care-related infections were reviewed to develop the infection control protocol to support tablet maintenance. This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings.
Management system for the SND experiments
NASA Astrophysics Data System (ADS)
Pugachev, K.; Korol, A.
2017-09-01
A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases. These databases contain experiment configuration, conditions, and metadata. The new system is designed in a client-server architecture. It has several logical layers corresponding to the user roles. A new template engine has been created. A web application is implemented using the Node.js framework. At present, the application provides: showing and editing configuration; showing experiment metadata and the experiment conditions data index; and showing the SND log (prototype).
An IEEE 1451.1 Architecture for ISHM Applications
NASA Technical Reports Server (NTRS)
Morris, Jon A.; Turowski, Mark; Schmalzel, John L.; Figueroa, Jorge F.
2007-01-01
The IEEE 1451.1 Standard for a Smart Transducer Interface defines a common network information model for connecting and managing smart elements in control and data acquisition networks using network-capable application processors (NCAPs). The Standard is a network-neutral design model that is easily ported across operating systems and physical networks for implementing complex acquisition and control applications by simply plugging in the appropriate network level drivers. To simplify configuration and tracking of transducer and actuator details, the family of 1451 standards defines a Transducer Electronic Data Sheet (TEDS) that is associated with each physical element. The TEDS contains all of the pertinent information about the physical operations of a transducer (such as operating regions, calibration tables, and manufacturer information), which the NCAP uses to configure the system to support a specific transducer. The Integrated Systems Health Management (ISHM) group at NASA's John C. Stennis Space Center (SSC) has been developing an ISHM architecture that utilizes IEEE 1451.1 as the primary configuration and data acquisition mechanism for managing and collecting information from a network of distributed intelligent sensing elements. This work has involved collaboration with other NASA centers, universities and aerospace industries to develop IEEE 1451.1 compliant sensors and interfaces tailored to support health assessment of complex systems. This paper and presentation describe the development and implementation of an interface for the configuration, management and communication of data, information and knowledge generated by a distributed system of IEEE 1451.1 intelligent elements monitoring a rocket engine test system. In this context, an intelligent element is defined as one incorporating support for the IEEE 1451.x standards and additional ISHM functions. Our implementation supports real-time collection of both measurement data (raw ADC counts and converted engineering units) and health statistics produced by each intelligent element. The handling of configuration, calibration and health information is automated by using the TEDS in combination with other electronic data sheets extensions to convey health parameters. By integrating the IEEE 1451.1 Standard for a Smart Transducer Interface with ISHM technologies, each element within a complex system becomes a highly flexible computation engine capable of self-validation and performing other measures of the quality of information it is producing.
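The binary TEDS layout is defined by the IEEE 1451 standards and is not reproduced in the abstract; the sketch below is only a simplified illustration of the kind of information a TEDS carries and of how an NCAP might apply a calibration table to convert raw counts to engineering units. All field names and values are invented.

```python
from dataclasses import dataclass

@dataclass
class TransducerDataSheet:
    """Simplified stand-in for a TEDS record (fields are illustrative only)."""
    manufacturer: str
    model: str
    units: str
    lower_range: float
    upper_range: float
    calibration: list[tuple[float, float]]  # (raw ADC count, engineering units) pairs

def counts_to_units(teds: TransducerDataSheet, raw: int) -> float:
    """Piecewise-linear interpolation through the calibration table."""
    table = sorted(teds.calibration)
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= raw <= x1:
            return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)
    raise ValueError("raw value outside calibrated range")

# Example: a hypothetical pressure transducer channel on a test stand.
teds = TransducerDataSheet("ExampleCo", "PX-100", "psi", 0.0, 500.0,
                           calibration=[(0, 0.0), (2048, 250.0), (4095, 500.0)])
print(counts_to_units(teds, 1024))  # -> 125.0
```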
A distributed data base management capability for the deep space network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1976-01-01
The Configuration Control and Audit Assembly (CCA), which has been designed to provide a distributed data base management capability for the DSN, is reported. The CCA utilizes capabilities provided by the DSN standard minicomputer and the DSN standard non-real-time, high-level, management-oriented programming language, MBASIC. The characteristics of the CCA for the first phase of implementation are described.
Autonomic Management in a Distributed Storage System
NASA Astrophysics Data System (ADS)
Tauber, Markus
2010-07-01
This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
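The thesis abstract describes policies that adjust configuration parameters in response to observed conditions; purely as a generic illustration of such a monitor-analyse-act loop (not the ASA framework itself, and with all thresholds, metrics, and parameter names invented):

```python
import time

class AutonomicManager:
    """Generic loop that tunes one configuration parameter of a managed system."""

    def __init__(self, system, interval_s: float = 60.0):
        self.system = system        # assumed to expose a metric and a tunable knob
        self.interval_s = interval_s

    def run_once(self) -> None:
        # Monitor: observe a dynamic condition (e.g. retrieval latency under churn).
        latency = self.system.observed_latency()
        # Analyse/plan: a simple threshold policy; real policies could be far richer.
        if latency > 2.0:
            self.system.replication_level = min(self.system.replication_level + 1, 8)
        elif latency < 0.5:
            self.system.replication_level = max(self.system.replication_level - 1, 1)
        # Execute: the new value takes effect on the managed component.

    def run_forever(self) -> None:
        while True:
            self.run_once()
            time.sleep(self.interval_s)
```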
NASA Astrophysics Data System (ADS)
Lisio, Giovanni; Candia, Sante; Campolo, Giovanni; Pascucci, Dario
2011-08-01
Thales Alenia Space Italy has carried out the definition of a configurable (on a mission basis) PUS ECSS-E-70-41A (see [3]) Centralised Services Layer, characterised by a mission-independent set of 'classes' implementing the services logic, and a mission-dependent set of configuration data and selection flags. The software components belonging to this layer implement the PUS standard services of ECSS-E-70-41A and a set of mission-specific services. The design of this layer has been performed by separating the services mechanisms (mission-independent execution logic) from the services configuration information (mission-dependent data). Once instantiated for a specific mission, the PUS Centralised Services Layer offers a large set of capabilities available to the CSCI's Applications Layer. This paper describes the building-block PUS architectural solution developed by Thales Alenia Space Italy, emphasizing the mechanisms which allow easy configuration of the scalable PUS library to fulfill the requirements of different missions. This paper also focuses on the Thales Alenia Space solution to automatically generate the mission-specific "PUS Services" flight software based on mission-specific requirements. Building the PUS services mechanisms, which are configurable on a mission basis, is part of the PRIMA (Multipurpose Spacecraft Bus) 'missionisation' process improvement. The PRIMA Platform Avionics Software (ASW) is continuously evolving to improve modularity and standardization of interfaces and of SW components (see references in [1]).
Furberg, Robert D; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A
2016-01-01
Background Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. Objective Develop three protocols to support tablet configuration, tablet management, and tablet maintenance. Methods The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is a mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care–related infections were reviewed to develop the infection control protocol to support tablet maintenance. Results This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. Conclusions These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings. PMID:27350013
Satellite control system nucleus for the Brazilian complete space mission
NASA Astrophysics Data System (ADS)
Yamaguti, Wilson; Decarvalhovieira, Anastacio Emanuel; Deoliveira, Julia Leocadia; Cardoso, Paulo Eduardo; Dacosta, Petronio Osorio
1990-10-01
The nucleus of the satellite control system for the Brazilian data collecting and remote sensing satellites is described. The system is based on Digital Equipment Computers and the VAX/VMS operating system. The nucleus provides the access control, the system configuration, the event management, history files management, time synchronization, wall display control, and X25 data communication network access facilities. The architecture of the nucleus and its main implementation aspects are described. The implementation experience acquired is considered.
NASA Astrophysics Data System (ADS)
Kumar, M.; Seyednasrollah, B.; Link, T. E.
2013-12-01
In upland snow-fed forested watersheds, where the majority of melt recharge occurs, there is growing interest among water and forest managers in striking a balance between maximizing forest productivity and minimizing impacts on water resources. Implementation of forest management strategies that involve reduction of forest cover generally results in increased water yield and peak flows from forests, which has potentially detrimental consequences including increased erosion, stream destabilization, water shortages in the late melt season, and degradation of water quality and ecosystem health. These ill effects can be partially negated by implementing optimal gap patterns and vegetation densities through forest management, which may minimize net radiation on the snow-covered forest floor (NRSF). A small NRSF can moderate peak flows and increase water availability late in the melt season. Since forest canopies reduce direct solar (0.28-3.5 μm) radiation but increase longwave (3.5-100 μm) radiation at the snow surface, we perform a detailed quantification of the individual radiation components for a range of vegetation densities and gap configurations to identify the optimal vegetation configurations. We also evaluate the role of site location, topographic setting, local meteorological conditions, and vegetation morphological characteristics in the optimal configurations. The results can be used to assist forest managers in quantifying the radiative regime alteration for various thinning and gap-creation scenarios, as a function of latitudinal, topographic, climatic, and vegetation characteristics.
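The abstract does not state its radiation formulation; a standard decomposition of sub-canopy net radiation at the snow surface, written here only to make the shortwave/longwave trade-off concrete, is

```latex
R_{net} \approx \tau_c\,(1-\alpha_s)\,S\!\downarrow
        \;+\;\big[(1-f_c)\,L\!\downarrow_{sky} + f_c\,\epsilon_c\,\sigma T_c^{4}\big]
        \;-\;\epsilon_s\,\sigma T_s^{4}
```

where τ_c is the canopy transmissivity, α_s the snow albedo, f_c the canopy cover fraction, ε_c and ε_s the canopy and snow emissivities, T_c and T_s their temperatures, σ the Stefan-Boltzmann constant, and S↓ and L↓_sky the incoming shortwave and longwave fluxes. Denser canopy lowers the transmitted shortwave term but raises the canopy longwave term, which is the trade-off the optimal gap and density configurations exploit.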
CM Process Improvement and the International Space Station Program (ISSP)
NASA Technical Reports Server (NTRS)
Stephenson, Ginny
2007-01-01
This viewgraph presentation reviews the Configuration Management (CM) process improvements planned and undertaken for the International Space Station Program (ISSP). It reviews the 2004 findings and recommendations and the progress towards their implementation.
NASA Technical Reports Server (NTRS)
1976-01-01
This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers into a low-cost, maintainable configuration.
Study of China green supply chain management policies and standard
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxin; Huang, Jin; Lin, Ling
2017-11-01
With environmental issues coming to the fore, the manufacturing industry needs to be managed environmentally, with integrated methods at the system level. Green supply chain management, which integrates the environmental aspect into each step of supply chain management, is the key measure for improving the efficiency of environmental management and mitigating pollution. It also helps to make the best use and configuration of resources and has been attracting much attention from government, enterprises, and academia in recent years. This paper introduces the definition and content of green supply chain management, summarizes the research progress of domestic scholars, describes the characteristics and achievements of the implementation of green supply chain management in China, and analyzes the currently existing problems and offers suggestions for the future.
NASA Astrophysics Data System (ADS)
Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe
2017-12-01
The proliferation of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power consumption efficiency. However, flexible turbo decoder implementations have not often considered dynamic reconfiguration issues in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.
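The paper's hardware mechanism is not detailed in the abstract; as a generic software illustration of frame-by-frame configuration switching, a double-buffered ("shadow") configuration lets the next frame's parameters be loaded while the current frame is still being decoded. The class and field names below are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DecoderConfig:
    """Per-frame decoder parameters (illustrative fields, not the paper's)."""
    standard: str        # e.g. "LTE" or "DVB-RCS"
    frame_length: int
    iterations: int

class DoubleBufferedDecoder:
    """Keeps an active and a shadow configuration so a switch costs one swap per frame."""

    def __init__(self, initial: DecoderConfig):
        self.active = initial
        self.shadow: Optional[DecoderConfig] = None

    def preload(self, cfg: DecoderConfig) -> None:
        # Called while the current frame is decoding; does not disturb self.active.
        self.shadow = cfg

    def decode_frame(self, frame: bytes) -> bytes:
        # Swap in the preloaded configuration at the frame boundary, if one is pending.
        if self.shadow is not None:
            self.active, self.shadow = self.shadow, None
        # ... iterative decoding with self.active would run here (omitted) ...
        return frame  # placeholder for the decoded payload

decoder = DoubleBufferedDecoder(DecoderConfig("LTE", 6144, 8))
decoder.preload(DecoderConfig("DVB-RCS", 1504, 6))
decoder.decode_frame(b"\x00" * 768)   # this frame decodes with the DVB-RCS settings
```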
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Lee H.; Laros, James H., III
This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.
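The paper's specific tooling is not reproduced in the abstract; as a rough sketch of the underlying idea — many nodes booting from a shared, read-only root filesystem exported over NFS — a generator for export entries and a boot command line might look like the following (the subnet, paths, and export options are placeholders, not Sandia's configuration):

```python
# Illustrative generator for a diskless-cluster NFS setup.
SUBNET = "10.1.0.0/16"
RO_ROOT = "/exports/node-root"        # one shared, read-only root image
RW_BASE = "/exports/rw"               # small per-node writable areas

def exports_lines(node_count: int) -> list[str]:
    """Entries in /etc/exports format: one shared root plus per-node scratch areas."""
    lines = [f"{RO_ROOT} {SUBNET}(ro,no_root_squash,async)"]
    lines += [f"{RW_BASE}/node{n:04d} {SUBNET}(rw,no_root_squash,sync)"
              for n in range(node_count)]
    return lines

def pxe_append(server_ip: str) -> str:
    """Kernel arguments telling a node to mount its root over NFS at boot."""
    return f"root=/dev/nfs nfsroot={server_ip}:{RO_ROOT},ro ip=dhcp"

if __name__ == "__main__":
    print("\n".join(exports_lines(4)))
    print(pxe_append("10.1.0.1"))
```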
2011-09-01
PBL may see changes as the design is actually implemented. Such changes are typically for practical reasons of adapting to either specific... shall use a configuration management approach to establish and control product attributes and the product baseline across the total system life cycle... practice that helps prevent government interference in subcontracts, holds the prime contractor accountable for their end product(s), limits the potential
Systems engineering implementation in the preliminary design phase of the Giant Magellan Telescope
NASA Astrophysics Data System (ADS)
Maiten, J.; Johns, M.; Trancho, G.; Sawyer, D.; Mady, P.
2012-09-01
Like many telescope projects today, the 24.5-meter Giant Magellan Telescope (GMT) is truly a complex system. The primary and secondary mirrors of the GMT are segmented and actuated to support two operating modes: natural seeing and adaptive optics. GMT is a general-purpose telescope supporting multiple science instruments operated in those modes. GMT is a large, diverse collaboration, and development includes geographically distributed teams. The need to implement good systems engineering processes for managing the development of systems like GMT becomes imperative. The management of the requirements flow-down from the science requirements to the component-level requirements is an inherently difficult task in itself. The interfaces must also be negotiated so that the interactions between subsystems and assemblies are well defined and controlled. This paper will provide an overview of the systems engineering processes and tools implemented for the GMT project during the preliminary design phase. This will include requirements management, documentation and configuration control, interface development, and technical risk management. Because of the complexity of the GMT system and the distributed team, using web-accessible tools for collaboration is vital. To accomplish this, GMTO has selected three tools: Cognition Cockpit, Xerox Docushare, and Solidworks Enterprise Product Data Management (EPDM). Key to this is the use of Cockpit for managing and documenting the product tree, architecture, error budget, requirements, interfaces, and risks. Additionally, drawing management is accomplished using an EPDM vault. Docushare, a documentation and configuration management tool, is used to manage the workflow of documents and drawings for the GMT project. These tools electronically facilitate collaboration in real time, enabling the GMT team to track, trace, and report on key project metrics and design parameters.
Adaptive momentum management for large space structures
NASA Technical Reports Server (NTRS)
Hahn, E.
1987-01-01
Momentum management is discussed for a Large Space Structure (LSS), with the selected configuration being the Initial Orbital Configuration (IOC) of the dual-keel space station. The external torques considered were gravity gradient and aerodynamic torques. The goal of the momentum management scheme developed is to remove the bias components of the external torques and center the cyclic components of the stored angular momentum. The scheme investigated is adaptive to uncertainties of the inertia tensor and requires only approximate knowledge of the principal moments of inertia. Computational requirements are minimal and should present no implementation problem in a flight-type computer, and the method proposed is shown to be effective in the presence of attitude control bandwidths as low as 0.01 radian/sec.
Adaptive momentum management for the dual keel Space Station
NASA Technical Reports Server (NTRS)
Hopkins, M.; Hahn, E.
1987-01-01
The report discusses momentum management for a large space structure with the structure selected configuration being the Initial Orbital Configuration of the dual-keel Space Station. The external torques considered were gravity gradient and aerodynamic torques. The goal of the momentum management scheme developed is to remove the bias components of the external torques and center the cyclic components of the stored angular momentum. The scheme investigated is adaptive to uncertainties of the inertia tensor and requires only approximate knowledge of principal moments of inertia. Computational requirements are minimal and should present no implementation problem in a flight-type computer. The method proposed is shown to be effective in the presence of attitude control bandwidths as low as 0.01 radian/sec.
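Neither abstract states the disturbance model explicitly; the standard gravity-gradient torque expression that such momentum management schemes typically work against (included here only as background) is

```latex
\mathbf{T}_{gg} = 3\,\omega_0^{2}\;\hat{\mathbf{c}} \times \big(\mathbf{I}\,\hat{\mathbf{c}}\big)
```

where ω₀ is the orbital rate, I the vehicle inertia tensor, and ĉ the nadir unit vector expressed in body axes. For an Earth-pointing station this torque has bias and cyclic components; the schemes described above remove the bias and center the cyclic part of the stored angular momentum.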
The M68HC11 gripper controller software. Thesis
NASA Technical Reports Server (NTRS)
Tsai, Jodi Wei-Duk
1991-01-01
This thesis discusses the development of firmware for the 68HC11 gripper controller. A general description of the software and hardware interfaces is given. The C library interface for the gripper is then described and followed by a detailed discussion of the software architecture of the firmware. A procedure to assemble and download 68HC11 programs is presented in the form of a tutorial. The tools used to implement this environment are then described. Finally, the implementation of the configuration management scheme used to manage all CIRSSE software is presented.
Joshua Adkins; Christopher Barton; Scott Grubbs; Jeffrey Stringer; Randy Kolka
2016-01-01
Headwater streams generally comprise the majority of stream area in a watershed and can have a strong influence on downstream food webs. Our objective was to determine the effect of altering streamside management zone (SMZ) configurations on headwater aquatic insect communities. Timber harvests were implemented within six watersheds in eastern Kentucky. The SMZ...
Managing EEE part standardisation and procurement
NASA Astrophysics Data System (ADS)
Serieys, C.; Bensoussan, A.; Petitmangin, A.; Rigaud, M.; Barbaresco, P.; Lyan, C.
2002-12-01
This paper presents the development activities in space component selection and procurement dealing with a new database tool implemented at Alcatel Space using the TransForm software configurator developed by Techform S.A. Based on TransForm, Access Ingenierie has developed a software product named OLG@DOS which facilitates part nomenclature analyses for new equipment design and manufacturing in terms of an ACCESS database implementation. Hi-Rel EEE part technical, production and quality information are collected and compiled using a production database issued from the production tools implemented for equipment definition, description and production, based on Manufacturing Resource Planning (MRP II Control Open) and Parametric Design Manager (PDM Work Manager). The analysis of any new equipment nomenclature may be conducted through this means for standardisation purposes, cost containment programmes and procurement management activities, as well as preparation of component reviews such as Part Approval Document and Declared Part List validation.
Evaluating the Potential of Commercial GIS for Accelerator Configuration Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Larrieu; Y.R. Roblin; K. White
2005-10-10
The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an ''electron utility''. Whereas the water utility uses pipes and pumps, the ''electron utility'' uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would crosscut organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.
A Graphical User Interface for the Low Cost Combat Direction System
1991-09-16
the same tasks. These shipboard tasks, which include contact management, moving geometry calculations, intelligence compilation, area plotting and...Display Defaults Analysis: this category covers a wide range of required data input and system configuration issues. To keep the screen display manageable...parts or dialog boxes. The implementation of an Ada application using STARS is quite straightforward, although knowledge of X Protocol primitives is
[Requirements for the successful installation of a data management system].
Benson, M; Junger, A; Quinzio, L; Hempelmann, G
2002-08-01
Due to increasing requirements on medical documentation, especially with reference to the German social legislation mandating quality management and the introduction of a new billing system (DRGs), an increasing number of departments are considering implementing a patient data management system (PDMS). The installation should be professionally planned as a project in order to ensure a successful outcome. The following aspects are essential: composition of the project group, definition of goals, finance, networking, space considerations, hardware, software, configuration, education and support. Project and finance planning must be prepared before beginning the project, and the project process must be constantly evaluated. In selecting the software, certain characteristics should be considered: use of standards, configurability, intercommunicability and modularity. Our experience has taught us that vaguely defined goals, insufficient project planning and the existing management culture are responsible for the failure of PDMS installations; the software used tends to play a less important role.
NASA Technical Reports Server (NTRS)
Doreswamy, Rajiv
1990-01-01
The Marshall Space Flight Center (MSFC) owns and operates a space station module power management and distribution (SSM-PMAD) testbed. This system, managed by expert systems, is used to analyze and develop power system automation techniques for Space Station Freedom. The Lewis Research Center (LeRC), Cleveland, Ohio, has developed and implemented a space station electrical power system (EPS) testbed. This system and its power management controller are representative of the overall Space Station Freedom power system. A virtual link is being implemented between the testbeds at MSFC and LeRC. This link would enable configuration of SSM-PMAD as a load center for the EPS testbed at LeRC. This connection will add to the versatility of both systems, and provide an environment of enhanced realism for operation of both testbeds.
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate, and to meet current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because the decision-making processes for switching or routing are distributed and collocated on the same devices. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software-defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate network management related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform to test and evaluate the network performance provided by various configurations.
Pattern Driven Selection and Configuration of S&D Mechanisms at Runtime
NASA Astrophysics Data System (ADS)
Crespo, Beatriz Gallego-Nicasio; Piñuela, Ana; Soria-Rodriguez, Pedro; Serrano, Daniel; Maña, Antonio
In order to satisfy the requests of SERENITY-aware applications, the SERENITY Runtime Framework’s main task is to perform pattern selection, providing the application with the most suitable S&D Solution that satisfies the request. The result of this selection process depends on two main factors: the content of the S&D Library and the information stored and managed by the Context Manager. Three processes are involved: searching the S&D Library to get the initial set of candidates; filtering and ordering the collection based on the SRF configuration; and performing a loop that checks S&D Pattern preconditions over the remaining S&D Artifacts in order to select the most suitable S&D Pattern first, and later the appropriate S&D Implementation for the environment conditions. Once the S&D Implementation is selected, the SERENITY Runtime Framework instantiates an Executable Component (EC) and provides the application with the necessary information and mechanisms to make use of the EC.
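The selection process above can be read as a search–filter–check loop. The following Python sketch illustrates that control flow only; every class and method name in it (library.search, srf_config.allows, pattern.preconditions_hold, and so on) is a hypothetical stand-in, not the actual SERENITY Runtime Framework API.

    # Illustrative sketch of the selection loop described above; all names
    # here are hypothetical, not the real SERENITY API.
    def select_solution(request, library, context, srf_config):
        # 1. Search the S&D Library for candidate artifacts matching the request.
        candidates = library.search(request)
        # 2. Filter and order the candidates according to the SRF configuration.
        candidates = [c for c in candidates if srf_config.allows(c)]
        candidates.sort(key=srf_config.preference_key)
        # 3. Pick the first S&D Pattern whose preconditions hold in the current
        #    context, then the first implementation suited to the environment.
        for pattern in candidates:
            if not pattern.preconditions_hold(context):
                continue
            for impl in pattern.implementations:
                if impl.suits(context):
                    return impl   # the selected S&D Implementation
        return None               # no suitable solution found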
Human-Technology Centric In Cyber Security Maintenance For Digital Transformation Era
NASA Astrophysics Data System (ADS)
Ali, Firkhan Ali Bin Hamid; Zalisham Jali, Mohd, Dr
2018-05-01
Digital transformation in organizations will continue to expand in the present and coming years, driven by the active demand for ICT services in both government agencies and the private sector. While digital transformation has led manufacturers to incorporate sensors and software analytics into their offerings, the same innovation has also brought pressure to offer clients more accommodating appliance deployment options. A well-defined plan is therefore needed to implement cyber infrastructures and equipment. Cyber security plays an important role in ensuring that ICT components and infrastructures operate well in support of the organization's business success. This paper presents a study of security management models to guide security maintenance of existing cyber infrastructures. To derive a security model for existing cyber infrastructures, elements of the security workforce and the security processes involved in maintaining cyber infrastructures are combined. The assessment focuses on cyber security maintenance within security models for cyber infrastructures and presents an approach for theoretical and practical analysis based on the selected security management models. The proposed model then evaluates this analysis, which can be used to obtain insights into the configuration and to specify desired and undesired configurations. The cyber security maintenance within a security management model was implemented in a prototype and evaluated for practical and theoretical scenarios. Furthermore, a framework model is presented that allows the evaluation of configuration changes in agile and dynamic cyber infrastructure environments with regard to properties like vulnerabilities or expected availability. From a security perspective, this evaluation can be used to monitor the security levels of the configuration over its lifetime and to indicate degradations.
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
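As a rough illustration of the kind of cloud configuration the unified control panel automates, the sketch below uses boto3 to request a user-chosen number of EC2 instances of a selected type and records an S3 bucket as the shared data location. The AMI ID, key pair, instance type, and bucket name are placeholders; this is not the paper's actual JIST/AWS implementation.

    # Hedged sketch only: launch N EC2 nodes of a chosen instance type and
    # note an S3 bucket as the shared, pay-for-use storage resource.
    import boto3

    def launch_processing_nodes(num_nodes, instance_type="m4.xlarge",
                                ami_id="ami-0123456789abcdef0",
                                bucket="jist-shared-data"):
        ec2 = boto3.resource("ec2")
        instances = ec2.create_instances(
            ImageId=ami_id,              # image pre-installed with the pipeline software
            InstanceType=instance_type,  # chosen from a menu of pre-configured node types
            MinCount=num_nodes,
            MaxCount=num_nodes,
            KeyName="jist-keypair",
        )
        # S3 acts as the shared cloud storage accessed by each node.
        s3 = boto3.resource("s3")
        shared = s3.Bucket(bucket)
        return instances, shared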
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-03-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha
2016-01-01
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335
Requirements management for Gemini Observatory: a small organization with big development projects
NASA Astrophysics Data System (ADS)
Close, Madeline; Serio, Andrew; Cordova, Martin; Hardie, Kayla
2016-08-01
Gemini Observatory is an astronomical observatory operating two premier 8m-class telescopes, one in each hemisphere. As an operational facility, a majority of Gemini's resources are spent on operations; however, the observatory undertakes major development projects as well. Current projects include new facility science instruments, an operational paradigm shift to full remote operations, and new operations tools for planning, configuration and change control. Three years ago, Gemini determined that a specialized requirements management tool was needed. Over the next year, the Gemini Systems Engineering Group investigated several tools, selected one for a trial period and configured it for use. Configuration activities included definition of systems engineering processes, development of a requirements framework, and assignment of project roles to tool roles. Test projects were implemented in the tool. At the conclusion of the trial, the group determined that Gemini could meet its requirements management needs without the use of a specialized requirements management tool, and the group identified a number of lessons learned which are described in the last major section of this paper. These lessons learned include how to conduct an organizational needs analysis prior to pursuing a tool; caveats concerning tool criteria and the selection process; the prerequisites and sequence of activities necessary to achieve an optimum configuration of the tool; the need for adequate staff resources and staff training; and a special note regarding organizations in transition and archiving of requirements.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
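To make the distributed-processing split concrete, a client program can delegate query processing to the Oracle server and concentrate only on interpreting and displaying the results. The sketch below uses the python-oracledb driver; the credentials, DSN, table name, and query are invented for illustration and are not part of the referenced report.

    # Hedged sketch: a thin client that leaves data management to the Oracle
    # server and only formats output. Connection details are placeholders.
    import oracledb

    def show_recent_records():
        with oracledb.connect(user="app_user", password="secret",
                              dsn="dbhost.example.com/ORCLPDB1") as conn:
            with conn.cursor() as cur:
                # The server executes the query; the client only displays rows.
                cur.execute("SELECT id, name, updated FROM example_records "
                            "ORDER BY updated DESC FETCH FIRST 10 ROWS ONLY")
                for row in cur:
                    print(row)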
[Lean thinking and brain-dead patient assistance in the organ donation process].
Pestana, Aline Lima; dos Santos, José Luís Guedes; Erdmann, Rolf Hermann; da Silva, Elza Lima; Erdmann, Alacoque Lorenzini
2013-02-01
Organ donation is a complex process that challenges health system professionals and managers. This study aimed to introduce a theoretical model to organize brain-dead patient assistance and the organ donation process guided by the main lean thinking ideas, which enable production improvement through planning cycles and the development of a proper environment for successful implementation. Lean thinking may make the process of organ donation more effective and efficient and may contribute to improvements in information systematization and professional qualifications for excellence of assistance. The model is configured as a reference that is available for validation and implementation by health and nursing professionals and managers in the management of potential organ donors after brain death assistance and subsequent transplantation demands.
NASA Technical Reports Server (NTRS)
1983-01-01
The overall configuration and modules of the initial and evolved space station are described, as well as tended industrial and polar platforms. The mass properties that are the basis for costing are summarized. User-friendly attributes (interfaces, resources, and facilities) are identified for commercial; science and applications; industrial park; international participation; national security; and the external tank option. Configuration alternates studied to determine a baseline are examined. Commonality for clustered 3-man and 9-man stations is considered, as well as the use of tethered platforms. Subsystem requirements are identified for the electrical, data management, communication and tracking, environment control/life support, and guidance, navigation and control subsystems.
NASA Astrophysics Data System (ADS)
1983-04-01
The overall configuration and modules of the initial and evolved space station are described, as well as tended industrial and polar platforms. The mass properties that are the basis for costing are summarized. User-friendly attributes (interfaces, resources, and facilities) are identified for commercial; science and applications; industrial park; international participation; national security; and the external tank option. Configuration alternates studied to determine a baseline are examined. Commonality for clustered 3-man and 9-man stations is considered, as well as the use of tethered platforms. Subsystem requirements are identified for the electrical, data management, communication and tracking, environment control/life support, and guidance, navigation and control subsystems.
Khanassov, Vladimir; Vedel, Isabelle; Pluye, Pierre
2014-01-01
PURPOSE Results of case management designed for patients with dementia and their caregivers in community-based primary health care (CBPHC) were inconsistent. Our objective was to identify the relationships between key outcomes of case management and barriers to implementation. METHODS We conducted a systematic mixed studies review (including quantitative and qualitative studies). Literature search was performed in MEDLINE, PsycINFO, Embase, and Cochrane Library (1995 up to August 2012). Case management intervention studies were used to assess clinical outcomes for patients, service use, caregiver outcomes, satisfaction, and cost-effectiveness. Qualitative studies were used to examine barriers to case management implementation. Patterns in the relationships between barriers to implementation and outcomes were identified using the configurational comparative method. The quality of studies was assessed using the Mixed Methods Appraisal Tool. RESULTS Forty-three studies were selected (31 quantitative and 12 qualitative). Case management had a limited positive effect on behavioral symptoms of dementia and length of hospital stay for patients and on burden and depression for informal caregivers. Interventions that addressed a greater number of barriers to implementation resulted in increased number of positive outcomes. Results suggested that high-intensity case management was necessary and sufficient to produce positive clinical outcomes for patients and to optimize service use. Effective communication within the CBPHC team was necessary and sufficient for positive outcomes for caregivers. CONCLUSIONS Clinicians and managers who implement case management in CBPHC should take into account high-intensity case management (small caseload, regular proactive patient follow-up, regular contact between case managers and family physicians) and effective communication between case managers and other CBPHC professionals and services. PMID:25354410
Khanassov, Vladimir; Vedel, Isabelle; Pluye, Pierre
2014-01-01
Results of case management designed for patients with dementia and their caregivers in community-based primary health care (CBPHC) were inconsistent. Our objective was to identify the relationships between key outcomes of case management and barriers to implementation. We conducted a systematic mixed studies review (including quantitative and qualitative studies). Literature search was performed in MEDLINE, PsycINFO, Embase, and Cochrane Library (1995 up to August 2012). Case management intervention studies were used to assess clinical outcomes for patients, service use, caregiver outcomes, satisfaction, and cost-effectiveness. Qualitative studies were used to examine barriers to case management implementation. Patterns in the relationships between barriers to implementation and outcomes were identified using the configurational comparative method. The quality of studies was assessed using the Mixed Methods Appraisal Tool. Forty-three studies were selected (31 quantitative and 12 qualitative). Case management had a limited positive effect on behavioral symptoms of dementia and length of hospital stay for patients and on burden and depression for informal caregivers. Interventions that addressed a greater number of barriers to implementation resulted in increased number of positive outcomes. Results suggested that high-intensity case management was necessary and sufficient to produce positive clinical outcomes for patients and to optimize service use. Effective communication within the CBPHC team was necessary and sufficient for positive outcomes for caregivers. Clinicians and managers who implement case management in CBPHC should take into account high-intensity case management (small caseload, regular proactive patient follow-up, regular contact between case managers and family physicians) and effective communication between case managers and other CBPHC professionals and services. © 2014 Annals of Family Medicine, Inc.
Tools to manage the enterprise-wide picture archiving and communications system environment.
Lannum, L M; Gumpf, S; Piraino, D
2001-06-01
The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.
Computer-Managed Instruction: Theory, Application, and Some Key Implementation Issues.
1984-03-01
who have endorsed computer technology but fail to adopt it. As one educational consultant claims: "Educators appear to have a deep-set skepticism toward...widespread use." II. BACKGROUND. A. HISTORICAL PERSPECTIVE. In the mid-1950's, while still in its infancy, computer technology entered the world of education...to utilize the new technology, and to do it most extensively. Implementation of CMI in a standalone configuration using microcomputers has been
High resolution microwave spectrometer sounder (HIMSS), volume 1, book 2
NASA Technical Reports Server (NTRS)
1990-01-01
The following topics are presented with respect to the high resolution microwave spectrometer sounder (HIMSS) that is to be used as an instrument for NASA's Earth Observing System (EOS): (1) preliminary program plans; (2) contract end item (CEI) specification; and (3) the instrument interface description document. Under the preliminary program plans section, plans dealing with the following subject areas are discussed: spares, performance assurance, configuration management, software implementation, contamination, calibration management, and verification.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing rapidly changing business environments, implementation of flexible business processes is crucial but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand according to business requirements and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform for virtualising enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services in larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to support configurable and on-demand data access during business process execution. CIRPA also isolates transaction data from the business process while supporting composition of diverse business processes. Finally, a case study applying our method to a logistics application shows that CIRPA provides enhanced performance in both static service encapsulation and dynamic service execution in a cloud computing environment.
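To give a flavour of what exposing a data-centric resource as a coarse-grained REST service might look like, the following Flask sketch serves an order resource by ID. The framework choice, route names, and in-memory data are purely illustrative; they are not the CIRPA implementation.

    # Hedged sketch of a data-centric REST service in the spirit described
    # above; Flask and all names are illustrative only, not part of CIRPA.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in for a configurable information resource pool.
    ORDERS = {"1001": {"id": "1001", "status": "shipped", "items": 3}}

    @app.route("/orders/<order_id>", methods=["GET"])
    def get_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(order)   # resource returned at a coarse granularity

    if __name__ == "__main__":
        app.run(port=8080)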
Configuration Management Plan for the Tank Farm Contractor
DOE Office of Scientific and Technical Information (OSTI.GOV)
WEIR, W.R.
The Configuration Management Plan for the Tank Farm Contractor describes configuration management the contractor uses to manage and integrate its technical baseline with the programmatic and functional operations to perform work. The Configuration Management Plan for the Tank Farm Contractor supports the management of the project baseline by providing the mechanisms to identify, document, and control the technical characteristics of the products, processes, and structures, systems, and components (SSC). This plan is one of the tools used to identify and provide controls for the technical baseline of the Tank Farm Contractor (TFC). The configuration management plan is listed in the management process documents for TFC as depicted in Attachment 1, TFC Document Structure. The configuration management plan is an integrated approach for control of technical, schedule, cost, and administrative processes necessary to manage the mission of the TFC. Configuration management encompasses the five functional elements of: (1) configuration management administration, (2) configuration identification, (3) configuration status accounting, (4) change control, and (5) configuration management assessments.
78 FR 35033 - Privacy Act of 1974; Notice of an Updated System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
...) and Fort Worth (Region 7) as well as Cloud components as part of GSA's implementation of Google Apps... Management and Budget (OMB) when necessary to the review of private relief legislation pursuant to OMB... configured in the application by the program office for their program requirements. SAFEGUARDS: Cloud systems...
Configuring School Image Assets of Colleges in Taiwan
ERIC Educational Resources Information Center
Lee, Chia Kun; Chen, Hsin Chu
2018-01-01
Higher education in Taiwan faces various challenges, such as the low birth rate, blurred positioning, and lack of marketing concepts. In order to remain sustainable, more effective strategies, actions, and resources should be implemented to enhance the services of colleges and universities. Therefore, image asset management becomes a critical starting point. This study aims…
Configuration Management of an Optimization Application in a Research Environment
NASA Technical Reports Server (NTRS)
Townsend, James C.; Salas, Andrea O.; Schuler, M. Patricia
1999-01-01
Multidisciplinary design optimization (MDO) research aims to increase interdisciplinary communication and reduce design cycle time by combining system analyses (simulations) with design space search and decision making. The High Performance Computing and Communication Program's current High Speed Civil Transport application, HSCT4.0, at NASA Langley Research Center involves a highly complex analysis process with high-fidelity analyses that are more realistic than previous efforts at the Center. The multidisciplinary processes have been integrated to form a distributed application by using the Java language and Common Object Request Broker Architecture (CORBA) software techniques. HSCT4.0 is a research project in which both the application problem and the implementation strategy have evolved as the MDO and integration issues became better understood. Whereas earlier versions of the application and integrated system were developed with a simple, manual software configuration management (SCM) process, it was evident that this larger project required a more formal SCM procedure. This report briefly describes the HSCT4.0 analysis and its CORBA implementation and then discusses some SCM concepts and their application to this project. In anticipation that SCM will prove beneficial for other large research projects, the report concludes with some lessons learned in overcoming SCM implementation problems for HSCT4.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, R.
RESTful (REpresentational State Transfer) web services are an alternative implementation to SOAP/RPC web services in a client/server model. BNL's IT Division has started deploying RESTful web services for enterprise data retrieval and manipulation. The data are currently used by system administrators for tracking configuration information and, as the service is expanded, will be used by Cyber Security for vulnerability management and as an aid to cyber investigations. This talk will describe the implementation and outstanding issues, as well as some of the reasons for choosing RESTful over SOAP/RPC and future directions.
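A minimal client-side sketch of this style of RESTful configuration retrieval and update follows; the base URL, endpoint paths, and JSON fields are hypothetical and do not describe BNL's actual service.

    # Hedged sketch: retrieve and update host configuration records over REST.
    # The base URL, paths and fields are invented for illustration.
    import requests

    BASE = "https://config.example.gov/api"

    def get_host_config(hostname):
        resp = requests.get(f"{BASE}/hosts/{hostname}", timeout=10)
        resp.raise_for_status()
        return resp.json()

    def update_host_config(hostname, changes):
        # PUT a modified representation back, in keeping with the REST model.
        current = get_host_config(hostname)
        current.update(changes)
        resp = requests.put(f"{BASE}/hosts/{hostname}", json=current, timeout=10)
        resp.raise_for_status()
        return resp.json()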
Dynamic Communication Resource Negotiations
NASA Technical Reports Server (NTRS)
Chow, Edward; Vatan, Farrokh; Paloulian, George; Frisbie, Steve; Srostlik, Zuzana; Kalomiris, Vasilios; Apgar, Daniel
2012-01-01
Today's advanced network management systems can automate many aspects of the tactical networking operations within a military domain. However, automation of joint and coalition tactical networking across multiple domains remains challenging. Due to potentially conflicting goals and priorities, human agreement is often required before implementation into the network operations. This is further complicated by incompatible network management systems and security policies, rendering it difficult to implement automatic network management and thus requiring manual human intervention in the communication protocols used at various network routers and endpoints. This process of manual human intervention is tedious, error-prone, and slow. In order to facilitate a better solution, we are pursuing a technology which makes network management automated, reliable, and fast. Automating the negotiation of the common network communication parameters between different parties is the subject of this paper. We present the technology that enables inter-force dynamic communication resource negotiations to enable ad-hoc inter-operation in the field between force domains, without pre-planning. It also will enable a dynamic response to changing conditions within the area of operations. Our solution enables the rapid blending of intra-domain policies so that the forces involved are able to inter-operate effectively without overwhelming each other's networks with inappropriate or unwarranted traffic. It will evaluate the policy rules and configuration data for each of the domains, then generate a compatible inter-domain policy and configuration that will update the gateway systems between the two domains.
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.
autokonf - A Configuration Script Generator Implemented in Perl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reus, J F
This paper discusses configuration scripts in general and the scripting language issues involved. A brief description of GNU autoconf is provided along with a contrasting overview of autokonf, a configuration script generator implemented in Perl, whose macros are implemented in Perl, generating a configuration script in Perl. It is very portable, easily extensible, and readily mastered.
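Although autokonf itself is written in Perl, the general idea of a configuration script generator — probe the build environment, then emit a configuration file that the build can read — can be sketched generically. The Python sketch below is only a generic illustration of that autoconf-style probing; it does not reproduce autokonf's macros or output format, and the probe names are arbitrary.

    # Generic illustration of autoconf/autokonf-style probing: check for tools
    # and headers, then write the findings to a configuration file.
    import shutil, subprocess

    def have_program(name):
        return shutil.which(name) is not None

    def have_header(header, cc="cc"):
        # Try to compile a one-line program that includes the header.
        src = f"#include <{header}>\nint main(void){{return 0;}}\n"
        proc = subprocess.run([cc, "-x", "c", "-", "-o", "/dev/null"],
                              input=src, text=True, capture_output=True)
        return proc.returncode == 0

    def write_config(path="config.status"):
        results = {
            "HAVE_CC": have_program("cc"),
            "HAVE_PERL": have_program("perl"),
            "HAVE_STDIO_H": have_header("stdio.h"),
        }
        with open(path, "w") as f:
            for key, ok in results.items():
                f.write(f"{key}={'yes' if ok else 'no'}\n")
        return results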
DOE Office of Scientific and Technical Information (OSTI.GOV)
RIECK, C.A.
1999-02-23
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.
Throughput Benefit Assessment for Tactical Runway Configuration Management (TRCM)
NASA Technical Reports Server (NTRS)
Phojanamongkolkij, Nipa; Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Fenbert, James W.
2014-01-01
The System-Oriented Runway Management (SORM) concept is a collection of needed capabilities focused on a more efficient use of runways while considering all of the factors that affect runway use. Tactical Runway Configuration Management (TRCM), one of the SORM capabilities, provides runway configuration and runway usage recommendations, monitoring the active runway configuration for suitability given existing factors, based on a 90 minute planning horizon. This study evaluates the throughput benefits using a representative sample of today's traffic volumes at three airports: Memphis International Airport (MEM), Dallas-Fort Worth International Airport (DFW), and John F. Kennedy International Airport (JFK). Based on this initial assessment, there are statistical throughput benefits for both arrivals and departures at MEM with an average of 4% for arrivals, and 6% for departures. For DFW, there is a statistical benefit for arrivals with an average of 3%. Although there is an average of 1% benefit observed for departures, it is not statistically significant. For JFK, there is a 12% benefit for arrivals, but a 2% penalty for departures. The results obtained are for current traffic volumes and should show greater benefit for increased future demand. This paper also proposes some potential TRCM algorithm improvements for future research. A continued research plan is being worked to implement these improvements and to re-assess the throughput benefit for today and future projected traffic volumes.
Sidek, Yusof Haji; Martins, Jorge Tiago
2017-11-01
Electronic health records (EHR) make health care more efficient. They improve the quality of care by making patients' medical history more accessible. However, little is known about the factors contributing to successful EHR implementation in dental clinics. This article aims to identify the perceived critical success factors of EHR system implementation in a dental clinic context. We used Grounded Theory to analyse data collected in the context of Brunei's national EHR - the Healthcare Information and Management System (Bru-HIMS). Data analysis followed the stages of open, axial and selective coding. Six perceived critical success factors emerged: usability of the system, emergent behaviours, requirements analysis, training, change management, and project organisation. The study identified a mismatch between end-user and product owner/vendor perspectives. Workflow changes were significant challenges to clinicians' confident use, particularly as the system offered limited modularity and configurability. Recommendations are made for all the parties involved in healthcare information systems implementation to manage the change process by agreeing on system goals and functionalities through wider consensual debate, and through supporting strategies realised through common commitment. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
SUMC fault tolerant computer system
NASA Technical Reports Server (NTRS)
1980-01-01
The results of the trade studies are presented. These trades cover: establishing the basic configuration, establishing the CPU/memory configuration, establishing an approach to crosstrapping interfaces, defining the requirements of the redundancy management unit (RMU), establishing a spare plane switching strategy for the fault-tolerant memory (FTM), and identifying the most cost-effective way of extending the memory addressing capability beyond the 64 K-bytes (K=1024) of SUMC-II B. The results of the design are compiled in the Contract End Item (CEI) Specification for the NASA Standard Spacecraft Computer II (NSSC-II), IBM 7934507. The implementation of the FTM and the memory address expansion is also described.
Development and implementation of a PACS network and resource manager
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.
1992-07-01
Clinical acceptance of PACS is predicated upon maximum uptime. Upon component failure, detection, diagnosis, reconfiguration and repair must occur immediately. Our current PACS network is large, heterogeneous, complex and wide-spread geographically. The overwhelming number of network devices, computers and software processes involved in a departmental or inter-institutional PACS makes development of tools for network and resource management critical. The authors have developed and implemented a comprehensive solution (PACS Network-Resource Manager) using the OSI Network Management Framework with network element agents that respond to queries and commands for network management stations. Managed resources include: communication protocol layers for Ethernet, FDDI and UltraNet; network devices; computer and operating system resources; and application, database and network services. The Network-Resource Manager is currently being used for warning, fault, security violation and configuration modification event notification. Analysis, automation and control applications have been added so that PACS resources can be dynamically reconfigured and so that users are notified when active involvement is required. Custom data and error logging have been implemented that allow statistics for each PACS subsystem to be charted for performance data. The Network-Resource Manager allows our departmental PACS system to be monitored continuously and thoroughly, with a minimal amount of personal involvement and time.
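A vastly simplified sketch of the manager-side polling described above is given below: a management station queries each managed node for reachability, logs status, and raises an alert on failure. The hostnames, the probe method, and the alerting hook are placeholders; the actual Network-Resource Manager and its OSI-framework agents are not shown.

    # Hedged sketch of a manager-station poll loop over managed PACS nodes.
    # Hostnames, ports and thresholds are illustrative placeholders.
    import socket, time

    NODES = ["archive1.example.edu", "display3.example.edu", "acq2.example.edu"]

    def node_reachable(host, port=104, timeout=2.0):
        # Simple reachability probe of a service port on a managed node.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def poll_once(log):
        for host in NODES:
            status = "up" if node_reachable(host) else "DOWN"
            log.append((time.time(), host, status))
            if status == "DOWN":
                print(f"ALERT: {host} not responding")  # operator notification hook

    if __name__ == "__main__":
        event_log = []
        poll_once(event_log)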
Carrier Based Air Logistics Study: Maintenance Analysis.
1982-01-01
...Management System; AECL, Avionics Equipment Configuration List; AIMD, Aircraft Intermediate Maintenance Department; ASO, Aviation Supply Office; ASW...implementation. Component-specific data, and indentured[2] relationships between components extracted from the Aviation Supply Office (ASO) weapon
2013-03-20
As a result, DoD managers could not differentiate for management purposes the value of Transfers – Current-Year Authority Transfers In and Transfers...under the Military Munitions Response Program...Estimated Cleanup Cost Liability – Other Accrued Environmental Liability, Active...Information Structure Transaction Library posting logic needed to report its financial data properly." OUSD(C) RESPONSE: Partially Concur. The
Configuration and Data Management Process and the System Safety Professional
NASA Technical Reports Server (NTRS)
Shivers, Charles Herbert; Parker, Nelson C. (Technical Monitor)
2001-01-01
This article presents a discussion of the configuration management (CM) and the Data Management (DM) functions and provides a perspective of the importance of configuration and data management processes to the success of system safety activities. The article addresses the basic requirements of configuration and data management generally based on NASA configuration and data management policies and practices, although the concepts are likely to represent processes of any public or private organization's well-designed configuration and data management program.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Person, L. H., Jr.
1981-01-01
NASA developed, implemented, and flight tested a flight management algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control. This algorithm provides a 3D path with time control (4D) for the TCV B-737 airplane to make an idle-thrust, clean-configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described and flight test results are presented.
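A greatly simplified, hedged sketch of the kind of backward path calculation involved is shown below: given the metering-fix crossing conditions, estimate the top-of-descent distance and time for a constant-airspeed idle descent. The descent rate and groundspeed values are invented placeholders and do not reproduce the TCV B-737 performance approximations, Mach/airspeed schedule handling, or atmosphere corrections in the actual algorithm.

    # Hedged, highly simplified top-of-descent estimate; constants are invented.
    def top_of_descent(cruise_alt_ft, fix_alt_ft, descent_rate_fpm=2500,
                       groundspeed_kt=320, wind_kt=0):
        alt_to_lose = cruise_alt_ft - fix_alt_ft            # feet to descend
        descent_time_min = alt_to_lose / descent_rate_fpm   # minutes of descent
        gs = groundspeed_kt + wind_kt                        # tailwind positive
        descent_dist_nm = gs * descent_time_min / 60.0       # nautical miles
        return descent_dist_nm, descent_time_min

    # Example: descend from 35,000 ft to a 10,000 ft metering fix.
    dist_nm, minutes = top_of_descent(35000, 10000)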
Spatial strategies for managing visitor impacts in National Parks
Leung, Y.-F.; Marion, J.L.
1999-01-01
Resource and social impacts caused by recreationists and tourists have become a management concern in national parks and equivalent protected areas. The need to contain visitor impacts within acceptable limits has prompted park and protected area managers to implement a wide variety of strategies and actions, many of which are spatial in nature. This paper classifies and illustrates the basic spatial strategies for managing visitor impacts in parks and protected areas. A typology of four spatial strategies was proposed based on the recreation and park management literature. Spatial segregation is a common strategy for shielding sensitive resources from visitor impacts or for separating potentially conflicting types of use. Two forms of spatial segregation are zoning and closure. A spatial containment strategy is intended to minimize the aggregate extent of visitor impacts by confining use to limited designated or established locations. In contrast, a spatial dispersal strategy seeks to spread visitor use, reducing the frequency of use to levels that avoid or minimize permanent resource impacts or visitor crowding and conflict. Finally, a spatial configuration strategy minimizes impacting visitor behavior through the judicious spatial arrangement of facilities. These four spatial strategies can be implemented separately or in combination at varying spatial scales within a single park. A survey of national park managers provides an empirical example of the diversity of implemented spatial strategies in managing visitor impacts. Spatial segregation is frequently applied in the form of camping restrictions or closures to protect sensitive natural or cultural resources and to separate incompatible visitor activities. Spatial containment is the most widely applied strategy for minimizing the areal extent of resource impacts. Spatial dispersal is commonly applied to reduce visitor crowding or conflicts in popular destination areas but is less frequently applied or effective in minimizing resource impacts. Spatial configuration was only minimally evaluated, as it was not included in the survey. The proposed typology of spatial strategies offers a useful means of organizing and understanding the wide variety of management strategies and actions applied in managing visitor impacts in parks and protected areas. Examples from U.S. national parks demonstrate the diversity of these basic strategies and their flexibility in implementation at various spatial scales. Documentation of these examples helps illustrate their application and inform managers of the multitude of options. Further analysis from the spatial perspective is needed to extend the applicability of this typology to other recreational activities and management issues.
Formian 2 and a Formian Function for Processing Polyhedric Configurations
NASA Technical Reports Server (NTRS)
Nooshin, H.; Disney, P. L.; Champion, O. C.
1996-01-01
The work began in October 1994 with the following objectives: (1) to produce an improved version of the programming language Formian; and (2) to create a means for computer aided handling of polyhedric configurations including the geodesic forms of all kinds. A new version of Formian, referred to as Formian 2, is being implemented to operate in the Windows 95 environment. It is an ideal tool for configuration management in a convenient and user-friendly manner. The second objective was achieved by creating a standard Formian function that allows convenient handling of all types of polyhedric configurations. In particular, the focus of attention is on polyhedric configurations that are of importance in architectural and structural engineering fields. The natural medium for processing of polyhedric configurations is a programming language that incorporates the concepts of 'formex algebra'. Formian is such a programming language in which the processing of polyhedric configurations can be carried out using the standard elements of the language. A description of this function is included in a chapter for a book entitled 'Beyond the Cube: the Architecture of space Frames and Polyhedra'. A copy of this chapter is appended.
Lalleman, P C B; Smid, G A C; Lagerwey, M D; Shortridge-Baggett, L M; Schuurmans, M J
2016-11-01
Nurse managers play an important role in implementing patient safety practices in hospitals. However, the influence of their professional background on their clinical leadership behaviour remains unclear. Research has demonstrated that concepts of Bourdieu (dispositions of habitus, capital and field) help to describe this influence. It revealed various configurations of dispositions of the habitus in which a caring disposition plays a crucial role. We explore how the caring disposition of nurse middle managers' habitus influences their clinical leadership behaviour in patient safety practices. Our paper reports the findings of a Bourdieusian, multi-site, ethnographic case study. The settings were two Dutch and two American acute care, mid-sized, non-profit hospitals. Participants were a total of 16 nurse middle managers of adult care units. Data were collected through 560 hours of observation shadowing nurse middle managers, semi-structured interviews and member check meetings with the participants. We observed three distinct configurations of dispositions of the habitus which influenced the clinical leadership of nurse middle managers in patient safety practices; they all include a caring disposition: (1) a configuration with a dominant caring disposition that was helpful (via solving urgent matters) and hindering (via ad hoc and reactive actions, leading to quick fixes and 'compensatory modes'); (2) a configuration with an interaction of caring and collegial dispositions that led to an absence of clinical involvement and discouraged patient safety practices; and (3) a configuration with a dominant scientific disposition showing an investigative, non-judging, analytic stance, a focus on evidence-based practice that curbs the ad hoc repertoire of the caring disposition. The dispositions of the nurse middle managers' habitus influenced their clinical leadership in patient safety practices. A dominance of the caring disposition, which meant 'always' answering calls for help and reactive and ad hoc reactions, did not support the clinical leadership role of nurse middle managers. By perceiving the team of staff nurses as pseudo-patients, patient safety practice was jeopardized because of erosion of the clinical disposition. The nurse middle managers' clinical leadership was enhanced by leadership behaviour based on the clinical and scientific dispositions that was manifested through an investigative, non-judging, analytic stance, a focus on evidence-based practice and a curbed caring disposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
Technical implementation of an Internet address database with online maintenance module.
Mischke, K L; Bollmann, F; Ehmer, U
2002-01-01
The article describes the technical implementation and management of the Internet address database of the center for ZMK (University of Münster, Dental School), which is integrated in the "ZMK-Web" website. The editorially maintained system stays up to date primarily through an electronically organized division of work, supported by an online maintenance module and a database-driven feedback function that offers website visitors configuration-independent direct-mail windows, both programmed in JavaScript/PHP.
MSAT signalling and network management architectures
NASA Technical Reports Server (NTRS)
Garland, Peter; Keelty, J. Malcolm
1989-01-01
Spar Aerospace has been active in the design and definition of Mobile Satellite Systems since the mid-1970s. In work sponsored by the Canadian Department of Communications, various payload configurations have evolved. In addressing the payload configuration, the requirements of the mobile user, the service provider and the satellite operator have always been the most important considerations. The current Spar 11-beam satellite design is reviewed, and its capabilities to provide flexibility and potential for network growth within the WARC-87 allocations are explored. To enable the full capabilities of the payload to be realized, a large amount of ground-based switching and network management infrastructure will be required when space segment becomes available. Early indications were that a single custom-designed Demand Assignment Multiple Access (DAMA) switch should be implemented to provide efficient use of the space segment. As MSAT has evolved into a multiple-service concept supporting many service providers, this architecture should be reviewed. Some possible signalling and network management solutions are explored.
NASA Technical Reports Server (NTRS)
Allard, Dan; Deforrest, Lloyd
2014-01-01
Flight software parameters give space mission operators fine-tuned control over flight system configurations, enabling rapid and dynamic changes to ongoing science activities in a much more flexible manner than can be accomplished with (otherwise broadly used) configuration-file-based approaches. The Mars Science Laboratory (MSL) rover, Curiosity, makes extensive use of parameters to support complex, daily activities via commanded changes to those parameters in memory. However, as the loss of Mars Global Surveyor (MGS) in 2006 demonstrated, flight system management by parameters brings with it risks, including the possibility of losing track of the flight system configuration and the threat of invalid command executions. To mitigate this risk a growing number of missions, including MSL and the Soil Moisture Active Passive (SMAP) mission, have funded efforts to implement parameter state tracking software tools and services. This paper will discuss the engineering challenges and resulting software architecture of MSL's onboard parameter state tracking software and discuss the road forward to make parameter management tools suitable for use on multiple missions.
Development of a high capacity toroidal Ni/Cd cell
NASA Technical Reports Server (NTRS)
Holleck, G. L.; Foos, J. S.; Avery, J. W.; Feiman, V.
1981-01-01
A nickel-cadmium battery design that can offer better thermal management, higher energy density and much lower cost than the state of the art is emphasized. A toroidal Ni/Cd cell concept is described; it was critically reviewed and used to develop two cell designs for practical implementation: one a double-swaged configuration and the other a swaged-welded configuration.
Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations
NASA Astrophysics Data System (ADS)
Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.
2010-11-01
We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible, lightweight component model that streamlines the integration of stand-alone codes into coupled simulations. Stand-alone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation, configuration, task, and data management, asynchronous event management, simulation monitoring, and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
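The thin Python wrapping layer is the part of this design most easily illustrated. The sketch below is a minimal, hypothetical adapter in that spirit: the class, the init/step/finalize methods and the services calls (stage_input_files, update_plasma_state, archive_output_files) are invented for illustration and are not the actual IPS component interface.

import subprocess

class StandaloneCodeComponent:
    """Adapts a stand-alone executable to a simple init/step/finalize interface."""

    def __init__(self, services, config):
        self.services = services          # framework services (hypothetical object)
        self.exe = config["executable"]   # path to the legacy code
        self.input_file = config["input_file"]

    def init(self, timestamp):
        # Stage input data through the (hypothetical) data-management service.
        self.services.stage_input_files(self.input_file)

    def step(self, timestamp):
        # Launch the legacy code as an external task for this time step.
        subprocess.run([self.exe, self.input_file, str(timestamp)], check=True)
        # Publish results to the shared plasma-state layer (illustrative call).
        self.services.update_plasma_state()

    def finalize(self, timestamp):
        self.services.archive_output_files()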
Router Agent Technology for Policy-Based Network Management
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston
2011-01-01
This innovation can be run as a standalone network application on any computer in a networked environment. It can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors that were encountered. The innovation also automatically generates a time-stamped log file of all updates to the router it is configured to control. This technology, once installed on a local network computer and started, is autonomous: it keeps listening for new policies from the policy server, transforms those policies into router-compliant access lists, and commits those access lists to a specified interface on the specified router on the network, with error feedback on the commit process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for Cisco routers.
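As a rough illustration of the policy-to-ACL transformation described above (not the actual RouterAgent code), the sketch below turns a simple rule list into Cisco-style extended access-list statements; the rule format is invented for the example.

def policy_to_acl(acl_id, rules):
    """rules: list of dicts like {"action": "deny", "proto": "tcp",
    "src": "10.0.0.0 0.0.0.255", "dst": "any", "port": 23}."""
    lines = []
    for r in rules:
        line = f"access-list {acl_id} {r['action']} {r['proto']} {r['src']} {r['dst']}"
        if "port" in r:
            line += f" eq {r['port']}"   # match a specific destination port
        lines.append(line)
    return lines

if __name__ == "__main__":
    policy = [
        {"action": "deny", "proto": "tcp", "src": "10.0.0.0 0.0.0.255", "dst": "any", "port": 23},
        {"action": "permit", "proto": "ip", "src": "any", "dst": "any"},
    ]
    for stmt in policy_to_acl(110, policy):
        print(stmt)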
MendeLIMS: a web-based laboratory information management system for clinical genome sequencing.
Grimes, Susan M; Ji, Hanlee P
2014-08-27
Large clinical genomics studies using next-generation DNA sequencing require the ability to select and track samples from a large population of patients through many experimental steps. With the number of clinical genome sequencing studies increasing, it is critical to maintain adequate laboratory information management systems to manage the thousands of patient samples that are subject to this type of genetic analysis. To meet the needs of clinical population studies using genome sequencing, we developed a web-based laboratory information management system (LIMS) with a flexible configuration that is adaptable to the continuously evolving experimental protocols of next-generation DNA sequencing technologies. Our system, referred to as MendeLIMS, is easily implemented with open-source tools and is highly configurable and extensible. MendeLIMS has been invaluable in the management of our clinical genome sequencing studies. We maintain a publicly available demonstration version of the application for evaluation purposes at http://mendelims.stanford.edu. MendeLIMS is programmed in Ruby on Rails (RoR) and accesses data stored in SQL-compliant relational databases. Software is freely available for non-commercial use at http://dna-discovery.stanford.edu/software/mendelims/.
Design and Data Management System
NASA Technical Reports Server (NTRS)
Messer, Elizabeth; Messer, Brad; Carter, Judy; Singletary, Todd; Albasini, Colby; Smith, Tammy
2007-01-01
The Design and Data Management System (DDMS) was developed to automate the NASA Engineering Order (EO) and Engineering Change Request (ECR) processes at the Propulsion Test Facilities at Stennis Space Center for efficient and effective Configuration Management (CM). Prior to the development of DDMS, the CM system was a manual, paper-based system that required an EO or ECR submitter to walk the changes through the acceptance process to obtain the necessary approval signatures. This approval process could take up to two weeks and was subject to a variety of human errors. The process also required that the CM office make copies and distribute them to the Configuration Control Board members for review prior to meetings. At any point, there was a potential for an error in or loss of the change records, meaning the configuration of record was not accurate. The new Web-based DDMS eliminates unnecessary copies, reduces the time needed to distribute the paperwork, reduces the time to gain the necessary signatures, and prevents the variety of errors inherent in the previous manual system. After implementation of the DDMS, all EOs and ECRs can be automatically checked prior to submittal to ensure that the documentation is complete and accurate. Much of the configuration information can be documented in the DDMS through pull-down forms to ensure consistent entries by the engineers and technicians in the field. The software also can electronically route the documents through the signature process to obtain the necessary approvals needed for work authorization. The workflow of the system allows for backups and timestamps that determine the correct routing and completion of all required authorizations in a more timely manner, as well as assuring the quality and accuracy of the configuration documents.
Statistical evaluation of PACSTAT random number generation capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, G.F.; Toland, M.R.; Harty, H.
1988-05-01
This report summarizes the work performed in verifying the general purpose Monte Carlo driver-program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.
NASA Technical Reports Server (NTRS)
1978-01-01
The concept of decentralized (remote) neighborhood offices, linked together through a self-sustaining communications network for exchanging voice messages, video images, and digital data, was quantitatively evaluated. Hardware and procedures for the integrated multifunctional system were developed. The configuration of the neighborhood office center (NOC) is explained, its production statistics are given, and an experiment for NOC network integration via satellite is described. The hardware selected for the integrated NOC/management information system is discussed, and the NASA teleconferencing network is evaluated.
Virtual Network Configuration Management System for Data Center Operations and Management
NASA Astrophysics Data System (ADS)
Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken
Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system which automates the integration of the configurations of the virtual networks is provided. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing the time to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database by about 40 percent. They also show that the proposed system has excellent scalability: the system takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
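A minimal sketch of the integration step, assuming an invented XML layout rather than the paper's actual management information model: configuration fragments gathered from hypervisors and switches are merged into one VLAN-centred XML document.

import xml.etree.ElementTree as ET

def build_virtual_network_model(vm_configs, switch_configs):
    """vm_configs: {vm_name: vlan_id} collected from the virtualization platform.
    switch_configs: {switch_port: vlan_id} collected from VLAN-supported switches."""
    root = ET.Element("virtualNetworks")
    vlans = {}

    def vlan_node(vlan_id):
        # Create one <vlan> element per VLAN, shared by VM and switch entries.
        if vlan_id not in vlans:
            vlans[vlan_id] = ET.SubElement(root, "vlan", id=str(vlan_id))
        return vlans[vlan_id]

    for vm, vlan in sorted(vm_configs.items()):
        ET.SubElement(vlan_node(vlan), "virtualMachine", name=vm)
    for port, vlan in sorted(switch_configs.items()):
        ET.SubElement(vlan_node(vlan), "switchPort", name=port)
    return ET.tostring(root, encoding="unicode")

print(build_virtual_network_model({"vm01": 100, "vm02": 200},
                                  {"sw1/Gi0/1": 100, "sw1/Gi0/2": 200}))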
A Recipe for Streamlining Mission Management
NASA Technical Reports Server (NTRS)
Mitchell, Andrew E.; Semancik, Susan K.
2004-01-01
This paper describes a project's design and implementation for streamlining mission management with knowledge capture processes across multiple organizations of a NASA directorate. The project's focus is on standardizing processes and reports; enabling secure information access and ease of maintenance; automating and tracking appropriate workflow rules through process mapping; and infusing new technologies. This paper will describe a small team's experiences using XML technologies through an enhanced vendor suite of applications integrated on Windows-based platforms called the Wallops Integrated Scheduling and Document Management System (WISDMS). This paper describes our results using this system in a variety of endeavors, including providing range project scheduling and resource management for a Range and Mission Management Office; implementing an automated Customer Feedback system for a directorate; streamlining mission status reporting across a directorate; and initiating a document management, configuration management and portal access system for a Range Safety Office's programs. The end result is a reduction of the knowledge gap through better integration and distribution of information, improved process performance, automated metric gathering, and quicker identification of problem areas and issues. However, the real proof of the pudding comes through overcoming the user's reluctance to replace familiar, seasoned processes with new technology ingredients blended with automated procedures in an untested recipe. This paper shares some of the team's observations that led to better implementation techniques, as well as an ISO 9001 Best Practices citation. This project has provided a unique opportunity to advance NASA's competency in new technologies, as well as to strategically implement them within an organizational structure, while whetting the appetite for continued improvements in mission management.
Configuration Management Policy
This Policy establishes an Agency-wide Configuration Management Program and provides responsibilities, compliance requirements, and overall principles for Configuration and Change Management processes to support information technology management.
Multipurpose Controller with EPICS integration and data logging: BPM application for ESS Bilbao
NASA Astrophysics Data System (ADS)
Arredondo, I.; del Campo, M.; Echevarria, P.; Jugo, J.; Etxebarria, V.
2013-10-01
This work presents a multipurpose configurable control system which can be integrated in an EPICS control network, with this functionality configured through an XML configuration file. The core of the system is the so-called Hardware Controller, which is in charge of managing the control hardware, setting up and communicating with the EPICS network, and storing the data. The reconfigurable nature of the controller is based on a single XML file, allowing any final user to easily modify and adjust the control system to any specific requirement. The selected Java development environment ensures multiplatform operation and large versatility, even regarding the hardware to be controlled. Specifically, this paper, focused on fast control based on a high-performance FPGA, also describes an application approach for ESS Bilbao's Beam Position Monitoring system. The implementation of the XML configuration file and the satisfactory performance achieved are presented, as well as a general description of the Multipurpose Controller itself.
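A minimal sketch, assuming a hypothetical XML layout rather than the actual ESS Bilbao schema, of how a single configuration file could drive set-up: it names the hardware back end and the EPICS process variables the controller should expose.

import xml.etree.ElementTree as ET

EXAMPLE_CONFIG = """
<controller name="bpm01">
  <hardware driver="fpga" address="192.168.1.50"/>
  <epics prefix="ESSB:BPM01:">
    <pv name="PosX" type="ai"/>
    <pv name="PosY" type="ai"/>
  </epics>
</controller>
"""

def load_controller_config(xml_text):
    root = ET.fromstring(xml_text)
    hardware = root.find("hardware").attrib              # driver and address
    prefix = root.find("epics").attrib["prefix"]          # PV name prefix
    pvs = [prefix + pv.attrib["name"] for pv in root.find("epics")]
    return {"name": root.attrib["name"], "hardware": hardware, "pvs": pvs}

print(load_controller_config(EXAMPLE_CONFIG))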
ERIC Educational Resources Information Center
Cramer, Sharon F.
2012-01-01
As members of enrollment management units look ahead to the next few years, they anticipate many institution-wide challenges: (1) implementation of a new student information system; (2) major upgrade of an existing system; and (3) re-configuring an existing system to reflect changes in academic policies or to accommodate new federal or state…
2005-01-01
developed a partnership with the Defense Acquisition University to integrate DISA's systems engineering processes, software, and network...in place, with processes being implemented: deployment management; systems engineering; software engineering; configuration management; test and...CSS systems engineering is a transition partner with Carnegie Mellon University's Software Engineering Institute and its work on the capability
NASA Technical Reports Server (NTRS)
Stone, D. A.; Craig, J. W.; Drone, B.; Gerlach, R. H.; Williams, R. J.
1991-01-01
The developmental status is discussed regarding the 'lifeboat' vehicle intended to enhance the safety of the crew on the Space Station Freedom (SSF). NASA's Assured Crew Return Vehicle (ACRV) is intended to provide a means for returning the SSF crew to Earth at all times. The 'lifeboat' philosophy is the key to managing the development of the ACRV, which further depends on matrixed support and total quality management for implementation. The risks of SSF mission scenarios are related to selected ACRV mission requirements, and the system and vehicle designs are related to these precepts. Four possible ACRV configurations are mentioned, including the lifting-body, Apollo shape, Discoverer shape, and a new lift-to-drag concept. The SCRAM design concept is discussed in detail with attention to the 'lifeboat' philosophy and requirements for implementation.
TWRS Configuration management program plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The TWRS Configuration Management Program Plan (CMPP) integrates technical and administrative controls to establish and maintain consistency among requirements, product configuration, and product information for TWRS products during all life cycle phases. This CMPP will be used by TWRS management and configuration management personnel to establish and manage the technical and integrated baselines and to control and status changes to those baselines.
Configuration Management File Manager Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
1997-01-01
One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.
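As a rough sketch of what a generic configuration-management API for simulation files might look like (names and behaviour are illustrative, not the NPSS CM File Manager interface), the example below offers check-in and check-out of versioned files with notes.

import hashlib, json, pathlib, shutil, time

class SimpleRepository:
    def __init__(self, root):
        self.root = pathlib.Path(root)
        (self.root / "store").mkdir(parents=True, exist_ok=True)
        self.index = self.root / "index.json"
        if not self.index.exists():
            self.index.write_text("{}")

    def check_in(self, path, note=""):
        # Version identifier derived from file content, so identical files share a version.
        data = pathlib.Path(path).read_bytes()
        version = hashlib.sha1(data).hexdigest()[:10]
        shutil.copy(path, self.root / "store" / version)
        entries = json.loads(self.index.read_text())
        entries.setdefault(pathlib.Path(path).name, []).append(
            {"version": version, "note": note, "time": time.time()})
        self.index.write_text(json.dumps(entries, indent=2))
        return version

    def check_out(self, version, dest):
        # Retrieve a previously checked-in version into the working area.
        shutil.copy(self.root / "store" / version, dest)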
Flexible medical image management using service-oriented architecture.
Shaham, Oded; Melament, Alex; Barak-Corren, Yuval; Kostirev, Igor; Shmueli, Noam; Peres, Yardena
2012-01-01
Management of medical images increasingly involves the need for integration with a variety of information systems. To address this need, we developed Content Management Offering (CMO), a platform for medical image management supporting interoperability through compliance with standards. CMO is based on the principles of service-oriented architecture, implemented with emphasis on three areas: clarity of business process definition, consolidation of service configuration management, and system scalability. Owing to the flexibility of this platform, a small team is able to accommodate requirements of customers varying in scale and in business needs. We describe two deployments of CMO, highlighting the platform's value to customers. CMO represents a flexible approach to medical image management, which can be applied to a variety of information technology challenges in healthcare and life sciences organizations.
Man-rated flight software for the F-8 DFBW program
NASA Technical Reports Server (NTRS)
Bairnsfather, R. R.
1976-01-01
The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program assembly control, simulator configuration control, erasable-memory load generation, change procedures and anomaly reporting are discussed. The primary verification tools are described, as well as the program test plans and their implementation on the various simulators. Failure effects analysis and the creation of special failure generating software for testing purposes are described.
A New Nightly Build System for LHCb
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2014-06-01
The nightly build system used so far by LHCb has been implemented as an extension of the system developed by CERN PH/SFT group (as presented at CHEP2010). Although this version has been working for many years, it has several limitations in terms of extensibility, management and ease of use, so that it was decided to develop a new version based on a continuous integration system. In this paper we describe a new implementation of the LHCb Nightly Build System based on the open source continuous integration system Jenkins and report on the experience of configuring a complex build workflow in Jenkins.
A PBOM configuration and management method based on templates
NASA Astrophysics Data System (ADS)
Guo, Kai; Qiao, Lihong; Qie, Yifan
2018-03-01
The design of the Process Bill of Materials (PBOM) holds a pivotal position in the product development process. This paper analyses the requirements of PBOM configuration design and management for complex products, which include the reuse of configuration procedures and the pressing need to manage large quantities of product-family PBOM data. Based on this analysis, a function framework for PBOM configuration and management has been established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration process of a detection sensor PBOM is shown as an illustrative case at the end. Rapid and agile PBOM configuration and management can be achieved using the template-based method, which is of vital significance for improving development efficiency for complex products.
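A simplified sketch of the template idea, with an invented template structure: a PBOM configuration template carries parameter slots that are filled per product variant, so the configuration procedure itself is reused.

CONFIG_TEMPLATE = {
    "name": "detection-sensor-pbom",
    "steps": [
        {"operation": "machine housing", "material": "{housing_material}"},
        {"operation": "assemble sensing element", "variant": "{element_variant}"},
        {"operation": "calibrate", "procedure": "CAL-{element_variant}"},
    ],
}

def instantiate_template(template, **params):
    """Fill the parameter slots of a PBOM configuration template."""
    def fill(value):
        return value.format(**params) if isinstance(value, str) else value
    return {
        "name": fill(template["name"]),
        "steps": [{k: fill(v) for k, v in step.items()} for step in template["steps"]],
    }

# Instantiate one hypothetical product variant from the shared template.
variant = instantiate_template(CONFIG_TEMPLATE,
                               housing_material="Al-6061",
                               element_variant="IR-02")
print(variant)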
WIS Implementation Study Report. Volume 2. Resumes.
1983-10-01
WIS modernization that major attention be paid to interface definition and design, system integration and test, and configuration management of the...Estimates -- Computer Corporation of America -- 155 Test Processing Systems -- Newburyport Computer Associates, Inc. -- 183 Cluster II Papers -- Standards...enhancements of the SPL/I compiler system, development of test systems for the verification of SDEX/M and the timing and architecture of the AN/UYK-20 and
2013-09-01
processes used in space system acquisitions, simply implementing a data exchange specification would not fundamentally improve how information is...instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information...and manage the configuration of all critical program models, processes, and tools used throughout the DoD. Second, mandate a data exchange
NASA Technical Reports Server (NTRS)
Dao, Arik-Quang V.; Martin, Lynne; Mohlenbrink, Christoph; Bienert, Nancy; Wolte, Cynthia; Gomez, Ashley; Claudatos, Lauren; Mercer, Joey
2017-01-01
The purpose of this paper is to report on a human factors evaluation of ground control station design concepts for interacting with an unmanned traffic management system. The data collected for this paper come from recent field tests for NASA's Unmanned Traffic Management (UTM) project and cover the following topics: workload, situation awareness, and flight crew communication, coordination, and procedures. The goal of this evaluation was to determine whether the various software implementations for interacting with the UTM system can be described and classified into design concepts to provide guidance for the development of future UTM interfaces. We begin with a brief description of NASA's UTM project, followed by a description of the test range configuration related to a second development phase. We identified (post hoc) two classes into which the ground control stations could be grouped. This grouping was based on the level of display integration. The analysis was exploratory and informal. It was conducted to compare ground stations across those two classes and against the aforementioned topics. Herein, we discuss the results.
Integrated System Health Management: Pilot Operational Implementation in a Rocket Engine Test Stand
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John L.; Morris, Jonathan A.; Turowski, Mark P.; Franzl, Richard
2010-01-01
This paper describes a credible implementation of integrated system health management (ISHM) capability as a pilot operational system. Important core elements that make possible the fielding and evolution of ISHM capability have been validated in a rocket engine test stand, encompassing all phases of operation: stand-by, pre-test, test, and post-test. The core elements include an architecture (hardware/software) for ISHM, gateways for streaming real-time data from the data acquisition system into the ISHM system, automated configuration management employing transducer electronic data sheets (TEDSs) adhering to the IEEE 1451.4 Standard for Smart Sensors and Actuators, broadcasting and capture of sensor measurements and health information adhering to the IEEE 1451.1 Standard for Smart Sensors and Actuators, user interfaces for management of redlines/bluelines, and establishment of a health assessment database system (HADS) and browser for extensive post-test analysis. The ISHM system was installed in the Test Control Room, where test operators were exposed to the capability. All functionalities of the pilot implementation were validated during testing and in post-test data streaming through the ISHM system. The implementation enabled significant improvements in awareness about the status of the test stand and about events and their causes/consequences. The architecture and software elements embody a systems engineering, knowledge-based approach in conjunction with object-oriented environments. These qualities are permitting systematic augmentation of the capability and scaling to encompass other subsystems.
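One small, illustrative piece of such a system is redline checking. In the sketch below the channel names and limits are invented, and the logic is only a software analogue of the kind of monitoring the ISHM user interfaces manage; it is not taken from the pilot implementation.

REDLINES = {
    "LOX_TANK_PRESSURE_PSI": (0.0, 165.0),
    "TURBOPUMP_SPEED_RPM":   (0.0, 36000.0),
}

def check_redlines(measurements):
    """Return a list of (channel, value, limits) tuples for violated redlines."""
    events = []
    for channel, value in measurements.items():
        low, high = REDLINES.get(channel, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            events.append((channel, value, (low, high)))
    return events

# Example: the pressure channel exceeds its configured redline.
print(check_redlines({"LOX_TANK_PRESSURE_PSI": 172.4, "TURBOPUMP_SPEED_RPM": 21000}))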
Omics Metadata Management Software v. 1 (OMMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible and extensible, and is easily installed and run by operators with general system administration and scripting language literacy.
Motion Planning with Six Degrees of Freedom.
1984-05-01
collision-free path taking "P" from some initial configuration to a desired goal configuration. This thesis describes the first known implementation of a complete algorithm (at a given resolution) for...insight and clarity this thesis manifests. I am deeply indebted to my supervisor, Tomás Lozano-Pérez, for his guidance, support, and encouragement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, T.
The configuration management architecture presented in this Configuration Management Plan is based on the functional model established by DOE-STD-1073-93, "Guide for Operational Configuration Management Program." The DOE Standard defines the configuration management program by the five basic program elements of "program management," "design requirements," "document control," "change control," and "assessments," and the two adjunct recovery programs of "design reconstitution" and "material condition and aging management." The CM model of five elements and two adjunct programs strengthens the necessary technical and administrative control to establish and maintain a consistent technical relationship among the requirements, physical configuration, and documentation. Although the DOE Standard was originally developed for the operational phase of nuclear facilities, this plan has the flexibility to be adapted and applied to all life-cycle phases of both nuclear and non-nuclear facilities. The configuration management criteria presented in this plan endorse the DOE Standard and have been tailored specifically to address the technical relationship of requirements, physical configuration, and documentation during the full life cycle of the Waste Tank Farms and 242-A Evaporator of the Tank Waste Remediation System.
NASA Technical Reports Server (NTRS)
Upchurch, Christopher
2011-01-01
The project, being the development of resource management applications, consisted entirely of my own effort. From deliverable requirements provided by my mentor, and some functional requirement additions generated through design reviews, it was my responsibility to implement the requested features as well as possible, given the resources available. For the most part, development work consisted of database programming and functional testing using real resource data. Additional projects I worked on included some firing room console training, configuring the new NE-A microcontroller development lab network, mentoring high school CubeSat students, and managing the NE interns' component of the mentor appreciation ceremony.
Saver.net lidar network in southern South America
NASA Astrophysics Data System (ADS)
Ristori, Pablo; Otero, Lidia; Jin, Yoshitaka; Barja, Boris; Shimizu, Atsushi; Barbero, Albane; Salvador, Jacobo; Bali, Juan Lucas; Herrera, Milagros; Etala, Paula; Acquesta, Alejandro; Quel, Eduardo; Sugimoto, Nobuo; Mizuno, Akira
2018-04-01
The South American Environmental Risk Management Network (SAVER-Net) is an instrumentation network, composed mainly of lidars, that provides real-time information for atmospheric hazards and risk management purposes in South America. The lidar network has been developed since 2012, and all its sampling points are expected to be fully implemented by 2017. This paper describes the network's status and configuration, the data acquisition and processing scheme (protocols and data levels), as well as some aspects of the scientific networking in the Latin American Lidar Network (LALINET). Similarly, the paper lays out future plans for the operation and integration into major international collaborative efforts.
PIV Logon Configuration Guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Glen Alan
This document details the configurations and enhancements implemented to support the usage of federal Personal Identity Verification (PIV) Card for logon on unclassified networks. The guidance is a reference implementation of the configurations and enhancements deployed at the Los Alamos National Laboratory (LANL) by Network and Infrastructure Engineering – Core Services (NIE-CS).
Quality assurance planning for lunar Mars exploration
NASA Technical Reports Server (NTRS)
Myers, Kay
1991-01-01
A review is presented of the tools and techniques required to meet the challenge of total quality in pursuing the goal of traveling to Mars and returning to the Moon. One program used by NASA to ensure the integrity of baselined requirements documents is configuration management (CM). CM is defined as an integrated management process that documents and identifies the functional and physical characteristics of a facility's systems, structures, computer software, and components. It also ensures that changes to these characteristics are properly assessed, developed, approved, implemented, verified, recorded, and incorporated into the facility's documentation. Three principal areas are discussed that will realize significant efficiencies and enhanced effectiveness: change assessment, change avoidance, and requirements management.
The Resource Manager of the ATLAS Trigger and Data Acquisition System
NASA Astrophysics Data System (ADS)
Aleksandrov, I.; Avolio, G.; Lehmann Miotto, G.; Soloviev, I.
2017-10-01
The Resource Manager is one of the core components of the Data Acquisition system of the ATLAS experiment at the LHC. The Resource Manager marshals the right for applications to access resources which may exist in multiple but limited copies, in order to avoid conflicts due to program faults or operator errors. The access to resources is managed in a manner similar to what a lock manager would do in other software systems. All the available resources and their association to software processes are described in the Data Acquisition configuration database. The Resource Manager is queried about the availability of resources every time an application needs to be started. The Resource Manager's design is based on a client-server model, hence it consists of two components: the Resource Manager "server" application and the "client" shared library. The Resource Manager server implements all the needed functionalities, while the Resource Manager client library provides remote access to the "server" (i.e., to allocate and free resources, and to query about the status of resources). During the LHC's Long Shutdown period, the Resource Manager's requirements were reviewed in the light of the experience gained during the LHC's Run 1. As a consequence, the Resource Manager has undergone a full re-design and re-implementation cycle, with the result of a reduction of the code base by 40% with respect to the previous implementation. This contribution will focus on the way the design and the implementation of the Resource Manager leverage the new features available in the C++11 standard, and how the introduction of external libraries (like Boost multi-index containers) led to a more maintainable system. Additionally, particular attention will be given to the technical solutions adopted to ensure the Resource Manager can sustain the typical request rates of the Data Acquisition system, which are about 30000 requests in a time window of a few seconds, coming from more than 1000 clients.
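A toy sketch of the allocation idea (not the ATLAS implementation): resources exist in a limited number of copies and are granted to requesting applications, lock-manager style, until the copies are exhausted. Resource and client names below are invented.

import threading

class ResourceManager:
    def __init__(self, resources):
        self._lock = threading.Lock()
        self._free = dict(resources)   # resource name -> available copies
        self._held = {}                # (resource, client) -> count currently held

    def allocate(self, resource, client):
        with self._lock:
            if self._free.get(resource, 0) <= 0:
                raise RuntimeError(f"no free copies of {resource}")
            self._free[resource] -= 1
            self._held[(resource, client)] = self._held.get((resource, client), 0) + 1

    def free(self, resource, client):
        with self._lock:
            if self._held.get((resource, client), 0) == 0:
                raise RuntimeError(f"{client} does not hold {resource}")
            self._held[(resource, client)] -= 1
            self._free[resource] += 1

rm = ResourceManager({"DAQ_SEGMENT": 2})
rm.allocate("DAQ_SEGMENT", "app_a")
rm.allocate("DAQ_SEGMENT", "app_b")   # second and last free copy
rm.free("DAQ_SEGMENT", "app_b")       # copy becomes available again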
GIS Application System Design Applied to Information Monitoring
NASA Astrophysics Data System (ADS)
Qun, Zhou; Yujin, Yuan; Yuena, Kang
Natural environment information management systems involve on-line instrument monitoring, data communications, database establishment, information management software development, and so on. Their core lies in collecting effective and reliable environmental information, increasing the utilization and sharing of environmental information by means of advanced information technology, and providing a timely and scientific foundation for environmental monitoring and management. This thesis adopts C# plug-in application development and uses a complete set of embedded GIS component and tool libraries provided by GIS Engine to build the core of a plug-in GIS application framework, namely the design and implementation of the framework host program and of each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. The thesis exploits the advantages of dynamic plug-in loading and configuration, quickly establishes GIS applications through visualized collaborative component modeling, and realizes GIS application integration. The developed platform is applicable to any application integration related to GIS (on the ESRI platform) and can serve as a base development platform for GIS application development.
2016-12-13
INFORMATION TECHNOLOGY, GOVERNMENT ACCOUNTABILITY OFFICE SUBJECT: DoD Cybersecurity Weaknesses as Reported in Audit Reports Issued From August...The Air Force Audit Agency recommended that the Air Force Reserve officials direct AFRC personnel to implement a standard process to ensure continued...those products and systems throughout the system development life cycle. The DoD audit community and the GAO reported configuration management
NASA Technical Reports Server (NTRS)
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.
Software as a service approach to sensor simulation software deployment
NASA Astrophysics Data System (ADS)
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields a durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and allow the domain community to benefit from immediate deployment of lessons learned.
Separation Assurance and Scheduling Coordination in the Arrival Environment
NASA Technical Reports Server (NTRS)
Aweiss, Arwa S.; Cone, Andrew C.; Holladay, Joshua J.; Munoz, Epifanio; Lewis, Timothy A.
2016-01-01
Separation assurance (SA) automation has been proposed as either a ground-based or airborne paradigm. The arrival environment is complex because aircraft are being sequenced and spaced to the arrival fix. This paper examines the effect of the allocation of the SA and scheduling functions on the performance of the system. Two coordination configurations between an SA and an arrival management system are tested using both ground and airborne implementations. All configurations have a conflict detection and resolution (CD&R) system and either an integrated or separated scheduler. Performance metrics are presented for the ground and airborne systems based on arrival traffic headed to Dallas/Fort Worth International Airport. The total delay, time-spacing conformance, and schedule conformance are used to measure efficiency. The goal of the analysis is to use the metrics to identify performance differences between the configurations that are based on different function allocations. A surveillance range limitation of 100 nmi and a time delay for sharing updated trajectory intent of 30 seconds were implemented for the airborne system. Overall, these results indicate that the surveillance range and the sharing of trajectories and aircraft schedules are important factors in determining the efficiency of an airborne arrival management system. These parameters are not relevant to the ground-based system as modeled for this study because it has instantaneous access to all aircraft trajectories and intent. Creating a schedule external to the CD&R and the scheduling conformance system was seen to reduce total delays for the airborne system, and had a minor effect on the ground-based system. The effect of an external scheduler on other metrics was mixed.
Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.
2017-12-01
THREDDS is a web server widely used to provide different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and of automatic configuration management and deployment, this service usually suffers downtime and time-consuming configuration tasks, mainly when it is used intensively, as is usual within the scientific community (e.g. climate). Instead of the typical installation and configuration of a single THREDDS server or of multiple independent, manually configured THREDDS servers, this work presents automatic provisioning, deployment and orchestration of a cluster of THREDDS servers. The solution is based on Ansible playbooks, used to control automatically the deployment and configuration setup of the infrastructure and to manage the datasets available in the THREDDS instances. The playbooks are based on modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of THREDDS server instances. This implementation allows different infrastructure and deployment scenarios to be configured, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any of the workers fails, another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analysed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a unique THREDDS worker. As a conclusion, the proposed configuration represents a significant improvement with respect to configurations based on non-collaborative THREDDS instances.
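A small sketch of the idea, under assumed file, group and playbook names that are not the authors' actual layout: backend workers are declared as inventory entries, and re-running ansible-playbook reconfigures the cluster to include them.

import subprocess

WORKERS = ["thredds-worker-01", "thredds-worker-02", "thredds-worker-03"]

def write_inventory(path="inventory.ini"):
    # Declare the frontend load balancer and the backend THREDDS workers.
    with open(path, "w") as f:
        f.write("[thredds_frontend]\nthredds-frontend-01\n\n")
        f.write("[thredds_workers]\n")
        f.write("\n".join(WORKERS) + "\n")
    return path

def deploy(playbook="thredds-cluster.yml"):
    inventory = write_inventory()
    # Adding a worker is just another hostname in the inventory; re-running
    # the playbook reconfigures the load balancer and the new instance.
    subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=True)

if __name__ == "__main__":
    deploy()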
Site systems engineering fiscal year 1999 multi-year work plan (MYWP) update for WBS 1.8.2.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
GRYGIEL, M.L.
1998-10-08
Manage the Site Systems Engineering process to provide a traceable, integrated, requirements-driven, and technically defensible baseline. Through the Site Integration Group (SIG), Systems Engineering ensures integration of technical activities across all site projects. Systems Engineering's primary interfaces are with the RL Project Managers, the Project Direction Office and the Project Major Subcontractors, as well as with the Site Planning organization. Systems Implementation: (1) Develops, maintains, and controls the site integrated technical baseline, ensures the Systems Engineering interfaces between projects are documented, and maintains the Site Environmental Management Specification. (2) Develops and uses dynamic simulation models for verification of the baseline and analysis of alternatives. (3) Performs and documents functional and requirements analyses. (4) Works with projects, technology management, and the SIG to identify and resolve technical issues. (5) Supports technical baseline information for the planning and budgeting of the Accelerated Cleanup Plan, Multi-Year Work Plans, and Project Baseline Summaries, as well as performance measure reporting. (6) Works with projects to ensure the quality of data in the technical baseline. (7) Develops, maintains and implements the site configuration management system.
A survey of unmanned ground vehicles with applications to agricultural and environmental sensing
NASA Astrophysics Data System (ADS)
Bonadies, Stephanie; Lefcourt, Alan; Gadsden, S. Andrew
2016-05-01
Unmanned ground vehicles have been utilized in the last few decades in an effort to increase the efficiency of agriculture, in particular, by reducing labor needs. Unmanned vehicles have been used for a variety of purposes including: soil sampling, irrigation management, precision spraying, mechanical weeding, and crop harvesting. In this paper, unmanned ground vehicles, implemented by researchers or commercial operations, are characterized through a comparison to other vehicles used in agriculture, namely airplanes and UAVs. An overview of different trade-offs of configurations, control schemes, and data collection technologies is provided. Emphasis is given to the use of unmanned ground vehicles in food crops, and includes a discussion of environmental impacts and economics. Factors considered regarding the future trends and potential issues of unmanned ground vehicles include development, management and performance. Also included is a strategy to demonstrate to farmers the safety and profitability of implementing the technology.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases, and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
Attaining and maintaining data integrity with configuration management
NASA Astrophysics Data System (ADS)
Huffman, Dorothy J.; Jeane, Shirley A.
1993-08-01
Managers and scientists are concerned about data integrity because they draw conclusions from data that can have far-reaching effects. Project managers use Configuration Management to ensure that hardware, software, and project information are controlled. They have not, as yet, applied it rigorously to data. However, there is ample opportunity in the data collection and production process to jeopardize data integrity. Environmental changes, tampering, and production problems can all affect data integrity. There are four functions included in the Configuration Management process: configuration identification, control, auditing, and status accounting. These functions provide management the means to attain data integrity and the visibility into engineering processes needed to maintain data integrity. When project managers apply Configuration Management processes to data, the data user can trace back through history to validate data integrity. The user knows that the project allowed only orderly changes to the data. He is assured that project personnel followed procedures to maintain data quality. He also has access to status information about the data. The user receives data products with a known integrity level and a means to assess the impact of past events on the conclusions derived from the data. To obtain these benefits, project managers should apply the Configuration Management discipline to data.
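As a concrete, illustrative example (not taken from the paper) of how configuration identification and auditing can protect data integrity, the sketch below records baseline checksums for controlled files and later reports any item whose current checksum differs from the baseline.

import hashlib, json, pathlib

def baseline(files, record="cm_baseline.json"):
    """Configuration identification: record a hash for each controlled item."""
    entries = {f: hashlib.sha256(pathlib.Path(f).read_bytes()).hexdigest() for f in files}
    pathlib.Path(record).write_text(json.dumps(entries, indent=2))

def audit(record="cm_baseline.json"):
    """Configuration auditing: report items whose current hash differs from the baseline."""
    entries = json.loads(pathlib.Path(record).read_text())
    return [f for f, digest in entries.items()
            if hashlib.sha256(pathlib.Path(f).read_bytes()).hexdigest() != digest]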
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasemir, Kay; Hartman, Steven M
2009-01-01
A new alarm system toolkit has been implemented at SNS. The toolkit handles the Central Control Room (CCR) 'annunciator', or audio alarms. For the new alarm system to be effective, the alarms must be meaningful and properly configured. Along with the implementation of the new alarm toolkit, a thorough documentation and rationalization of the alarm configuration is taking place. Requirements and maintenance of a robust alarm configuration have been gathered from system and operations experts. In this paper we present our practical experience with the vacuum system alarm handling configuration of the alarm toolkit.
NASA Astrophysics Data System (ADS)
Gardner, R. W.; Hanushevsky, A.; Vukotic, I.; Yang, W.
2017-10-01
As many LHC Tier-3 and some Tier-2 centers look toward streamlining operations, they are considering autonomously managed storage elements as part of the solution. These storage elements are essentially file caching servers. They can operate as whole file or data block level caches. Several implementations exist. In this paper we explore using XRootD caching servers that can operate in either mode. They can also operate autonomously (i.e. demand driven), be centrally managed (i.e. a Rucio managed cache), or operate in both modes. We explore the pros and cons of various configurations as well as practical requirements for caching to be effective. While we focus on XRootD caches, the analysis should apply to other kinds of caches as well.
System for Configuring Modular Telemetry Transponders
NASA Technical Reports Server (NTRS)
Varnavas, Kosta A. (Inventor); Sims, William Herbert, III (Inventor)
2014-01-01
A system for configuring telemetry transponder cards uses a database of error checking protocol data structures, each containing data to implement at least one CCSDS protocol algorithm. Using a user interface, a user selects at least one telemetry specific error checking protocol from the database. A compiler configures an FPGA with the data from the data structures to implement the error checking protocol.
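One example of a telemetry-specific error-checking algorithm that such a database entry might describe is the CRC-16-CCITT used for CCSDS transfer-frame error control (generator polynomial 0x1021, all-ones preset). The sketch below is a software model only; the cards themselves realize the algorithm in FPGA logic.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on overflow of the top bit, subtract (XOR) the polynomial.
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))  # standard check value: 0x29b1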
Real-Time Reed-Solomon Decoder
NASA Technical Reports Server (NTRS)
Maki, Gary K.; Cameron, Kelly B.; Owsley, Patrick A.
1994-01-01
A generic Reed-Solomon decoder fast enough to correct errors in real time in practical applications is designed to be implemented in fewer and smaller very-large-scale integrated (VLSI) circuit chips. It is configured to operate in a pipelined manner. One outstanding aspect of the decoder design is that the Euclid multiplier and divider modules contain Galois-field multipliers configured as combinational-logic cells. These operate at speeds greater than those of older multipliers. The cellular configuration is highly regular and requires little interconnection area, making it ideal for implementation in extraordinarily dense VLSI circuitry. A single-chip flight-electronics version of this technology has been implemented and is available.
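The Galois-field multiply performed by those combinational-logic cells can be modelled in software as shift-and-XOR with reduction by the field polynomial. The sketch below uses the common GF(2^8) polynomial 0x11D for illustration; a particular Reed-Solomon code may define the field differently.

def gf256_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) elements, reducing modulo the field polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree-8 overflow: reduce by the polynomial
            a ^= poly
    return result

assert gf256_mul(0x02, 0x80) == 0x1D  # x * x^7 = x^8, reduced by x^8 + x^4 + x^3 + x^2 + 1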
TWRS authorization basis configuration control summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, D.P.
This document was developed to define the Authorization Basis management functional requirements for configuration control, to evaluate the management control systems currently in place, and identify any additional controls that may be required until the TWRS [Tank Waste Remediation System] Configuration Management system is fully in place.
DOT National Transportation Integrated Search
1997-01-01
Prepared ca. 1997. The Configuration Management Plan (CMP) provides configuration management instructions and guidance for the Vessel Traffic Service (VTS) system of the Ports and Waterways Safety System (PAWSS) project. The CMP describes in detail t...
Implementation and use of a highly available and innovative IaaS solution: the Cloud Area Padovana
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Biasotto, M.; Dal Pra, S.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Frizziero, E.; Gulmini, M.; Michelotto, M.; Sgaravatto, M.; Traldi, S.; Venaruzzo, M.; Verlato, M.; Zangrando, L.
2015-12-01
While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures which allow efficient usage of the hardware distributed among (and owned by) different scientific administrative domains. In addition, the requirement of open-source adoption has led to the choice of products like OpenStack by many organizations. We describe a use case of the Italian National Institute for Nuclear Physics (INFN) which resulted in the implementation of a unique cloud service, called 'Cloud Area Padovana', which encompasses resources spread over two different sites: the INFN Legnaro National Laboratories and the INFN Padova division. We describe how this IaaS has been implemented, which technologies have been adopted and how services have been configured in high-availability (HA) mode. We also discuss how identity and authorization management were implemented, adopting a widely accepted standard architecture based on SAML2 and OpenID: by leveraging the versatility of those standards, the integration with authentication federations like IDEM was implemented. We also discuss some other innovative developments, such as a pluggable scheduler, implemented as an extension of the native OpenStack scheduler, which allows the allocation of resources according to a fair-share-based model and which provides a persistent queuing mechanism for handling user requests that cannot be immediately served. Tools, technologies, and procedures used to install, configure, monitor, and operate this cloud service are also discussed. Finally, we present some examples that show how this IaaS infrastructure is being used.
An Open Platform for Seamless Sensor Support in Healthcare for the Internet of Things
Miranda, Jorge; Cabral, Jorge; Wagner, Stefan Rahr; Fischer Pedersen, Christian; Ravelo, Blaise; Memon, Mukhtiar; Mathiesen, Morten
2016-01-01
Population aging and increasing pressure on health systems are two issues that demand solutions. Involving and empowering citizens as active managers of their health represents a desirable shift from the current culture, mainly focused on treatment of disease, to one also focused on continuous health management and well-being. Current developments in technological areas such as the Internet of Things (IoT) lead to new technological solutions that can aid this shift in the healthcare sector. This study presents the design, development, implementation and evaluation of a platform called the Common Recognition and Identification Platform (CRIP), a part of the CareStore project, which aims at supporting caregivers and citizens in managing health routines in a seamless way. Specifically, the CRIP offers sensor-based support for seamless identification of users and health devices. A set of initial requirements was defined with a focus on usability limitations and current sensor technologies. The CRIP was designed and implemented using several technologies that enable seamless integration and interaction of sensors and people, namely Near Field Communication and fingerprint biometrics for identification and authentication, Bluetooth for communication with health devices, and web services for wider integration with other platforms. Two CRIP prototypes were implemented and evaluated in the laboratory over a period of eight months. The evaluations consisted of identifying users and devices, as well as seamlessly configuring the devices and acquiring vital data from them. Also, the entire CareStore platform was deployed in a nursing home, where its usability was evaluated with caregivers. The evaluations helped assess that seamless identification of users and seamless configuration of and communication with health devices are feasible and can help enable the IoT in healthcare applications. Therefore, the CRIP and similar platforms could become a valuable enabling technology for secure and reliable IoT deployments in the healthcare sector. PMID:27941656
An Open Platform for Seamless Sensor Support in Healthcare for the Internet of Things.
Miranda, Jorge; Cabral, Jorge; Wagner, Stefan Rahr; Fischer Pedersen, Christian; Ravelo, Blaise; Memon, Mukhtiar; Mathiesen, Morten
2016-12-08
Population aging and increasing pressure on health systems are two issues that demand solutions. Involving and empowering citizens as active managers of their health represents a desirable shift from the current culture, mainly focused on treatment of disease, to one also focused on continuous health management and well-being. Current developments in technological areas such as the Internet of Things (IoT) lead to new technological solutions that can aid this shift in the healthcare sector. This study presents the design, development, implementation and evaluation of a platform called the Common Recognition and Identification Platform (CRIP), a part of the CareStore project, which aims at supporting caregivers and citizens in managing health routines in a seamless way. Specifically, the CRIP offers sensor-based support for seamless identification of users and health devices. A set of initial requirements was defined with a focus on usability limitations and current sensor technologies. The CRIP was designed and implemented using several technologies that enable seamless integration and interaction of sensors and people, namely Near Field Communication and fingerprint biometrics for identification and authentication, Bluetooth for communication with health devices, and web services for wider integration with other platforms. Two CRIP prototypes were implemented and evaluated in the laboratory over a period of eight months. The evaluations consisted of identifying users and devices, as well as seamlessly configuring the devices and acquiring vital data from them. Also, the entire CareStore platform was deployed in a nursing home, where its usability was evaluated with caregivers. The evaluations helped assess that seamless identification of users and seamless configuration of and communication with health devices are feasible and can help enable the IoT in healthcare applications. Therefore, the CRIP and similar platforms could become a valuable enabling technology for secure and reliable IoT deployments in the healthcare sector.
Initial development of the DIII–D snowflake divertor control
NASA Astrophysics Data System (ADS)
Kolemen, E.; Vail, P. J.; Makowski, M. A.; Allen, S. L.; Bray, B. D.; Fenstermacher, M. E.; Humphreys, D. A.; Hyatt, A. W.; Lasnier, C. J.; Leonard, A. W.; McLean, A. G.; Maingi, R.; Nazikian, R.; Petrie, T. W.; Soukhanovskii, V. A.; Unterberg, E. A.
2018-06-01
Simultaneous control of two proximate magnetic field nulls in the divertor region is demonstrated on DIII–D to enable plasma operations in an advanced magnetic configuration known as the snowflake divertor (SFD). The SFD is characterized by a second-order poloidal field null, created by merging two first-order nulls of the standard divertor configuration. The snowflake configuration has many magnetic properties, such as high poloidal flux expansion, large plasma-wetted area, and additional strike points, that are advantageous for divertor heat flux management in future fusion reactors. However, the magnetic configuration of the SFD is highly-sensitive to changes in currents within the plasma and external coils and therefore requires complex magnetic control. The first real-time snowflake detection and control system on DIII–D has been implemented in order to stabilize the configuration. The control algorithm calculates the position of the two nulls in real-time by locally-expanding the Grad–Shafranov equation in the divertor region. A linear relation between variations in the poloidal field coil currents and changes in the null locations is then analytically derived. This formulation allows for simultaneous control of multiple coils to achieve a desired SFD configuration. It is shown that the control enabled various snowflake configurations on DIII–D in scenarios such as the double-null advanced tokamak. The SFD resulted in a 2.5× reduction in the peak heat flux for many energy confinement times (2–3 s) without any adverse effects on core plasma performance.
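To illustrate the control step described above, the sketch below solves a linearised relation between null positions and coil currents by least squares to obtain a simultaneous multi-coil correction. The Jacobian, coordinates, and units are placeholder values, not DIII-D data or the actual control implementation.

```python
# Illustrative least-squares use of a linear relation
# d(null positions) = J · d(coil currents); all numbers are placeholders.
import numpy as np

J = np.array([[ 0.8, -0.1,  0.2],     # 4 null coordinates (x1, z1, x2, z2)
              [ 0.1,  0.6, -0.3],     # vs 3 shaping-coil currents (assumed)
              [-0.2,  0.5,  0.4],
              [ 0.3, -0.2,  0.7]])

nulls_now    = np.array([1.40, -1.20, 1.55, -1.05])   # measured positions (m)
nulls_target = np.array([1.42, -1.18, 1.53, -1.07])   # requested SFD shape (m)

d_current, *_ = np.linalg.lstsq(J, nulls_target - nulls_now, rcond=None)
print("coil current corrections (arbitrary units):", np.round(d_current, 3))
```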
Team table: a framework and tool for continuous factory planning
NASA Astrophysics Data System (ADS)
Sihn, Wilfried; Bischoff, Juergen; von Briel, Ralf; Josten, Marcus
2000-10-01
Growing market turbulence and shorter product life cycles require a continuous adaptation of factory structures, resulting in a continuous factory planning process. Therefore, a new framework is developed which focuses on the integration of the configuration and data management processes. This enables an online evaluation of system performance based on the continuous availability of current data. The framework is especially helpful, and yields large cost and time savings, when used in the early stages of planning, called the concept or rough planning phase. The new framework is supported by a planning round table as a tool for team-based configuration processes, integrating the knowledge of all persons involved in the planning processes. A case study conducted at a German company shows the advantages which can be achieved by implementing the new framework and methods.
NASA Astrophysics Data System (ADS)
Dolenc, B.; Vrečko, D.; Juričić, Ð.; Pohjoranta, A.; Pianese, C.
2017-03-01
Degradation and poisoning of solid oxide fuel cell (SOFC) stacks are continuously shortening the lifespan of SOFC systems. Poisoning mechanisms, such as carbon deposition, form a coating layer, hence rapidly decreasing the efficiency of the fuel cells. The gas composition of the inlet gases is known to have a great impact on the rate of coke formation. Therefore, monitoring of these variables can be of great benefit for the overall management of SOFCs. Although measuring the gas composition of the gas stream is feasible, it is too costly for commercial applications. This paper proposes three distinct approaches for the design of gas composition estimators of an SOFC system in anode off-gas recycle configuration which are (i) accurate and (ii) easy to implement on a programmable logic controller. Firstly, a classical approach is briefly revisited and problems related to implementation complexity are discussed. Secondly, the model is simplified and adapted for easy implementation. Further, an alternative data-driven approach for gas composition estimation is developed. Finally, a hybrid estimator employing experimental data and first principles is proposed. Despite the structural simplicity of the estimators, the experimental validation shows high precision for all of the approaches. Experimental validation is performed on a 10 kW SOFC system.
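As a hedged sketch of what a data-driven estimator of this kind can look like, the snippet below fits a linear map from a few cheaply measured signals to the anode gas fractions. The choice of signals, the linear form, and the synthetic data are assumptions for illustration, not the estimators developed in the paper.

```python
# Fit a simple linear gas-composition estimator from synthetic data.
# Signal list, outputs and data are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))            # [fuel flow, recycle flow, current, T]
true_W = rng.uniform(size=(4, 3))
Y = X @ true_W + 0.01 * rng.normal(size=(200, 3))   # [x_H2, x_CO, x_CH4]

# Least-squares fit with a bias term appended to the inputs.
W, *_ = np.linalg.lstsq(np.hstack([X, np.ones((200, 1))]), Y, rcond=None)

def estimate_composition(signals: np.ndarray) -> np.ndarray:
    """Return estimated mole fractions from one vector of measured signals."""
    return np.append(signals, 1.0) @ W

print(estimate_composition(np.array([0.5, 0.3, 0.7, 0.6])))
```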
NASA Astrophysics Data System (ADS)
Demyanova, O. V.; Andreeva, E. V.; Sibgatullina, D. R.; Kireeva-Karimova, A. M.; Gafurova, A. Y.; Zakirova, Ch S.
2018-05-01
ERP in a modern enterprise information system has allowed optimizing internal business processes, reducing production costs and increasing the attractiveness of enterprises for investors. It is an important component of success in competition and an important condition for attracting investment into the key sectors of the state. A vivid example of such systems are enterprise information systems using the ERP (Enterprise Resource Planning) methodology. ERP is an integrated set of methods, processes, technologies and tools. It is based on: supply chain management; advanced planning and scheduling; sales automation; configuration tools; final resource planning; business intelligence; OLAP technology; e-commerce; and product data management. The main purpose of ERP systems is the automation of interrelated processes of planning, accounting and management in key areas of the company. ERP systems are automated systems that effectively address complex problems, including optimal allocation of business resources and ensuring quick and efficient delivery of goods and services to the consumer. Knowledge embedded in ERP systems provides enterprise-wide automation, presenting the activities of all functional departments of the company as a single complex system. At the level of qualitative estimates, most managers understand that the implementation of ERP systems is a necessary and useful procedure. Assessment of the effectiveness of information systems implementation is therefore relevant.
Three-terminal quantum-dot thermal management devices
NASA Astrophysics Data System (ADS)
Zhang, Yanchao; Zhang, Xin; Ye, Zhuolin; Lin, Guoxing; Chen, Jincan
2017-04-01
We theoretically demonstrate that the heat flows can be manipulated by designing a three-terminal quantum-dot system consisting of three Coulomb-coupled quantum dots connected to respective reservoirs. In this structure, the electron transport between the quantum dots is forbidden, but the heat transport is allowed by the Coulomb interaction to transmit heat between the reservoirs with a temperature difference. We show that such a system is capable of performing thermal management operations, such as heat flow swap, thermal switch, and heat path selector. An important thermal rectifier, i.e., a thermal diode, can be implemented separately in two different paths. The asymmetric configuration of a quantum-dot system is a necessary condition for thermal management operations in practical applications. These results should have important implications in providing the design principle for quantum-dot thermal management devices and may open up potential applications for the thermal management of quantum-dot systems at the nanoscale.
Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy
2011-10-01
Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support, with many commercial, open source, and proprietary systems in use. New web-based tools are available which can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. The goal was to demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five-intervention-site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials. A major goal was to develop a database that participants could access from computers in their home communities for direct data entry. Discussed is the selection process leading to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, a template-based web portal creation application, and a web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system has successfully been used by 125 participants in 5 communities (who have completed 536 sets of assessment questionnaires), 8 community therapists, and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools, as each is loosely affiliated, creating some inefficiency. This system is best suited to investigations with a single data source, e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.
Russom, Diana; Ahmed, Amira; Gonzalez, Nancy; Alvarnas, Joseph; DiGiusto, David
2012-01-01
Regulatory requirements for the manufacturing of cell products for clinical investigation require a significant level of record-keeping, starting early in process development and continuing through to the execution and requisite follow-up of patients on clinical trials. Central to record-keeping is the management of documentation related to patients, raw materials, processes, assays and facilities. To support these requirements, we evaluated several laboratory information management systems (LIMS), including their cost, flexibility, regulatory compliance, ongoing programming requirements and ability to integrate with laboratory equipment. After selecting a system, we performed a pilot study to develop a user-configurable LIMS for our laboratory in support of our pre-clinical and clinical cell-production activities. We report here on the design and utilization of this system to manage accrual with a healthy blood-donor protocol, as well as manufacturing operations for the production of a master cell bank and several patient-specific stem cell products. The system was used successfully to manage blood donor eligibility, recruiting, appointments, billing and serology, and to provide annual accrual reports. Quality management reporting features of the system were used to capture, report and investigate process and equipment deviations that occurred during the production of a master cell bank and patient products. Overall the system has served to support the compliance requirements of process development and phase I/II clinical trial activities for our laboratory and can be easily modified to meet the needs of similar laboratories.
FPGA-based protein sequence alignment : A review
NASA Astrophysics Data System (ADS)
Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi
2017-11-01
Sequence alignment has been optimized using several techniques in order to accelerate the computation time needed to obtain the optimal score, by implementing DP-based algorithms in hardware such as FPGA-based platforms. During hardware implementation there are performance challenges, such as frequent memory access and highly data-dependent computation. Therefore, the processing element (PE) configuration, which involves memory access to load the data (substitution matrix, query sequence characters), and the PE configuration time are the main focus of this paper. Various approaches to enhance PE configuration performance have been used in previous works, such as a serial configuration chain and a parallel configuration chain, i.e. the configuration data are loaded into each PE sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain optimizes both the configuration time and the area.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2015-02-24
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2015-08-25
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton; Stecher, Thomas; Seeley, Charles; Kuenzler, Glenn; Wolfe, Jr., Charles; Utturkar, Yogen; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2013-05-07
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr, Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2016-10-11
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Avionics test bed development plan
NASA Technical Reports Server (NTRS)
Harris, L. H.; Parks, J. M.; Murdock, C. R.
1981-01-01
A development plan for a proposed avionics test bed facility for the early investigation and evaluation of new concepts for the control of large space structures, orbiter attached flex body experiments, and orbiter enhancements is presented. A distributed data processing facility that utilizes the current laboratory resources for the test bed development is outlined. Future studies required for implementation, the management system for project control, and the baseline system configuration are defined. A background analysis of the specific hardware system for the preliminary baseline avionics test bed system is included.
RIPS: a UNIX-based reference information program for scientists.
Klyce, S D; Rózsa, A J
1983-09-01
A set of programs is described which implement a personal reference management and information retrieval system on a UNIX-based minicomputer. The system operates in a multiuser configuration with a host of user-friendly utilities that assist entry of reference material, its retrieval, and formatted printing for associated tasks. A search command language was developed without restriction in keyword vocabulary, number of keywords, or level of parenthetical expression nesting. The system is readily transported, and by design is applicable to any academic specialty.
NASA Technical Reports Server (NTRS)
Bishop, Matt
1988-01-01
The organization of some tools to help improve password security at a UNIX-based site is described, along with how to install and use them. These tools and their associated library enable a site to force users to pick reasonably safe passwords (safe being site-configurable) and enable site management to try to crack existing passwords. The library contains various versions of a very fast implementation of the Data Encryption Standard and of the one-way encryption functions used to encrypt the password.
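As an illustration only, the fragment below sketches a site-configurable acceptance check in the spirit of "reasonably safe, with safe being site configurable"; the rules, thresholds, and wordlist path are assumptions, not the actual tools described here.

```python
# Hypothetical site-configurable password acceptance check (illustrative).
import string

SITE_RULES = {
    "min_length": 8,
    "require_classes": 3,                   # of: lower, upper, digits, punctuation
    "wordlist": "/usr/share/dict/words",    # hypothetical path
}

def is_acceptable(password: str, rules: dict = SITE_RULES) -> bool:
    if len(password) < rules["min_length"]:
        return False
    classes = sum(any(c in group for c in password) for group in (
        string.ascii_lowercase, string.ascii_uppercase,
        string.digits, string.punctuation))
    if classes < rules["require_classes"]:
        return False
    try:
        with open(rules["wordlist"]) as fh:
            words = {w.strip().lower() for w in fh}
        if password.lower() in words:       # trivially crackable by dictionary
            return False
    except OSError:
        pass                                # no wordlist available on this host
    return True

print(is_acceptable("Tr1cky!Pass"), is_acceptable("password"))
```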
A preliminary estimate of future communications traffic for the electric power system
NASA Technical Reports Server (NTRS)
Barnett, R. M.
1981-01-01
Diverse new generator technologies that use renewable energy and improve operational efficiency throughout the existing electric power systems are presented. A model utility is described, and the information transfer requirements imposed by the incorporation of dispersed storage and generation technologies and by the implementation of more extensive energy management are estimated. An example of possible traffic for an assumed system is provided, along with an approach that can be applied to other systems, control configurations, or dispersed storage and generation penetrations.
An Evaluation Method of Equipment Reliability Configuration Management
NASA Astrophysics Data System (ADS)
Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan
2018-01-01
At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is achieved not only through design but also through management. This paper evaluates reliability management capabilities using a reliability configuration capability maturity model (RCM-CMM) evaluation method.
DPM evolution: a disk operations management engine for DPM
NASA Astrophysics Data System (ADS)
Manzi, A.; Furano, F.; Keeble, O.; Bitzes, G.
2017-10-01
The DPM (Disk Pool Manager) project is the most widely deployed solution for storage of large data repositories on Grid sites, and is completing the most important upgrade in its history, with the aim of bringing important new features, better performance and easier long-term maintainability. Work has been done to make the so-called “legacy stack” optional and substitute it with an advanced implementation that is based on the fastCGI and RESTful technologies. Besides the obvious gain of making optional several legacy components that are difficult to maintain, this step brings important features together with performance enhancements. Among the most important features we can cite the simplification of the configuration, the possibility of working in a totally SRM-free mode, the implementation of quotas and of free/used space on directories, and the implementation of volatile pools that can pull files from external sources, which can be used to deploy data caches. Moreover, the communication with the new core, called DOME (Disk Operations Management Engine), now happens over secure HTTPS channels using an extensively documented, industry-compliant protocol. For this leap, referred to by the codename “DPM Evolution”, the help of the DPM collaboration has been very important in the beta-testing phases, and here we report on the technical choices.
NASA Technical Reports Server (NTRS)
Lee, Paul U.; Bender, Kim; Pagan, Danielle
2011-01-01
Flexible Airspace Management (FAM) is a mid-term Next Generation Air Transportation System (NextGen) concept that allows dynamic changes to airspace configurations to meet changes in traffic demand. A series of human-in-the-loop (HITL) studies has identified procedures and decision support requirements needed to implement FAM. This paper outlines a suggested FAM procedure and associated decision support functionality based on these HITL studies. Both the tools used to support the HITLs and the planned NextGen technologies available in the mid-term are described and compared. The mid-term implementation of several NextGen capabilities, specifically upgrades to the Traffic Management Unit (TMU), the initial release of an en route automation system, the deployment of a digital data communication system, a more flexible voice communications network, and the introduction of a tool envisioned to manage and coordinate networked ground systems, can support the implementation of the FAM concept. Because of the variability in the overall deployment schedule of the mid-term NextGen capabilities, the dependencies of the individual NextGen capabilities are examined to determine their impact on a mid-term implementation of FAM. A cursory review of the different technologies suggests that new functionality slated for the new en route automation system is a critical enabling technology for FAM, as is the functionality to manage and coordinate networked ground systems. Upgrades to the TMU are less critical, but important nonetheless for FAM to be fully realized. A flexible voice communications network and a digital data communication system could allow more flexible FAM operations, but they are not as essential.
Integrated System Health Management (ISHM) Implementation in Rocket Engine Testing
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Franzl, Richard; Walker, Mark; Kapadia, Ravi; Venkatesh, Meera
2010-01-01
A pilot operational ISHM capability has been implemented for the E-2 Rocket Engine Test Stand (RETS) and a Chemical Steam Generator (CSG) test article at NASA Stennis Space Center. The implementation currently includes an ISHM computer and a large display in the control room. The paper will address the overall approach, tools, and requirements. It will also address the infrastructure and architecture. Specific anomaly detection algorithms will be discussed regarding leak detection and diagnostics, valve validation, and sensor validation. It will also describe the development and use of a Health Assessment Database System (HADS) as a repository for measurements, health, configuration, and knowledge related to a system with ISHM capability. It will conclude with a discussion of user interfaces and a description of the operation of the ISHM system prior to, during, and after testing.
Evaluation of the multifunctional worker role: a stakeholder analysis.
Jones, K R; Redman, R W; VandenBosch, T M; Holdwick, C; Wolgin, F
1999-01-01
Health care organizations are rethinking how care is delivered because of incentives generated by managed care and a competitive marketplace. An evaluation of a work redesign project that involved the creation of redesigned unlicensed caregiver roles is described. The effect of model implementation on patients, multiple categories of caregivers, and physicians was measured using several different approaches to data collection. In this evaluation, caregivers perceived the institutional culture to be both market-driven and hierarchical. The work redesign, along with significant changes in unit configuration and leadership over the same period, significantly reduced job security and satisfaction with supervision. Quality indicators suggested short-term declines in quality during model implementation with higher levels of quality after implementation issues were resolved. Objective measurement of the outcomes of work redesign initiatives is imperative to assure appropriate adjustments and responses to caregiver concerns.
Design and implementation of a programming circuit in radiation-hardened FPGA
NASA Astrophysics Data System (ADS)
Lihua, Wu; Xiaowei, Han; Yan, Zhao; Zhongli, Liu; Fang, Yu; Chen, Stanley L.
2011-08-01
We present a novel programming circuit used in our radiation-hardened field programmable gate array (FPGA) chip. This circuit provides the ability to write user-defined configuration data into an FPGA and then read it back. The proposed circuit adopts a direct-access programming point scheme instead of the typical long token shift register chain. It not only saves area but also provides more flexible configuration operations. By configuring the proposed partial configuration control register, the smallest configuration section can be conveniently configured as a single data unit, and flexible partial configuration can be easily implemented. The hierarchical simulation scheme, optimization of the critical path and the elaborate layout plan make this circuit work well. A radiation-hardened-by-design programming point is also introduced. This circuit has been implemented in a static random access memory (SRAM)-based FPGA fabricated in a 0.5 μm partial-depletion silicon-on-insulator CMOS process. The function test results of the fabricated chip indicate that this programming circuit successfully realizes the desired functions in configuration and read-back. Moreover, the radiation test results indicate that the programming circuit has a total dose tolerance of 1 × 10^5 rad(Si), dose rate survivability of 1.5 × 10^11 rad(Si)/s and neutron fluence immunity of 1 × 10^14 n/cm^2.
The Application of SNiPER to the JUNO Simulation
NASA Astrophysics Data System (ADS)
Lin, Tao; Zou, Jiaheng; Li, Weidong; Deng, Ziyan; Fang, Xiao; Cao, Guofu; Huang, Xingtao; You, Zhengyun; JUNO Collaboration
2017-10-01
The JUNO (Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment which is designed to determine neutrino mass hierarchy and precisely measure oscillation parameters. As one of the important systems, the JUNO offline software is being developed using the SNiPER software. In this proceeding, we focus on the requirements of JUNO simulation and present the working solution based on the SNiPER. The JUNO simulation framework is in charge of managing event data, detector geometries and materials, physics processes, simulation truth information etc. It glues physics generator, detector simulation and electronics simulation modules together to achieve a full simulation chain. In the implementation of the framework, many attractive characteristics of the SNiPER have been used, such as dynamic loading, flexible flow control, multiple event management and Python binding. Furthermore, additional efforts have been made to make both detector and electronics simulation flexible enough to accommodate and optimize different detector designs. For the Geant4-based detector simulation, each sub-detector component is implemented as a SNiPER tool which is a dynamically loadable and configurable plugin. So it is possible to select the detector configuration at runtime. The framework provides the event loop to drive the detector simulation and interacts with the Geant4 which is implemented as a passive service. All levels of user actions are wrapped into different customizable tools, so that user functions can be easily extended by just adding new tools. The electronics simulation has been implemented by following an event driven scheme. The SNiPER task component is used to simulate data processing steps in the electronics modules. The electronics and trigger are synchronized by triggered events containing possible physics signals. The JUNO simulation software has been released and is being used by the JUNO collaboration to do detector design optimization, event reconstruction algorithm development and physics sensitivity studies.
Controlling changes - lessons learned from waste management facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B.M.; Koplow, A.S.; Stoll, F.E.
This paper discusses lessons learned about change control at the Waste Reduction Operations Complex (WROC) and Waste Experimental Reduction Facility (WERF) of the Idaho National Engineering Laboratory (INEL). WROC and WERF have developed and implemented change control and an as-built drawing process and have identified structures, systems, and components (SSCS) for configuration management. The operations have also formed an Independent Review Committee to minimize costs and resources associated with changing documents. WROC and WERF perform waste management activities at the INEL. WROC activities include storage, treatment, and disposal of hazardous and mixed waste. WERF provides volume reduction of solid low-level waste through compaction, incineration, and sizing operations. WROC and WERF's efforts aim to improve change control processes that have worked inefficiently in the past.
Architecture for the Next Generation System Management Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallard, Jerome; Lebre, I Adrien; Morin, Christine
2011-01-01
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as Clusters, Grids and Clouds. These platforms are different in terms of hardware and software resources as well as locality: some span across multiple sites and multiple administrative domains whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up the scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists who, in most cases, would prefer to focus on their research rather than spending time dealing with platform configuration concerns. In this article, we advocate for a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regard to usual approaches is that they generally only focus on the software layer whereas we address both the hardware and the software expectations through a unique system. For each application, scientists describe their requirements through the definition of a Virtual Platform (VP) and a Virtual System Environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of: (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the setup of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration for the physical resources and system management tools. This formalism leverages Goldberg's theory for recursive virtual machines by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and the software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid 5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula or Eucalyptus, for instance.
NASA Technical Reports Server (NTRS)
1988-01-01
This Preliminary Project Implementation Plan (PPIP) was used to examine the feasibility of replacing the current Solid Rocket Boosters on the Space Shuttle with Liquid Rocket Boosters (LRBs). The study determined the implications of integrating the LRB with the Space Transportation System at the earliest practical date. The purpose was to identify and define all elements required in a full-scale development program for the LRB. This will be a reference guide for management of the LRB program, addressing such requirements as design and development, configuration management, performance measurement, manufacturing, product assurance and verification, launch operations, and mission operations support.
NASA Astrophysics Data System (ADS)
Anderson, Thomas S.
2016-05-01
The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD-network certified by USARMY as the Dragon Pulse Information Management System. This network-available modeling environment is an environment for modeling models, where models are configured using domain-relevant semantics, use network-available systems, sensors, databases and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and on subject matter input. Three recent ARMY use cases are discussed: (a) ISR SoS; (b) modeling and simulation behavior validation; (c) a networked digital library with behaviors.
McGinnis, John W.
1980-01-01
The very same technological advances that support distributed systems have also dramatically increased the efficiency and capabilities of centralized systems, making it more complex for health care managers to select the “right” system architecture to meet their particular needs. How this selection can be made with a reasonable degree of managerial comfort is the focus of this paper. The approach advocated is based on experience in developing the Tri-Service Medical Information System (TRIMIS) program. Along with this, technical standards and configuration management procedures were developed that provided the necessary guidance to implement the selected architecture and to allow it to change in a controlled way over its life cycle.
Configurable unitary transformations and linear logic gates using quantum memories.
Campbell, G T; Pinel, O; Hosseini, M; Ralph, T C; Buchler, B C; Lam, P K
2014-08-08
We show that a set of optical memories can act as a configurable linear optical network operating on frequency-multiplexed optical states. Our protocol is applicable to any quantum memories that employ off-resonant Raman transitions to store optical information in atomic spins. In addition to the configurability, the protocol also offers favorable scaling with an increasing number of modes, where N memories can be configured to implement arbitrary N-mode unitary operations during storage and readout. We demonstrate the versatility of this protocol by showing an example where cascaded memories are used to implement a conditional CZ gate.
An Analysis of Naval Aviation Configuration Status Accounting.
1983-12-01
Only fragments of the references and table of contents are preserved: a citation to Audit Service Report T30211, Multilocation Audit of Configuration Management of Aeronautical Equipment, 17 August 1982; a table-of-contents entry for Configuration Management Status Accounting within the Department of Defense; and a note that sources included published articles written by both military and private industry managers, technical papers delivered at symposia and conferences, and Naval Audit ...
NASA Astrophysics Data System (ADS)
Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.
2014-06-01
In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool, which is currently in production. Nevertheless, we have recently started a comparison study involving other tools able to provide specific server installation and configuration features and also offer a proper, fully customizable solution as an alternative to Quattor. Our choice at the moment fell on the integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for the server provisioning and management operations. The tools should provide the following properties in order to replicate and gradually improve the current system features: implement a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configuration; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the new system suitability in the INFN-T1 environment.
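As a hedged illustration of the kind of storage-constraint check mentioned above, the sketch below verifies that SAN-access kernel modules are blacklisted in the modprobe configuration and not currently loaded before partitioning starts. The module names and file paths are assumptions, not the site's actual configuration or the tools' built-in functionality.

```python
# Illustrative pre-partitioning check: SAN driver modules must be blacklisted
# and not loaded. Module names and paths are assumptions (Linux host).
from pathlib import Path

BLACKLIST = {"qla2xxx", "lpfc"}            # e.g. Fibre Channel HBA drivers
MODPROBE_DIR = Path("/etc/modprobe.d")

def blacklist_declared() -> bool:
    """True if every module in BLACKLIST has a 'blacklist' line in modprobe.d."""
    lines = []
    for conf in MODPROBE_DIR.glob("*.conf"):
        lines += conf.read_text().splitlines()
    declared = {parts[1] for parts in (l.split() for l in lines)
                if len(parts) > 1 and parts[0] == "blacklist"}
    return BLACKLIST <= declared

def none_loaded() -> bool:
    """True if none of the blacklisted modules appear in /proc/modules."""
    loaded = {line.split()[0]
              for line in Path("/proc/modules").read_text().splitlines()}
    return not (BLACKLIST & loaded)

if __name__ == "__main__":
    print("SAN drivers blacklisted and unloaded:",
          blacklist_declared() and none_loaded())
```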
A graph based algorithm for adaptable dynamic airspace configuration for NextGen
NASA Astrophysics Data System (ADS)
Savai, Mehernaz P.
The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such a static configuration of sectors will not be able to handle the continued growth of air travel, which is projected to be more than double the current traffic by 2025. Under the initiative of the Next Generation of Air Transportation system (NextGen), the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that the sectors should adapt to the changing traffic so as to reduce the controller workload variance with time while increasing the throughput. Changes in the resectorization should be such that there is a minimal increase in the exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost in order to justify a change. This leads to the analysis of the concept of transition workload, which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric which considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented in the Dynamic Airspace Configuration (DAC) Algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of airspace with given user-defined constraints and hence provides the user with more flexibility and control over the various partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major (busy) airports need to be given more importance and hence are treated separately. Thus the algorithm takes into account the air route structure while finding a balance between sector workloads. The performance of the proposed algorithms and performance metrics is validated with Enhanced Traffic Management System (ETMS) air traffic data.
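A much-simplified illustration of the transition-workload idea is sketched below: traffic-weighted waypoints whose sector assignment changes between two configurations contribute to the cost. This is only a sketch of the concept; the dissertation's metric also accounts for airspace geometry, and the sector names and flow numbers here are invented.

```python
# Toy transition-workload measure between two sectorizations.
def transition_workload(old_sector: dict, new_sector: dict,
                        traffic: dict) -> float:
    """old_sector/new_sector map waypoint -> sector id; traffic maps
    waypoint -> flights per hour crossing it (all values assumed)."""
    return sum(traffic.get(wp, 0.0)
               for wp in old_sector
               if new_sector.get(wp) != old_sector[wp])

old = {"WPT1": "ZKC90", "WPT2": "ZKC90", "WPT3": "ZKC92"}
new = {"WPT1": "ZKC90", "WPT2": "ZKC92", "WPT3": "ZKC92"}
flows = {"WPT1": 12.0, "WPT2": 30.0, "WPT3": 8.0}

print(transition_workload(old, new, flows))   # only WPT2 moved -> 30.0
```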
Initial development of the DIII–D snowflake divertor control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolemen, Egemen; Vail, P. J.; Makowski, M. A.
Simultaneous control of two proximate magnetic field nulls in the divertor region is demonstrated on DIII–D to enable plasma operations in an advanced magnetic configuration known as the snowflake divertor (SFD). The SFD is characterized by a second-order poloidal field null, created by merging two first-order nulls of the standard divertor configuration. The snowflake configuration has many magnetic properties, such as high poloidal flux expansion, large plasma-wetted area, and additional strike points, that are advantageous for divertor heat flux management in future fusion reactors. However, the magnetic configuration of the SFD is highly-sensitive to changes in currents within the plasma and external coils and therefore requires complex magnetic control. The first real-time snowflake detection and control system on DIII–D has been implemented in order to stabilize the configuration. The control algorithm calculates the position of the two nulls in real-time by locally-expanding the Grad–Shafranov equation in the divertor region. A linear relation between variations in the poloidal field coil currents and changes in the null locations is then analytically derived. This formulation allows for simultaneous control of multiple coils to achieve a desired SFD configuration. It is shown that the control enabled various snowflake configurations on DIII–D in scenarios such as the double-null advanced tokamak. In conclusion, the SFD resulted in a 2.5× reduction in the peak heat flux for many energy confinement times (2–3 s) without any adverse effects on core plasma performance.
Initial development of the DIII–D snowflake divertor control
Kolemen, Egemen; Vail, P. J.; Makowski, M. A.; ...
2018-04-11
Simultaneous control of two proximate magnetic field nulls in the divertor region is demonstrated on DIII–D to enable plasma operations in an advanced magnetic configuration known as the snowflake divertor (SFD). The SFD is characterized by a second-order poloidal field null, created by merging two first-order nulls of the standard divertor configuration. The snowflake configuration has many magnetic properties, such as high poloidal flux expansion, large plasma-wetted area, and additional strike points, that are advantageous for divertor heat flux management in future fusion reactors. However, the magnetic configuration of the SFD is highly-sensitive to changes in currents within the plasma and external coils and therefore requires complex magnetic control. The first real-time snowflake detection and control system on DIII–D has been implemented in order to stabilize the configuration. The control algorithm calculates the position of the two nulls in real-time by locally-expanding the Grad–Shafranov equation in the divertor region. A linear relation between variations in the poloidal field coil currents and changes in the null locations is then analytically derived. This formulation allows for simultaneous control of multiple coils to achieve a desired SFD configuration. It is shown that the control enabled various snowflake configurations on DIII–D in scenarios such as the double-null advanced tokamak. In conclusion, the SFD resulted in a 2.5× reduction in the peak heat flux for many energy confinement times (2–3 s) without any adverse effects on core plasma performance.
Artificial Intelligent Platform as Decision Tool for Asset Management, Operations and Maintenance.
2018-01-04
An Artificial Intelligence (AI) system has been developed and implemented for water, wastewater and reuse plants to improve the management of sensors, short- and long-term maintenance plans, and asset and investment management plans. It is based on an integrated approach to capture data from different computer systems and files. It adds a layer of intelligence to the data. It serves as a repository of key current and future operations and maintenance conditions that a plant needs to have knowledge of. With this information, it is able to simulate the configuration of processes and assets for those conditions to improve or optimize operations, maintenance and asset management, using the IViewOps (Intelligent View of Operations) model. Based on the optimization through model runs, it is able to create output files that can feed data to other systems and inform the staff regarding optimal solutions to the conditions experienced or anticipated in the future.
An SNMP-based solution to enable remote ISO/IEEE 11073 technical management.
Lasierra, Nelia; Alesanco, Alvaro; García, José
2012-07-01
This paper presents the design and implementation of an architecture based on the integration of simple network management protocol version 3 (SNMPv3) and the standard ISO/IEEE 11073 (X73) to manage technical information in home-based telemonitoring scenarios. This architecture includes the development of an SNMPv3-proxyX73 agent which comprises a management information base (MIB) module adapted to X73. In the proposed scenario, medical devices (MDs) send information to a concentrator device [designated as compute engine (CE)] using the X73 standard. This information together with extra information collected in the CE is stored in the developed MIB. Finally, the information collected is available for remote access via SNMP connection. Moreover, alarms and events can be configured by an external manager in order to provide warnings of irregularities in the MDs' technical performance evaluation. This proposed SNMPv3 agent provides a solution to integrate and unify technical device management in home-based telemonitoring scenarios fully adapted to X73.
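For orientation, the snippet below is a hedged sketch of how an external SNMPv3 manager could read one value from such a proxy agent's X73-adapted MIB using the pysnmp library; the host name, credentials and OID are placeholders, and the actual MIB structure is the one defined in the paper, not reproduced here.

```python
# Hypothetical SNMPv3 GET against the proxy agent on the compute engine.
# Host, user, passphrases and OID are placeholders for illustration.
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    UsmUserData('tm-manager', 'authPassphrase', 'privPassphrase'),
    UdpTransportTarget(('ce.home.example', 161)),               # compute engine
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.4.1.99999.1.2.1.0'))))   # placeholder OID

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```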
Steitz, Bryan D; Weinberg, Stuart T; Danciu, Ioana; Unertl, Kim M
2016-01-01
Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. To describe and discuss the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings.
Innovation Configurations: Analyzing the Adaptations of Innovations.
ERIC Educational Resources Information Center
Hall, Gene E.; Loucks, Susan F.
When implementing an innovation, a multitude of components interact to change not only the users, but the innovation as well. This guide explains the concept of innovation configurations, or adaptations made in innovations during implementation. After presenting and discussing past research on innovation changes, the report outlines a five step…
NASA Technical Reports Server (NTRS)
Gavert, Raymond B.
1990-01-01
Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations. Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed.
Configuration management issues and objectives for a real-time research flight test support facility
NASA Technical Reports Server (NTRS)
Yergensen, Stephen; Rhea, Donald C.
1988-01-01
An account is given of configuration management activities for the Western Aeronautical Test Range (WATR) at NASA-Ames, whose primary function is the conduct of aeronautical research flight testing through real-time processing and display, tracking, and communications systems. The processing of WATR configuration change requests for specific research flight test projects must be conducted in such a way as to refrain from compromising the reliability of WATR support to all project users. Configuration management's scope ranges from mission planning to operations monitoring and performance trend analysis.
ZoroufchiBenis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza
2015-01-01
Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. The term cluster of contiguous grids that exceed a threshold value was used to calculate the station dosage. Selection of the best configuration of the AQMN was done based on the ratio of a station's dosage to the total dosage in the network. Six monitoring stations were needed to detect the pollutant concentrations around the study area for estimating the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. The optimal AQMN enables authorities to implement an effective program of air quality management for protecting human health.
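A toy sketch of the ranking step is given below: candidate grids are ordered by their share of the total network dosage and picked until a target efficiency is reached. The dosage numbers are invented for illustration; the paper's actual method derives station dosages from clusters of contiguous above-threshold grids and dispersion modelling.

```python
# Greedy selection of monitoring grids by dosage share (illustrative only).
def select_stations(dosage: dict, target_efficiency: float = 0.99):
    """dosage maps grid id -> concentration-weighted exposure it represents."""
    total = sum(dosage.values())
    chosen, covered = [], 0.0
    for grid, d in sorted(dosage.items(), key=lambda kv: kv[1], reverse=True):
        chosen.append(grid)
        covered += d
        if covered / total >= target_efficiency:
            break
    return chosen, covered / total

grids = {"G01": 42.0, "G02": 35.5, "G03": 12.1, "G04": 6.0, "G05": 2.9, "G06": 1.0}
stations, efficiency = select_stations(grids)
print(stations, round(efficiency, 3))
```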
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, A.G.
This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline.
A Model-based Approach to Reactive Self-Configuring Systems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Nayak, P. Pandurang
1996-01-01
This paper describes Livingstone, an implemented kernel for a self-reconfiguring autonomous system that is reactive and uses component-based declarative models. The paper presents a formal characterization of the representation formalism used in Livingstone, and reports on our experience with the implementation in a variety of domains. Livingstone's representation formalism achieves broad coverage of hybrid software/hardware systems by coupling the concurrent transition system models underlying concurrent reactive languages with the discrete qualitative representations developed in model-based reasoning. We achieve a reactive system that performs significant deductions in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional, conflict-based feedback controller that generates focused, optimal responses. Livingstone automates all these tasks using a single model and a single core deductive engine, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as the core autonomy architecture for Deep Space One, the first spacecraft for NASA's New Millennium program.
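As a toy illustration of the propositional, conflict-based diagnosis idea mentioned above, the following sketch finds minimal candidate diagnoses for two inverters in series given an inconsistent observation; it is a brute-force stand-in for Livingstone's focused search, and the component model is invented.

from itertools import combinations

def consistent(assumed_ok, observations):
    # Hypothetical model of two inverters in series: out = not(not(in))
    # when both components behave nominally.
    prediction = observations["in"]
    if {"inv1", "inv2"} <= assumed_ok:
        prediction = not (not prediction)
        return prediction == observations["out"]
    return True   # with a suspect component, no prediction and hence no conflict

components = {"inv1", "inv2"}
obs = {"in": True, "out": False}   # inconsistent with the all-nominal model

for size in range(len(components) + 1):
    diagnoses = [set(c) for c in combinations(sorted(components), size)
                 if consistent(components - set(c), obs)]
    if diagnoses:
        print("minimal diagnoses:", diagnoses)
        break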
Criteria Underlying the Formation of Alternative IMS Configurations.
ERIC Educational Resources Information Center
Dave, Ashok
To assist the formation of IMS (Instructional Management System) configurations, three categories of characteristics are developed and explained. Categories 1 and 2 emphasize automation, and the necessity of forming workable configurations to carry out instructional management for Southwest Regional Laboratory developed instructional and/or…
Shen, Yi; Dai, Wei; Richards, Virginia M
2015-03-01
A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
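A compact sketch, in Python rather than MATLAB, of the underlying grid-based maximum-likelihood estimation of a logistic psychometric function's threshold, slope, and lapse rate follows; the adaptive stimulus placement and object-oriented interfaces of the actual UML toolbox are omitted, and the parameter ranges and simulated observer are assumptions.

import numpy as np

def psychometric(x, alpha, beta, lapse, guess=0.5):
    # Logistic psychometric function (2AFC assumed): threshold alpha,
    # slope beta, lapse rate, fixed guess rate.
    core = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
    return guess + (1.0 - guess - lapse) * core

alphas = np.linspace(-10, 10, 61)
betas = np.geomspace(0.1, 10, 41)
lapses = np.linspace(0.0, 0.1, 11)
A, B, L = np.meshgrid(alphas, betas, lapses, indexing="ij")
loglik = np.zeros_like(A)

rng = np.random.default_rng(1)
for _ in range(200):                                          # simulated trials
    x = rng.uniform(-10, 10)
    correct = rng.random() < psychometric(x, 2.0, 1.0, 0.02)  # "true" observer
    p = psychometric(x, A, B, L)
    loglik += np.log(np.clip(p if correct else 1.0 - p, 1e-9, 1.0))

i = np.unravel_index(np.argmax(loglik), loglik.shape)
print("ML estimates:", alphas[i[0]], betas[i[1]], lapses[i[2]])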
Recipe for Success: Digital Viewables
NASA Technical Reports Server (NTRS)
LaPha, Steven; Gaydos, Frank
2014-01-01
The Engineering Services Contract (ESC) and Information Management Communication Support contract (IMCS) at Kennedy Space Center (KSC) provide services to NASA with respect to flight and ground systems design and development. These groups provide the necessary tools, aid, and best-practice methodologies required for efficient, optimized design and process development. The team is responsible for configuring and implementing systems and software, along with training, documentation, and administering standards. The team supports over 200 engineers and design specialists with the use of Windchill, Creo Parametric, NX, AutoCAD, and a variety of other design and analysis tools.
An overview of the NASA Advanced Propulsion Concepts program
NASA Technical Reports Server (NTRS)
Curran, Francis M.; Bennett, Gary L.; Frisbee, Robert H.; Sercel, Joel C.; Lapointe, Michael R.
1992-01-01
The NASA Advanced Propulsion Concepts (APC) program for the development of long-term space propulsion system schemes is managed by both NASA-Lewis and JPL and is tasked with the identification and conceptual development of high-risk/high-payoff configurations. Both theoretical and experimental investigations have been undertaken in technology areas deemed essential to the implementation of candidate concepts. These APC candidates encompass very high energy density chemical propulsion systems, advanced electric propulsion systems, and an antiproton-catalyzed nuclear propulsion concept. A development status evaluation is presented for these systems.
NASA Astrophysics Data System (ADS)
Magee, Jeff; Moffett, Jonathan
1996-06-01
Special Issue on Management This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, Sysman and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue. To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper 'Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. Krause and Zimmermann in their paper 'Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper 'Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in 'Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, 'Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform. Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper 'Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed.
Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture. The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, are coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.
Integrated Autonomous Network Management (IANM) Multi-Topology Route Manager and Analyzer
2008-02-01
[Figure 6-2. Internal software organization. The figure lists the main internal software components, including zebra, tmg, mtrcli, xinetd (tftp), mysql, the configuration files mtrrm.conf and mtrrmAggregator.properties, /tftpboot files, NetFlow PDUs, snmp/telnet configuration upload/download, and an OSPFv2 user interface; the remainder of the figure text is not recoverable. Figure 6-2 illustrates the main internal software organization.]
Reyers, Belinda; Nel, Jeanne L; O'Farrell, Patrick J; Sitas, Nadia; Nel, Deon C
2015-06-16
Achieving the policy and practice shifts needed to secure ecosystem services is hampered by the inherent complexities of ecosystem services and their management. Methods for the participatory production and exchange of knowledge offer an avenue to navigate this complexity together with the beneficiaries and managers of ecosystem services. We develop and apply a knowledge coproduction approach based on social-ecological systems research and assess its utility in generating shared knowledge and action for ecosystem services. The approach was piloted in South Africa across four case studies aimed at reducing the risk of disasters associated with floods, wildfires, storm waves, and droughts. Different configurations of stakeholders (knowledge brokers, assessment teams, implementers, and bridging agents) were involved in collaboratively designing each study, generating and exchanging knowledge, and planning for implementation. The approach proved useful in the development of shared knowledge on the sizable contribution of ecosystem services to disaster risk reduction. This knowledge was used by stakeholders to design and implement several actions to enhance ecosystem services, including new investments in ecosystem restoration, institutional changes in the private and public sector, and innovative partnerships of science, practice, and policy. By bringing together multiple disciplines, sectors, and stakeholders to jointly produce the knowledge needed to understand and manage a complex system, knowledge coproduction approaches offer an effective avenue for the improved integration of ecosystem services into decision making.
System-Oriented Runway Management Concept of Operations
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Atkins, Stephen
2015-01-01
This document describes a concept for runway management that maximizes the overall efficiency of arrival and departure operations at an airport or group of airports. Specifically, by planning airport runway configurations/usage, it focuses on the efficiency with which arrival flights reach their parking gates from their arrival fixes and departure flights exit the terminal airspace from their parking gates. In the future, the concept could be expanded to include the management of other limited airport resources. While most easily described in the context of a single airport, the concept applies equally well to a group of airports that comprise a metroplex (i.e., airports in close proximity that share resources such that operations at the airports are at least partially dependent) by including the coordination of runway usage decisions between the airports. In fact, the potential benefit of the concept is expected to be larger in future metroplex environments due to the increasing need to coordinate the operations at proximate airports to more efficiently share limited airspace resources. This concept, called System-Oriented Runway Management (SORM), is further broken down into a set of airport traffic management functions that share the principle that operational performance must be measured over the complete surface and airborne trajectories of the airport's arrivals and departures. The "system-oriented" term derives from the belief that the traffic management objective must consider the efficiency of operations over a wide range of aircraft movements and National Airspace System (NAS) dynamics. The SORM concept comprises three primary elements: strategic airport capacity planning, airport configuration management, and combined arrival/departure runway planning. Some aspects of the SORM concept, such as using airport configuration management as a mechanism for improving aircraft efficiency, are novel. Other elements (e.g., runway scheduling, which is a part of combined arrival/departure runway scheduling) have been well studied, but are included in the concept for completeness and to allow the concept to define the necessary relationship among the elements. The goal of this document is to describe the overall SORM concept and how it would apply both within the NAS and potential future Next Generation Air Traffic System (NextGen) environments, including research conducted to date. Note that the concept is based on the belief that runways are the primary constraint and the decision point for controlling efficiency, but the efficiency of runway management must be measured over a wide range of space and time. Implementation of the SORM concept is envisioned through a collection of complementary, necessary capabilities collectively focused on ensuring efficient arrival and departure traffic management, where that efficiency is measured not only in terms of runway efficiency but in terms of the overall trajectories between parking gates and transition fixes. For the more original elements of the concept, such as airport configuration management, this document proposes specific air traffic management (ATM) decision-support automation for realizing the concept.
Advanced consequence management program: challenges and recent real-world implementations
NASA Astrophysics Data System (ADS)
Graser, Tom; Barber, K. S.; Williams, Bob; Saghir, Feras; Henry, Kurt A.
2002-08-01
The Enhanced Consequence Management, Planning and Support System (ENCOMPASS) was developed under DARPA's Advanced Consequence Management program to assist decision-makers operating in crisis situations such as terrorist attacks using conventional and unconventional weapons and natural disasters. ENCOMPASS provides the tools for first responders, incident commanders, and officials at all levels to share vital information and, consequently, plan and execute a coordinated response to incidents of varying complexity and size. ENCOMPASS offers custom configuration of components with capabilities that include map-based situation assessment, situation-based response checklists, casualty tracking, and epidemiological surveillance. Developing and deploying such a comprehensive system posed significant challenges for DARPA program management, due to an inherently complex domain, a broad spectrum of customer sites and skill sets, an often inhospitable runtime environment, demanding development-to-deployment transition requirements, and a technically diverse and geographically distributed development team. This paper introduces ENCOMPASS and explores these challenges, followed by an outline of selected ENCOMPASS deployments, demonstrating how ENCOMPASS can enhance consequence management in a variety of real-world contexts.
Risk analysis of Safety Service Patrol (SSP) systems in Virginia.
Dickey, Brett D; Santos, Joost R
2011-12-01
The transportation infrastructure is a vital backbone of any regional economy as it supports workforce mobility, tourism, and a host of socioeconomic activities. In this article, we specifically examine the incident management function of the transportation infrastructure. In many metropolitan regions, incident management is handled primarily by safety service patrols (SSPs), which monitor and resolve roadway incidents. In Virginia, SSP allocation across highway networks is based typically on average vehicle speeds and incident volumes. This article implements a probabilistic network model that partitions "business as usual" traffic flow with extreme-event scenarios. Results of simulated network scenarios reveal that flexible SSP configurations can improve incident resolution times relative to predetermined SSP assignments. © 2011 Society for Risk Analysis.
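As a toy illustration of why flexible patrol configurations can shorten resolution times, the following Monte Carlo sketch compares a fixed beat assignment with a nearest-patrol dispatch rule; all locations, speeds, and clearance times are made up and this is not the article's probabilistic network model.

import random

random.seed(7)
SPEED_KMH = 80.0
CLEARANCE_MIN = 20.0
patrols = [10.0, 60.0]            # patrol home positions on a 100 km corridor

def resolution_time(incident_km, patrol_km):
    response_min = abs(incident_km - patrol_km) / SPEED_KMH * 60.0
    return response_min + CLEARANCE_MIN

fixed, flexible = [], []
for _ in range(10_000):
    x = random.uniform(0.0, 100.0)                        # incident location
    beat_patrol = patrols[0] if x < 50.0 else patrols[1]  # fixed beats by milepost
    fixed.append(resolution_time(x, beat_patrol))
    flexible.append(min(resolution_time(x, p) for p in patrols))  # nearest patrol responds

print(f"fixed beats:    {sum(fixed) / len(fixed):.1f} min average resolution")
print(f"nearest patrol: {sum(flexible) / len(flexible):.1f} min average resolution")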
NASA Technical Reports Server (NTRS)
Wilber, George F.
2017-01-01
This Software Description Document (SDD) captures the design for developing the Flight Interval Management (FIM) system Configurable Graphics Display (CGD) software. Specifically this SDD describes aspects of the Boeing CGD software and the surrounding context and interfaces. It does not describe the Honeywell components of the CGD system. The SDD provides the system overview, architectural design, and detailed design with all the necessary information to implement the Boeing components of the CGD software and integrate them into the CGD subsystem within the larger FIM system. Overall system and CGD system-level requirements are derived from the CGD SRS (in turn derived from the Boeing System Requirements Design Document (SRDD)). Display and look-and-feel requirements are derived from Human Machine Interface (HMI) design documents and working group recommendations. This Boeing CGD SDD is required to support the upcoming Critical Design Review (CDR).
Flexibility First, Then Standardize: A Strategy for Growing Inter-Departmental Systems.
á Torkilsheyggi, Arnvør
2015-01-01
Any attempt to use IT to standardize work practices faces the challenge of finding a balance between standardization and flexibility. In implementing electronic whiteboards with the goal of standardizing inter-departmental practices, a hospital in Denmark chose to follow the strategy of "flexibility first, then standardization." To improve the local grounding of the system, they first focused on flexibility by configuring the whiteboards to support intra-departmental practices. Subsequently, they focused on standardization by using the whiteboards to negotiate standardization of inter-departmental practices. This paper investigates the chosen strategy and finds: that super users on many wards managed to configure the whiteboard to support intra-departmental practices; that initiatives to standardize inter-departmental practices improved coordination of certain processes; and that the chosen strategy posed a challenge for finding the right time and manner to shift the balance from flexibility to standardization.
Configuration management issues and objectives for a real-time research flight test support facility
NASA Technical Reports Server (NTRS)
Yergensen, Stephen; Rhea, Donald C.
1988-01-01
Presented are some of the critical issues and objectives pertaining to configuration management for the NASA Western Aeronautical Test Range (WATR) of Ames Research Center. The primary mission of the WATR is to provide a capability for the conduct of aeronautical research flight test through real-time processing and display, tracking, and communications systems. In providing this capability, the WATR must maintain and enforce a configuration management plan which is independent of, but complementary to, various research flight test project configuration management systems. A primary WATR objective is the continued development of generic research flight test project support capability, wherein the reliability of WATR support provided to all project users is a constant priority. Therefore, the processing of configuration change requests for specific research flight test project requirements must be evaluated within a perspective that maintains this primary objective.
Review of the Water Resources Information System of Argentina
Hutchison, N.E.
1987-01-01
A representative of the U.S. Geological Survey traveled to Buenos Aires, Argentina, in November 1986, to discuss water information systems and data bank implementation in the Argentine Government Center for Water Resources Information. Software has been written by Center personnel for a minicomputer to be used to manage inventory (index) data and water quality data. Additional hardware and software have been ordered to upgrade the existing computer. Four microcomputers, statistical and data base management software, and network hardware and software for linking the computers have also been ordered. The Center plans to develop a nationwide distributed data base for Argentina that will include the major regional offices as nodes. Needs for continued development of the water resources information system for Argentina were reviewed. Identified needs include: (1) conducting a requirements analysis to define the content of the data base and insure that all user requirements are met, (2) preparing a plan for the development, implementation, and operation of the data base, and (3) developing a conceptual design to inform all development personnel and users of the basic functionality planned for the system. A quality assurance and configuration management program to provide oversight to the development process was also discussed. (USGS)
Software Configuration Management Guidebook
NASA Technical Reports Server (NTRS)
1995-01-01
The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
In this article, the problem of supporting scientific projects throughout their life cycle in a computer center is considered from every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. Given the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which places higher requirements on the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.
Integrated Systems Health Management (ISHM) Toolkit
NASA Technical Reports Server (NTRS)
Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim
2013-01-01
A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.
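A hypothetical sketch of the RCM-style structure such a toolkit encourages follows: failure modes are declared with a detection predicate and a consequence, and a monitor evaluates them against incoming telemetry. The names, points, and thresholds are invented and this is not the Health MAP API.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class FailureMode:
    name: str
    consequence: str
    detect: Callable[[Dict[str, float]], bool]   # True when telemetry indicates the failure

FAILURE_MODES = [
    FailureMode("pump_cavitation", "loss of coolant flow",
                lambda t: t["pump_rpm"] > 3000 and t["flow_lpm"] < 5.0),
    FailureMode("sensor_stuck", "undetected thermal excursion",
                lambda t: abs(t["temp_c"] - t["temp_redundant_c"]) > 10.0),
]

def monitor(telemetry: Dict[str, float]):
    # Return the failure modes (and their consequences) indicated by this sample.
    return [(fm.name, fm.consequence) for fm in FAILURE_MODES if fm.detect(telemetry)]

print(monitor({"pump_rpm": 3200, "flow_lpm": 2.0,
               "temp_c": 41.0, "temp_redundant_c": 40.5}))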
NASA Technical Reports Server (NTRS)
Jethwa, Dipan; Selmic, Rastko R.; Figueroa, Fernando
2008-01-01
This paper presents a concept of feedback control for smart actuators that are compatible with smart sensors, communication protocols, and a hierarchical Integrated System Health Management (ISHM) architecture developed by NASA's Stennis Space Center. Smart sensors and actuators typically provide functionalities such as automatic configuration, system condition awareness and self-diagnosis. Spacecraft and rocket test facilities are in the early stages of adopting these concepts. The paper presents a concept combining the IEEE 1451-based ISHM architecture with a transducer health monitoring capability to enhance the control process. A control system testbed for intelligent actuator control, with on-board ISHM capabilities, has been developed and implemented. Overviews of the IEEE 1451 standard, the smart actuator architecture, and control based on this architecture are presented.
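A simplified sketch of folding transducer health into the control loop follows: a smart actuator reports a self-diagnosed health status alongside its feedback, and the controller limits its authority when the actuator flags itself as degraded. IEEE 1451 messaging details are omitted, and the dynamics and thresholds are illustrative rather than the testbed's implementation.

class SmartActuator:
    def __init__(self):
        self.position = 0.0
        self.drive_current = 0.0

    def command(self, u):
        self.position += 0.1 * u            # crude first-order response
        self.drive_current = abs(u) * 2.0
        return self.position

    def health(self):
        # Self-diagnosis: excessive current for the commanded motion
        # suggests binding or a failing winding.
        return "degraded" if self.drive_current > 8.0 else "nominal"

def control_step(actuator, setpoint, kp=2.0, safe_command=0.5):
    u = kp * (setpoint - actuator.position)
    if actuator.health() == "degraded":
        u = max(-safe_command, min(safe_command, u))   # reduced authority
    return actuator.command(u)

act = SmartActuator()
for _ in range(20):
    pos = control_step(act, setpoint=5.0)
print(round(pos, 2), act.health())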
Space Station Freedom pressurized element interior design process
NASA Technical Reports Server (NTRS)
Hopson, George D.; Aaron, John; Grant, Richard L.
1990-01-01
The process used to develop the on-orbit working and living environment of the Space Station Freedom has some very unique constraints and conditions to satisfy. The goal is to provide maximum efficiency and utilization of the available space, in on-orbit, zero G conditions that establish a comfortable, productive, and safe working environment for the crew. The Space Station Freedom on-orbit living and working space can be divided into support for three major functions: (1) operations, maintenance, and management of the station; (2) conduct of experiments, both directly in the laboratories and remotely for experiments outside the pressurized environment; and (3) crew related functions for food preparation, housekeeping, storage, personal hygiene, health maintenance, zero G environment conditioning, individual privacy, and rest. The process used to implement these functions, the major requirements driving the design, unique considerations and constraints that influence the design, and summaries of the analysis performed to establish the current configurations are described. Sketches and pictures showing the layout and internal arrangement of the Nodes, U.S. Laboratory and Habitation modules identify the current design relationships of the common and unique station housekeeping subsystems. The crew facilities, work stations, food preparation and eating areas (galley and wardroom), and exercise/health maintenance configurations, waste management and personal hygiene area configuration are shown. U.S. Laboratory experiment facilities and maintenance work areas planned to support the wide variety and mixtures of life science and materials processing payloads are described.
Air traffic management evaluation tool
NASA Technical Reports Server (NTRS)
Sridhar, Banavar (Inventor); Chatterji, Gano Broto (Inventor); Schipper, John F. (Inventor); Bilimoria, Karl D. (Inventor); Grabbe, Shon (Inventor); Sheth, Kapil S. (Inventor)
2012-01-01
Methods for evaluating and implementing air traffic management tools and approaches for managing and avoiding an air traffic incident before the incident occurs. A first system receives parameters for flight plan configurations (e.g., initial fuel carried, flight route, flight route segments followed, flight altitude for a given flight route segment, aircraft velocity for each flight route segment, flight route ascent rate, flight route descent rate, flight departure site, flight departure time, flight arrival time, flight destination site and/or alternate flight destination site), flight plan schedule, expected weather along each flight route segment, aircraft specifics, airspace (altitude) bounds for each flight route segment, navigational aids available. The invention provides flight plan routing and direct routing or wind optimal routing, using great circle navigation and spherical Earth geometry. The invention provides for aircraft dynamics effects, such as wind effects at each altitude, altitude changes, airspeed changes and aircraft turns to provide predictions of aircraft trajectory (and, optionally, aircraft fuel use). A second system provides several aviation applications using the first system. Several classes of potential incidents are analyzed and averted, by appropriate change en route of one or more parameters in the flight plan configuration, as provided by a conflict detection and resolution module and/or traffic flow management modules. These applications include conflict detection and resolution, miles-in-trail or minutes-in-trail aircraft separation, flight arrival management, flight re-routing, weather prediction and analysis and interpolation of weather variables based upon sparse measurements. The invention combines these features to provide an aircraft monitoring system and an aircraft user system that interact and negotiate changes with each other.
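A small sketch of the spherical-Earth, great-circle geometry the description mentions: haversine distance between two waypoints and a segment flight time that accounts for an along-track wind component. The airports, airspeed, and wind value are arbitrary examples, not data from the system.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NM = 3440.065

def great_circle_nm(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in nautical miles.
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

def segment_time_hr(dist_nm, true_airspeed_kt, along_track_wind_kt):
    # Segment flight time given airspeed and an along-track wind component.
    return dist_nm / (true_airspeed_kt + along_track_wind_kt)

dist = great_circle_nm(37.62, -122.38, 41.98, -87.90)   # SFO to ORD direct
print(f"{dist:.0f} nm, {segment_time_hr(dist, 450, 40):.2f} hr with a 40 kt tailwind")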
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools that could increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for the models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specifics are preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the systems, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Williams, Daniel M.
2008-01-01
Significant air traffic increases are anticipated for the future of the National Airspace System (NAS). To cope with future traffic increases, fundamental changes are required in many aspects of the air traffic management process including the planning and use of NAS resources. Two critical elements of this process are the selection of airport runway configurations, and the effective management of active runways. Two specific research areas in NASA's Airspace Systems Program (ASP) have been identified to address efficient runway management: Runway Configuration Management (RCM) and Arrival/Departure Runway Balancing (ADRB). This report documents efforts in assessing past as well as current work in these two areas.
The CANopen Controller IP Core: Implementation, Synthesis and Test Results
NASA Astrophysics Data System (ADS)
Caramia, Maurizio; Bolognino, Luca; Montagna, Mario; Tosi, Pietro; Errico, Walter; Bigongiari, Franco; Furano, Gianluca
2011-08-01
This paper will describe the implementation and test results of the CANopen Controller IP Core (CCIPC) implemented by Thales Alenia Space and SITAEL Aerospace with the support of ESA in the frame of the EXOMARS Project. The CCIPC is a configurable VHDL implementation of the CANopen protocol [1]; it is foreseen to be used as CAN bus slave controller within the EXOMARS Entry, Descent and Landing Demonstrator Module (EDM) and Rover Module. The CCIPC features, configuration capability, synthesis and test results will be described and the evidence of the state of maturity of this innovative IP core will be demonstrated.
ERIC Educational Resources Information Center
Hofman, W. H. Adriaan; Hofman, Roelande H.
2011-01-01
Purpose: In this study the authors focus on different (configurations of) leadership or management styles in schools for general and vocational education. Findings: Using multilevel (students and schools) analyses, strong differences in effective management styles between schools with different student populations were observed. Conclusions: The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargo, G.F. Jr.
1994-10-11
The DOE Standard defines the configuration management program by the five basic program elements of "program management," "design requirements," "document control," "change control," and "assessments," and the two adjunct recovery programs of "design reconstitution" and "material condition and aging management." The CM model of five elements and two adjunct programs strengthens the necessary technical and administrative control to establish and maintain a consistent technical relationship among the requirements, physical configuration, and documentation. Although the DOE Standard was originally developed for the operational phase of nuclear facilities, this plan has the flexibility to be adapted and applied to all life-cycle phases of both nuclear and non-nuclear facilities. The configuration management criteria presented in this plan endorse the DOE Standard and have been tailored specifically to address the technical relationship of requirements, physical configuration, and documentation during the full life cycle of the 101-SY Hydrogen Mitigation Test Project Mini-Data Acquisition and Control System of the Tank Waste Remediation System.
Software life cycle dynamic simulation model: The organizational performance submodel
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1985-01-01
The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
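The following is a toy Euler integration of the kind of parameterized differential equations described (product, staffing, and funding levels); the coefficients, functional forms, and the money-runs-out rule are invented for illustration and are not the submodel's actual equations.

def simulate(weeks=52, dt=1.0):
    product, staff, funds = 0.0, 5.0, 500.0      # tasks completed, people, $k
    productivity = 0.8                           # tasks per person-week
    burn_rate = 4.0                              # $k per person-week
    target_staff = 12.0                          # management-directed staffing level
    hire_rate = 0.15                             # fraction of staffing gap closed per week

    history = []
    for _ in range(int(weeks / dt)):
        d_product = productivity * staff                 # product flow
        d_staff = hire_rate * (target_staff - staff)     # staffing response to management
        d_funds = -burn_rate * staff                     # funding consumption
        product += d_product * dt
        staff += d_staff * dt
        funds += d_funds * dt
        if funds <= 0.0:                                 # external influence: funding exhausted
            staff = 0.0
        history.append((product, staff, funds))
    return history

print(simulate()[-1])                                    # final product, staff, funds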
Traffic Management Coordinator Evaluation of the Dynamic Weather Routes Concept and System
NASA Technical Reports Server (NTRS)
Gong, Chester
2014-01-01
Dynamic Weather Routes (DWR) is a weather-avoidance system for airline dispatchers and FAA traffic managers that continually searches for and advises the user of more efficient routes around convective weather. NASA and American Airlines (AA) have been conducting an operational trial of DWR since July 17, 2012. The objective of this evaluation is to assess DWR from a traffic management coordinator (TMC) perspective, using recently retired TMCs and actual DWR reroute advisories that were rated acceptable by AA during the operational trial. Results from the evaluation showed that the primary reasons for a TMC to modify or reject airline reroute requests were related to airspace configuration. Approximately 80 percent of the reroutes evaluated required some coordination before implementation. Analysis showed TMCs approved 62 percent of the requested DWR reroutes, resulting in 57 percent of the total requested DWR time savings.
Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials
NASA Astrophysics Data System (ADS)
Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.
2007-03-01
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods and there is a growing quantity of metadata that need to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault-tolerance and dynamic integration for imaging-based clinical trials.
Motion control of 7-DOF arms - The configuration control approach
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Long, Mark K.; Lee, Thomas S.
1993-01-01
Graphics simulation and real-time implementation of configuration control schemes for a redundant 7-DOF Robotics Research arm are described. The arm kinematics and motion control schemes are described briefly. This is followed by a description of a graphics simulation environment for 7-DOF arm control on the Silicon Graphics IRIS Workstation. Computer simulation results are presented to demonstrate elbow control, collision avoidance, and optimal joint movement as redundancy resolution goals. The laboratory setup for experimental validation of motion control of the 7-DOF Robotics Research arm is then described. The configuration control approach is implemented on a Motorola-68020/VME-bus-based real-time controller, with elbow positioning for redundancy resolution. Experimental results demonstrate the efficacy of configuration control for real-time control.
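A minimal numeric sketch of the redundancy-resolution idea behind configuration control follows: track an end-effector velocity with the Jacobian pseudoinverse and use the null space for a secondary goal, here pulling a planar 3-link arm toward a preferred elbow posture. This is a generic textbook scheme, not the Robotics Research arm implementation.

import numpy as np

L1 = L2 = L3 = 1.0   # assumed link lengths of a planar 3-link arm

def jacobian(q):
    # 2x3 Jacobian of the planar arm's end-effector position.
    s, c = np.sin, np.cos
    q1, q12, q123 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    return np.array([
        [-L1*s(q1) - L2*s(q12) - L3*s(q123), -L2*s(q12) - L3*s(q123), -L3*s(q123)],
        [ L1*c(q1) + L2*c(q12) + L3*c(q123),  L2*c(q12) + L3*c(q123),  L3*c(q123)],
    ])

def resolve(q, xdot, q_preferred, k0=0.5):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    qdot_secondary = k0 * (q_preferred - q)       # posture (elbow) task
    N = np.eye(3) - J_pinv @ J                    # null-space projector
    return J_pinv @ xdot + N @ qdot_secondary     # primary task plus null-space motion

q = np.array([0.3, 0.6, -0.4])
qdot = resolve(q, xdot=np.array([0.1, 0.0]), q_preferred=np.array([0.0, 0.8, 0.0]))
print(qdot)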
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
Universal Payload Information Management
NASA Technical Reports Server (NTRS)
Elmore, Ralph B.
2003-01-01
As the overall manager and integrator of International Space Station (ISS) science payloads, the Payload Operations Integration Center (POIC) at Marshall Space Flight Center has a critical need to provide an information management system for exchange and control of ISS payload files as well as to coordinate ISS payload related operational changes. The POIC's information management system has a fundamental requirement to provide secure operational access not only to users physically located at the POIC, but also to remote experimenters and International Partners physically located in different parts of the world. The Payload Information Management System (PIMS) is a ground-based electronic document configuration management and collaborative workflow system that was built to service the POIC's information management needs. This paper discusses the application components that comprise the PIMS system, the challenges that influenced its design and architecture, and the selected technologies it employs. This paper will also touch on the advantages of the architecture, details of the user interface, and lessons learned along the way to a successful deployment. With PIMS, a sophisticated software solution has been built that is not only universally accessible for POIC customers' information management needs, but also universally adaptable in implementation and application as a generalized information management system.
What We Did Last Summer: Depicting DES Data to Enhance Simulation Utility and Use
NASA Technical Reports Server (NTRS)
Elfrey, Priscilla; Conroy, Mike; Lagares, Jose G.; Mann, David; Fahmi, Mona
2009-01-01
At Kennedy Space Center (KSC), an important use of Discrete Event Simulation (DES) addresses ground operations of missions to space. DES allows managers, scientists and engineers to assess the number of missions KSC can complete on a given schedule within different facilities and the effects of various configurations of resources, and to detect possible problems or unwanted situations. For fifteen years, DES has supported KSC efficiency, cost savings and improved safety and performance. The dense and abstract DES data, however, proves difficult to comprehend and, NASA managers realized, is subject to misinterpretation, misunderstanding and even, misuse. In summer 2008, KSC developed and implemented a NASA Exploration Systems Mission Directorate (ESMD) project based on the premise that visualization could enhance NASA's understanding and use of DES.
ControlShell: A real-time software framework
NASA Technical Reports Server (NTRS)
Schneider, Stanley A.; Chen, Vincent W.; Pardo-Castellote, Gerardo
1994-01-01
The ControlShell system is a programming environment that enables the development and implementation of complex real-time software. It includes many building tools for complex systems, such as a graphical finite state machine (FSM) tool to provide strategic control. ControlShell has a component-based design, providing interface definitions and mechanisms for building real-time code modules along with providing basic data management. Some of the system-building tools incorporated in ControlShell are a graphical data flow editor, a component data requirement editor, and a state-machine editor. It also includes a distributed data flow package, an execution configuration manager, a matrix package, and an object database and dynamic binding facility. This paper presents an overview of ControlShell's architecture and examines the functions of several of its tools.
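A small finite state machine sketch in the spirit of the strategic-control layer described above; the states, events, and actions are invented, and this is not ControlShell's actual API.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions   # {(state, event): (next_state, action)}

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return                        # event ignored in this state
        next_state, action = self.transitions[key]
        if action:
            action()
        self.state = next_state

fsm = StateMachine("IDLE", {
    ("IDLE", "start"):     ("MOVING", lambda: print("enable servo loops")),
    ("MOVING", "fault"):   ("SAFING", lambda: print("switch to safe configuration")),
    ("MOVING", "done"):    ("IDLE",   lambda: print("disable servo loops")),
    ("SAFING", "cleared"): ("IDLE",   None),
})

for event in ["start", "fault", "cleared"]:
    fsm.dispatch(event)
print("final state:", fsm.state)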
Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A)
NASA Technical Reports Server (NTRS)
Mullooly, William
1995-01-01
This is the thirty-first monthly report for the Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A), Contract NAS5-32314, and covers the period from 1 July 1995 through 31 July 1995. This period is the nineteenth month of the Implementation Phase which provides for the design, fabrication, assembly, and test of the first EOS/AMSU-A, the Protoflight Model. Included in this report is the Master Program Schedule (Section 2), a report from the Product Team Leaders on the status of all major program elements (Section 3), Drawing status (Section 4), Weight and Power Budgets (CDRL 503) (Section 5), Performance Assurance (CDRL 204) (Section 6), Configuration Management Status Report (CDRL 203) (Section 7), Documentation/Data Management Status Report (Section 8), and Contract Status (Section 9).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livingood, W.; Stein, J.; Considine, T.
Retailers who participate in the U.S. Department of Energy Commercial Building Energy Alliances (CBEA) identified the need to enhance communication standards. The means are available to collect massive numbers of buildings operational data, but CBEA members have difficulty transforming the data into usable information and energy-saving actions. Implementing algorithms for automated fault detection and diagnostics and linking building operational data to computerized maintenance management systems are important steps in the right direction, but have limited scalability for large building portfolios because the algorithms must be configured for each building.
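A toy example of the kind of automated fault-detection rule the text refers to (simultaneous heating and cooling in an air handler) follows; the point names and thresholds are hypothetical, and having to configure them building by building is exactly the scalability problem the abstract identifies.

def simultaneous_heating_cooling(sample, min_signal=10.0):
    # Flag a fault when both valves are commanded open at the same time.
    return (sample["heating_valve_pct"] > min_signal and
            sample["cooling_valve_pct"] > min_signal)

trend = [
    {"timestamp": "2012-07-01T08:00", "heating_valve_pct": 0.0,  "cooling_valve_pct": 35.0},
    {"timestamp": "2012-07-01T08:15", "heating_valve_pct": 22.0, "cooling_valve_pct": 30.0},
]
faults = [s["timestamp"] for s in trend if simultaneous_heating_cooling(s)]
print(faults)   # -> ['2012-07-01T08:15']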
System design analyses of a rotating advanced-technology space station for the year 2025
NASA Technical Reports Server (NTRS)
Queijo, M. J.; Butterfield, A. J.; Cuddihy, W. F.; Stone, R. W.; Wrobel, J. R.; Garn, P. A.; King, C. B.
1988-01-01
Studies of an advanced technology space station configured to implement subsystem technologies projected for availability in the time period 2000 to 2025 are documented. These studies have examined the practical synergies in operational performance available through subsystem technology selection and identified the needs for technology development. Further analyses are performed on power system alternates, momentum management and stabilization, electrothermal propulsion, composite materials and structures, launch vehicle alternates, and lunar and planetary missions. Concluding remarks are made regarding the advanced technology space station concept, its intersubsystem synergies, and its system operational subsystem advanced technology development needs.
Scientific Digital Libraries, Interoperability, and Ontologies
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.
2009-01-01
Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.
Performing Verification and Validation in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1999-01-01
The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.
Allan, Helen T; Brearley, Sally; Byng, Richard; Christian, Sara; Clayton, Julie; Mackintosh, Maureen; Price, Linnie; Smith, Pam; Ross, Fiona
2014-01-01
Objectives: To explore the experiences of governance and incentives during organizational change for managers and clinical staff. Study Setting: Three primary care settings in England in 2006–2008. Study Design: Data collection involved three group interviews with 32 service users, individual interviews with 32 managers, and 56 frontline professionals in three sites. The Realistic Evaluation framework was used in analysis to examine the effects of new policies and their implementation. Principal Findings: Integrating new interprofessional teams to work effectively is a slow process, especially if structures in place do not acknowledge the painful feelings involved in change and do not support staff during periods of uncertainty. Conclusions: Eliciting multiple perspectives, often dependent on individual occupational positioning or place in new team configurations, illuminates the need to incorporate the emotional as well as technocratic and system factors when implementing change. Some suggestions are made for facilitating change in health care systems. These are discussed in the context of similar health care reform initiatives in the United States. PMID:23829292
Scalable and cost-effective NGS genotyping in the cloud.
Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P
2015-10-15
While next-generation sequencing (NGS) costs have plummeted in recent years, cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at scales of economy in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Service (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations to produce clinical turn-around of whole genome analysis optimization and workflow management, including strategic batching of individual genomes and efficient cluster resource configuration.
VIDANA: Data Management System for Nano Satellites
NASA Astrophysics Data System (ADS)
Montenegro, Sergio; Walter, Thomas; Dilger, Erik
2013-08-01
The VIDANA data management system is a network of software and hardware components. This implies a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware: a reliable interconnection network (software and hardware) which can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. Our middleware connects not only software components but also devices and software together. Software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system will autonomously adapt to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified HW/SW communication protocols. For many of these aspects we learn from nature, where astonishing reference implementations can be found.
VMSoar: a cognitive agent for network security
NASA Astrophysics Data System (ADS)
Benjamin, David P.; Shankar-Iyer, Ranjita; Perumal, Archana
2005-03-01
VMSoar is a cognitive network security agent designed for both network configuration and long-term security management. It performs automatic vulnerability assessments by exploring a configuration's weaknesses and also performs network intrusion detection. VMSoar is built on the Soar cognitive architecture, and benefits from the general cognitive abilities of Soar, including learning from experience, the ability to solve a wide range of complex problems, and use of natural language to interact with humans. The approach used by VMSoar is very different from that taken by other vulnerability assessment or intrusion detection systems. VMSoar performs vulnerability assessments by using VMWare to create a virtual copy of the target machine then attacking the simulated machine with a wide assortment of exploits. VMSoar uses this same ability to perform intrusion detection. When trying to understand a sequence of network packets, VMSoar uses VMWare to make a virtual copy of the local portion of the network and then attempts to generate the observed packets on the simulated network by performing various exploits. This approach is initially slow, but VMSoar's learning ability significantly speeds up both vulnerability assessment and intrusion detection. This paper describes the design and implementation of VMSoar, and initial experiments with Windows NT and XP.
SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)
Zhang, Xiang; Chen, Zhangwei
2013-01-01
This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Ono, M. Jaworski, R. Kaita, C. N. Skinner, J.P. Allain, R. Maingi, F. Scotti, V.A. Soukhanovskii, and the NSTX-U Team
Developing a reactor compatible divertor and managing the associated plasma material interaction (PMI) has been identified as a high priority research area for magnetic confinement fusion. Accordingly, on NSTX-U, PMI research has received a strong emphasis. With ~15 MW of auxiliary heating power, NSTX-U will be able to test PMI physics with peak divertor plasma facing component (PFC) heat loads of up to 40-60 MW/m2. To support the PMI research, a comprehensive set of PMI diagnostic tools is being implemented. The snow-flake configuration can produce exceptionally high divertor flux expansion of up to ~50. Combined with the radiative divertor concept, the snow-flake configuration has reduced the divertor heat flux by an order of magnitude in NSTX. Another area of active PMI investigation is the effect of divertor lithium coating (both in solid and liquid phases). The overall NSTX lithium PFC coating results suggest exciting opportunities for future magnetic confinement research including significant electron energy confinement improvements, H-mode power threshold reduction, the control of Edge Localized Modes (ELMs), and high heat flux handling. To support the NSTX-U/PPPL PMI research, a number of associated PMI facilities have also been implemented at PPPL/Princeton University, including the Liquid Lithium R&D facility, the Lithium Tokamak Experiment, and the Laboratories for Materials Characterization and Surface Chemistry.
Survey of piloting factors in V/STOL aircraft with implications for flight control system design
NASA Technical Reports Server (NTRS)
Ringland, R. F.; Craig, S. J.
1977-01-01
Flight control system design factors involved in pilot workload relief are identified. Major contributors to pilot workload include configuration management and control as well as aircraft stability and response qualities. A digital fly-by-wire stability augmentation, configuration management, and configuration control system is suggested to reduce pilot workload during takeoff, hovering, and approach.
How Configuration Management Helps Projects Innovate and Communicate
NASA Technical Reports Server (NTRS)
Cioletti, Louis A.; Guidry, Carla F.
2009-01-01
This slide presentation reviews the concept of Configuration Management (CM) and compares it to the standard view of Project Management (PM). It presents two PM models, the Kepner-Tregoe and Deming models, describes why projects fail, and presents methods by which CM helps projects innovate and communicate.
76 FR 12617 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... installing new operational software for the electrical load management system and configuration database... the electrical load management system operational software and configuration database software, in... Management, P.O. Box 3707, MC 2H-65, Seattle, Washington 98124-2207; telephone 206- 544-5000, extension 1...
Implementing Information Assurance - Beyond Process
2009-01-01
disabled or properly configured. Tools and scripts are available to expedite the configuration process on some platforms; for example, approved Windows...in the System Security Plan (SSP) or Information Security Plan (ISP). Any PPSs not required for operation by the system must be disabled. This...Services must be disabled. Implementing an IM capability within the boundary carries many policy and documentation requirements. Username and passwords
An Auto-Configuration System for the GMSEC Architecture and API
NASA Technical Reports Server (NTRS)
Moholt, Joseph; Mayorga, Arturo
2007-01-01
A viewgraph presentation on an automated configuration concept for The Goddard Mission Services Evolution Center (GMSEC) architecture and Application Program Interface (API) is shown. The topics include: 1) The Goddard Mission Services Evolution Center (GMSEC); 2) Automated Configuration Concept; 3) Implementation Approach; and 4) Key Components and Benefits.
van der Kleij, Rianne M J J; Crone, Mathilde R; Paulussen, Theo G W M; van de Gaar, Vivan M; Reis, Ria
2015-10-08
The implementation of programs complex in design, such as the intersectoral community approach Youth At a Healthy Weight (JOGG), often deviates from their application as intended. There is limited knowledge of their implementation processes, making it difficult to formulate sound implementation strategies. For two years, we performed a repeated cross-sectional case study on the implementation of a JOGG fruit and water campaign targeting children aged 0-12. Semi-structured observations, interviews, field notes and professionals' log entries were used to evaluate the implementation process. Data were analyzed via a framework approach; within-case and cross-case displays were formulated and key determinants identified. Principles from Qualitative Comparative Analysis (QCA) were used to identify causal configurations of determinants per sector and implementation phase. Implementation completeness differed, but was highest in the educational and health care sectors, and higher for key than for additional activities. Determinants and causal configurations of determinants were mostly sector- and implementation-phase specific. High campaign ownership and possibilities for campaign adaptation were most frequently mentioned as facilitators. A lack of reinforcement strategies, low priority for campaign use and incompatibility of own goals with campaign goals were most often indicated as barriers. We advise multiple 'stitches in time': tailoring implementation strategies to specific implementation phases and sectors using both the results from this study and a mutual adaptation strategy in which professionals are involved in the development of implementation strategies. The results of this study show that the implementation process of IACOs is complex and sustainable implementation is difficult to achieve. Moreover, this study reveals that the implementation process is influenced by predominantly sector- and implementation-phase-specific (causal configurations of) determinants.
Oweis, Salah; D'Ussel, Louis; Chagnon, Guy; Zuhowski, Michael; Sack, Tim; Laucournet, Gaullume; Jackson, Edward J.
2002-06-04
A stand alone battery module including: (a) a mechanical configuration; (b) a thermal management configuration; (c) an electrical connection configuration; and (d) an electronics configuration. Such a module is fully interchangeable in a battery pack assembly, mechanically, from the thermal management point of view, and electrically. With the same hardware, the module can accommodate different cell sizes and, therefore, can easily have different capacities. The module structure is designed to accommodate the electronics monitoring, protection, and printed wiring assembly boards (PWAs), as well as to allow airflow through the module. A plurality of modules may easily be connected together to form a battery pack. The parts of the module are designed to facilitate their manufacture and assembly.
Data Model Management for Space Information Systems
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris
2006-01-01
The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool. We will also describe the current effort to provide interoperability with the European Space Agency (ESA)/Planetary Science Archive (PSA) which is critically dependent on a common data model.
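The abstract above describes capturing a data model in an ontology tool and exporting it to schemas such as RDF/XML. Below is a minimal sketch of that export step, assuming the widely used Python rdflib package; the two-class model, property, and instance names are invented for illustration and are not the actual PDS data model.

```python
# Illustrative sketch: capture a tiny fragment of a data model as an ontology
# and export it to RDF/XML. Classes, properties, and instances are invented.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/datamodel#")

g = Graph()
g.bind("ex", EX)

# Two model classes and one relating property (the "data model" part).
g.add((EX.Product, RDF.type, RDFS.Class))
g.add((EX.Instrument, RDF.type, RDFS.Class))
g.add((EX.hasInstrument, RDF.type, RDF.Property))
g.add((EX.hasInstrument, RDFS.domain, EX.Product))
g.add((EX.hasInstrument, RDFS.range, EX.Instrument))

# A data instance ingested into the ontology, turning it into a small knowledge base.
g.add((EX.image_001, RDF.type, EX.Product))
g.add((EX.image_001, EX.hasInstrument, EX.camera_A))
g.add((EX.image_001, RDFS.label, Literal("Example product instance")))

# Export model plus instances to RDF/XML for downstream metadata browsers.
print(g.serialize(format="xml"))
```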
DNS load balancing in the CERN cloud
NASA Astrophysics Data System (ADS)
Reguero Naredo, Ignacio; Lobato Pardavila, Lorena
2017-10-01
Load Balancing is one of the technologies enabling deployment of large-scale applications on cloud resources. A DNS Load Balancer Daemon (LBD) has been developed at CERN as a cost-effective way to balance applications that accept DNS timing dynamics and do not require persistence. It currently serves over 450 load-balanced aliases with two small VMs acting as master and slave. The aliases are mapped to DNS subdomains. These subdomains are managed with DDNS according to a load metric, which is collected from the alias member nodes with SNMP. In recent years, several improvements have been made to the software, for instance: support for IPv6, parallelization of the status requests, implementing the client in Python to allow for multiple aliases with differentiated states on the same machine, and support for application state. The configuration of the Load Balancer is currently managed by a Puppet type. It discovers the alias member nodes and gets the alias definitions from the Ermis REST service. The Aiermis self-service GUI for the management of the LB aliases has been produced and is based on the Ermis service above, which implements a form of Load Balancing as a Service (LBaaS). The Ermis REST API has authorisation based on Foreman hostgroups. The CERN DNS LBD is open-source software released under the Apache 2 license.
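As a rough sketch of the core selection step such a daemon performs, the toy example below polls a load metric from each alias member and picks the least-loaded hosts for the next DDNS update. The SNMP poll is stubbed with a random value; none of the names correspond to the actual CERN LBD code.

```python
# Toy sketch of a DNS load balancer's selection step: poll a load metric from
# each alias member and return the best candidates for the next DDNS update.
import random
from typing import Dict, List


def poll_load(host: str) -> float:
    """Placeholder for an SNMP query of the host's load metric (lower is better)."""
    return random.uniform(0.0, 100.0)


def best_members(members: List[str], n: int = 2) -> List[str]:
    """Return the n least-loaded members to publish in the alias."""
    loads: Dict[str, float] = {h: poll_load(h) for h in members}
    return sorted(loads, key=loads.get)[:n]


if __name__ == "__main__":
    alias_members = ["node1.example.org", "node2.example.org", "node3.example.org"]
    print("next DDNS record set:", best_members(alias_members))
```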
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. The experiment design includes an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and Xilinx's soft error mitigation softcore.
Information management advanced development. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Gerber, C. R.
1972-01-01
The information management systems designed for the modular space station are discussed. Subjects presented are: (1) communications terminal breadboard configuration, (2) digital data bus breadboard configuration, (3) data processing assembly definition, and (4) computer program (software) assembly definition.
Morales-Asencio, Jose M; Kaknani-Uttumchandani, Shakira; Cuevas-Fernández-Gallego, Magdalena; Palacios-Gómez, Leopoldo; Gutiérrez-Sequera, José L; Silvano-Arranz, Agustina; Batres-Sicilia, Juan Pedro; Delgado-Romero, Ascensión; Cejudo-Lopez, Ángela; Trabado-Herrera, Manuel; García-Lara, Esteban L; Martin-Santos, Francisco J; Morilla-Herrera, Juan C
2015-10-01
Complex chronic diseases are a challenge for the current configuration of health services. Case management is a service frequently provided for people with chronic conditions, and despite its effectiveness in many outcomes, such as mortality or readmissions, uncertainty remains about the most effective form of team organization, structures and the nature of the interventions. Many processes and outcomes of case management for people with complex chronic conditions cannot be addressed with the information provided by electronic clinical records. Registries are frequently used to deal with this weakness. The aim of this study was to generate a registry-based information system of patients receiving case management to identify their clinical characteristics, their context of care, events identified during their follow-up, interventions developed by case managers and services used. The study was divided into three phases, covering the detection of information needs, the design and its implementation in the health care system, using literature review and expert consensus methods to select variables that would be included in the registry. A total of 102 variables representing structure, processes and outcomes of case management were selected for inclusion in the registry after the consensus phase. A web-based registry with a modular and layered architecture was designed. The framework follows a pattern based on the model-view-controller approach. In the first 6 months after implementation, 102 case managers each entered an average of 6.49 patients. The registry permits a complete and in-depth analysis of the characteristics of the patients who receive case management, the interventions delivered and major outcomes such as mortality, readmissions or adverse events. © 2015 John Wiley & Sons, Ltd.
Implementing a Domain Specific Language to configure and run LHCb Continuous Integration builds
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2015-12-01
The new LHCb nightly build system described at CHEP 2013 was limited by the use of JSON files for its configuration. JSON had been chosen as a temporary solution to maintain backward compatibility towards the old XML format by means of a translation function. Modern languages like Python leverage meta-programming techniques to enable the development of Domain Specific Languages (DSLs). In this contribution we present the advantages of such techniques and how they have been used to implement a DSL that can be used both to describe the configuration of the LHCb Nightly Builds and to actually operate them.
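To make the idea concrete, here is a minimal, invented sketch of a builder-style internal DSL for describing build slots in plain Python; the slot/project vocabulary is illustrative only and is not the actual LHCb nightly-builds DSL.

```python
# Minimal internal-DSL sketch: build-slot configuration written as plain
# Python declarations instead of JSON/XML. Vocabulary is invented.
SLOTS = {}


class Slot:
    def __init__(self, name, platforms):
        self.name = name
        self.platforms = platforms
        self.projects = []  # (project, version) pairs in build order

    def project(self, name, version):
        self.projects.append((name, version))
        return self  # returning self makes the declaration read as configuration


def slot(name, platforms):
    """Create and register a named slot; reads like a configuration statement."""
    s = Slot(name, platforms)
    SLOTS[name] = s
    return s


# Declarative configuration, interpretable and executable by the build system.
(slot("example-head", platforms=["x86_64-slc6-gcc48-opt"])
    .project("Gaudi", "HEAD")
    .project("LHCb", "HEAD")
    .project("Rec", "HEAD"))

for name, s in SLOTS.items():
    print(name, s.platforms, s.projects)
```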
System and method of designing a load bearing layer of an inflatable vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary R. (Inventor)
2007-01-01
A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.
ATLAS TDAQ System Administration: Master of Puppets
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Brasolin, F.; Fazio, D.; Gament, C.; Lee, C. J.; Scannicchio, D. A.; Twomey, M. S.
2017-10-01
Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm is comprised of ∼4000 servers processing the data read out from ∼100 million detector channels through multiple trigger levels. The configuration of these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with their own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical script system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ Systems Administrators, for both the local and net-booted systems, and to fulfil the requirements of TDAQ, the Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated; in the end, Puppet was chosen, the first such implementation at CERN.
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, and tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and performance of earth system model experiments from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML forms during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
Automatic aeroponic irrigation system based on Arduino’s platform
NASA Astrophysics Data System (ADS)
Montoya, A. P.; Obando, F. A.; Morales, J. G.; Vargas, G.
2017-06-01
Recirculating hydroponic culture techniques, such as aeroponics, have several advantages over traditional agriculture, aimed at improving the efficiency and environmental impact of agriculture. These techniques require continuous monitoring and automation for proper operation. In this work, an automatically monitored aeroponic irrigation system based on the Arduino free software platform was developed. Analog and digital sensors for measuring the temperature, flow and level of a nutrient solution in a real greenhouse were implemented. In addition, the pH and electric conductivity of nutritive solutions are monitored using the Arduino's differential configuration. The sensor network and the acquisition and automation system are managed by two Arduino modules in a master-slave configuration, which communicate with each other wirelessly over Wi-Fi. Further, data are stored on micro SD memories and the information is loaded onto a web page in real time. The developed device provides important agronomic information when tested with an arugula culture (Eruca sativa Mill). The system could also be employed as an early warning system to prevent irrigation malfunctions.
Airborne wireless communication systems, airborne communication methods, and communication methods
Deaton, Juan D [Menan, ID; Schmitt, Michael J [Idaho Falls, ID; Jones, Warren F [Idaho Falls, ID
2011-12-13
An airborne wireless communication system includes circuitry configured to access information describing a configuration of a terrestrial wireless communication base station that has become disabled. The terrestrial base station is configured to implement wireless communication between wireless devices located within a geographical area and a network when the terrestrial base station is not disabled. The circuitry is further configured, based on the information, to configure the airborne station to have the configuration of the terrestrial base station. An airborne communication method includes answering a 911 call from a terrestrial cellular wireless phone using an airborne wireless communication system.
CoNNeCT Baseband Processor Module
NASA Technical Reports Server (NTRS)
Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.
2011-01-01
A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25- Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrub and configuration of each Xilinx.
Experience of Data Handling with IPPM Payload
NASA Astrophysics Data System (ADS)
Errico, Walter; Tosi, Pietro; Ilstad, Jorgen; Jameux, David; Viviani, Riccardo; Collantoni, Daniele
2010-08-01
A simplified On-Board Data Handling system has been developed by CAEN AURELIA SPACE and ABSTRAQT as a PUS-over-SpaceWire demonstration platform for the Onboard Payload Data Processing laboratory at ESTEC. The system is composed of three Leon2-based IPPM (Integrated Payload Processing Module) computers that play the roles of Instrument, Payload Data Handling Unit and Satellite Management Unit. Two PCs complete the test set-up, simulating an external Memory Management Unit and the Ground Control Unit. Communication among units takes place primarily through SpaceWire links; the RMAP[2] protocol is used for configuration and housekeeping. A limited implementation of the ECSS-E-70-41B Packet Utilisation Standard (PUS)[1] over CANbus and MIL-STD-1553B has also been realized. The open-source RTEMS runs on the IPPM AT697E CPU as the real-time operating system.
Creating wi-fi bluetooth mesh network for crisis management applications
NASA Astrophysics Data System (ADS)
Al-Tekreeti, Safa; Adams, Christopher; Al-Jawad, Naseer
2010-04-01
This paper proposes a wireless mesh network implementation consisting of both Wi-Fi ad-hoc networks and Bluetooth Piconet/Scatternet networks, organised in an energy- and throughput-efficient structure. This type of network can be easily constructed for crisis management applications, for example in an earthquake disaster. The motivation of this research is to form a mesh network from the mass availability of Wi-Fi and Bluetooth enabled electronic devices such as mobile phones and PCs that are normally present in most regions where major crises occur. The target of this study is to achieve an effective solution that will enable Wi-Fi and/or Bluetooth nodes to seamlessly configure themselves to act as a bridge between their own network and that of the other network to achieve continuous routing for our proposed mesh networks.
YAMM - YET ANOTHER MENU MANAGER
NASA Technical Reports Server (NTRS)
Mazer, A. S.
1994-01-01
One of the most time-consuming yet necessary tasks of writing any piece of interactive software is the development of a user interface. Yet Another Menu Manager, YAMM, is an application-independent menuing package, designed to remove much of the difficulty and save much of the time inherent in the implementation of the front ends for large packages. Written in C for UNIX-based operating systems, YAMM provides a complete menuing front end for a wide variety of applications, with provisions for terminal independence, user-specific configurations, and dynamic creation of menu trees. Applications running under the menu package consist of two parts: a description of the menu configuration and the body of application code. The menu configuration is used at runtime to define the menu structure and any non-standard keyboard mappings and terminal capabilities. Menu definitions define specific menus within the menu tree. The names used in a definition may be either a reference to an application function or the name of another menu defined within the menu configuration. Application parameters are entered using data entry screens which allow for required and optional parameters, tables, and legal-value lists. Both automatic and application-specific error checking are available. Help is available for both menu operation and specific applications. The YAMM program was written in C for execution on a Sun Microsystems workstation running SunOS, based on the Berkeley (4.2bsd) version of UNIX. During development, YAMM has been used on both 68020 and SPARC architectures, running SunOS versions 3.5 and 4.0. YAMM should be portable to most other UNIX-based systems. It has a central memory requirement of approximately 232K bytes. The standard distribution medium for this program is one .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. YAMM was developed in 1988 and last updated in 1990. YAMM is a copyrighted work with all copyright vested in NASA.
Digital Model-Based Engineering: Expectations, Prerequisites, and Challenges of Infusion
NASA Technical Reports Server (NTRS)
Hale, J. P.; Zimmerman, P.; Kukkala, G.; Guerrero, J.; Kobryn, P.; Puchek, B.; Bisconti, M.; Baldwin, C.; Mulpuri, M.
2017-01-01
Digital model-based engineering (DMbE) is the use of digital artifacts, digital environments, and digital tools in the performance of engineering functions. DMbE is intended to allow an organization to progress from documentation-based engineering methods to digital methods that may provide greater flexibility, agility, and efficiency. The term 'DMbE' was developed as part of an effort by the Model-Based Systems Engineering (MBSE) Infusion Task team to identify what government organizations might expect in the course of moving to or infusing MBSE into their organizations. The Task team was established by the Interagency Working Group on Engineering Complex Systems, an informal collaboration among government systems engineering organizations. This Technical Memorandum (TM) discusses the work of the MBSE Infusion Task team to date. The Task team identified prerequisites, expectations, initial challenges, and recommendations for areas of study to pursue, as well as examples of efforts already in progress. The team identified the following five expectations associated with DMbE infusion, discussed further in this TM: (1) Informed decision making through increased transparency and greater insight. (2) Enhanced communication. (3) Increased understanding for greater flexibility/adaptability in design. (4) Increased confidence that the capability will perform as expected. (5) Increased efficiency. The team identified the following seven challenges an organization might encounter when looking to infuse DMbE: (1) Assessing value added to the organization. Not all DMbE practices will be applicable to every situation in every organization, and not all implementations will have positive results. (2) Overcoming organizational and cultural hurdles. (3) Adopting contractual practices and technical data management. (4) Redefining configuration management. The DMbE environment changes the range of configuration information to be managed to include performance and design models and database objects, as well as more traditional book-form objects and formats. (5) Developing information technology (IT) infrastructure. Approaches to implementing critical, enabling IT infrastructure capabilities must be flexible, reconfigurable, and updatable. (6) Ensuring security of the single source of truth. (7) Potential overreliance on quantitative data over qualitative data. Executable/computational models and simulations generally incorporate and generate quantitative rather than qualitative data. The Task team also developed several recommendations for government, academia, and industry, as discussed in this TM. The Task team recommends continuing beyond this initial work to further develop the means of implementing DMbE and to look for opportunities to collaborate and share best practices.
Thomas, Kristin; Krevers, Barbro; Bendtsen, Preben
2015-01-22
Non-communicable diseases are a leading cause of death and can largely be prevented by healthy lifestyles. Health care organizations are encouraged to integrate healthy lifestyle promotion in routine care. This study evaluates the impact of a team initiative on healthy lifestyle promotion in primary care. A quasi-experimental, cross-sectional design compared three intervention centres that had implemented lifestyle teams with three control centres that used a traditional model of care. Outcomes were defined using the RE-AIM framework: reach, the proportion of patients receiving lifestyle promotion; effectiveness, self-reported attitudes and competency among staff; adoption, proportion of staff reporting regular practice of lifestyle promotion; implementation, fidelity to the original lifestyle team protocol. Data collection methods included a patient questionnaire (n = 888), a staff questionnaire (n = 120) and structured interviews with all practice managers and, where applicable, team managers (n = 8). The chi square test and problem-driven content analysis were used to analyse the questionnaire and interview data, respectively. Reach: patients at control centres (48%, n = 211) received lifestyle promotion significantly more often than patients at intervention centres (41%, n = 169). Effectiveness: intervention staff were significantly more positive towards the effectiveness of lifestyle promotion, shared competency and how lifestyle promotion was prioritized at their centre. Adoption: 47% of staff at intervention centres and 58% at control centres reported that they asked patients about their lifestyle on a daily basis. Implementation: all intervention centres had implemented multi-professional teams and team managers and held regular meetings, but struggled to implement in-house referral structures for lifestyle promotion that were used consistently among staff. Intervention centres did not show higher rates than control centres on reach of patients or adoption among staff at this stage. All intervention centres struggled to implement working referral structures for lifestyle promotion. Intervention centres were more positive on effectiveness outcomes, attitudes and competency among staff, however. Thus, lifestyle teams may facilitate lifestyle promotion practice in terms of increased responsiveness among staff, illustrated by positive attitudes and perceptions of shared competency. More research is needed on lifestyle promotion referral structures in primary care regarding their configuration and implementation.
Intelligent Hybrid Vehicle Power Control. Part 2. Online Intelligent Energy Management
2012-06-30
IEC_HEV is used for vehicle energy optimization. IEC_HEV, the online energy control, is a component...in the Vehicle System Controller (VSC). (Figure 1: Power Split HEV configuration.) The VSC for this configuration must manage the powertrain control in order to maintain a proper level of charge in the battery. However, since two power sources are available to propel the vehicle, the VSC in this configuration has the additional...
GCS plan for software aspects of certification
NASA Technical Reports Server (NTRS)
Shagnea, Anita M.; Lowman, Douglas S.; Withers, B. Edward
1990-01-01
As part of the Guidance and Control Software (GCS) research project being sponsored by NASA to evaluate the failure processes of software, standard industry software development procedures are being employed. To ensure that these procedures are authentic, the guidelines outlined in the Radio Technical Commission for Aeronautics (RTCA) document DO-178A, Software Considerations in Airborne Systems and Equipment Certification, were adopted. A major aspect of these guidelines is proper documentation. As such, this report, the plan for software aspects of certification, was produced in accordance with DO-178A. An overview is given of the GCS research project, including the goals of the project, project organization, and project schedules. It also specifies the plans for all aspects of the project which relate to the certification of the GCS implementations developed under a NASA contract. These plans include decisions made regarding the software specification, accuracy requirements, configuration management, implementation development and verification, and the development of the GCS simulator.
Man-rated flight software for the F-8 DFBW program
NASA Technical Reports Server (NTRS)
Bairnsfather, R. R.
1975-01-01
The design, implementation, and verification of the flight control software used in the F-8 DFBW program are discussed. Since the DFBW utilizes an Apollo computer and hardware, the procedures, controls, and basic management techniques employed are based on those developed for the Apollo software system. Program Assembly Control, simulator configuration control, erasable-memory load generation, change procedures, and anomaly reporting are discussed. The primary verification tools (the all-digital simulator, the hybrid simulator, and the Iron Bird simulator) are described, as well as the program test plans and their implementation on the various simulators. Failure-effects analysis and the creation of special failure-generating software for testing purposes are described. The quality of the end product is evidenced by the F-8 DFBW flight test program, in which 42 flights, totaling 58 hours of flight time, were successfully made without any DFCS in-flight software or hardware failures.
Layerwise Finite Elements for Smart Piezoceramic Composite Plates in Thermal Environments
NASA Technical Reports Server (NTRS)
Saravanos, Dimitris A.; Lee, Ho-Jun
1996-01-01
Analytical formulations are presented which account for the coupled mechanical, electrical, and thermal response of piezoelectric composite laminates and plate structures. A layerwise theory is formulated with the inherent capability to explicitly model the active and sensory response of piezoelectric composite plates having arbitrary laminate configurations in thermal environments. Finite element equations are derived and implemented for a bilinear 4-noded plate element. Application cases demonstrate the capability to manage thermally induced bending and twisting deformations in symmetric and antisymmetric composite plates with piezoelectric actuators, and show the corresponding electrical response of distributed piezoelectric sensors. Finally, the resultant stresses in the thermal piezoelectric composite laminates are investigated.
Operational Management System for Regulated Water Systems
NASA Astrophysics Data System (ADS)
van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.
2012-04-01
Most of the Dutch large rivers, canals and lakes are controlled by the Dutch water authorities. The main reasons concern safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally. For optimizing the management of these water systems, an integrated approach was required. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which add to the operational water management. In addition, hydrodynamic models and intelligent decision support tools are added to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework, so that processes like central data collection, transformations, data processing and presentation are simply configured. At all control locations the same information is readily available. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage and advice during (water pollution) calamities.
BLM Unmanned Aircraft Systems (UAS) Resource Management Operations
NASA Astrophysics Data System (ADS)
Hatfield, M. C.; Breen, A. L.; Thurau, R.
2016-12-01
The Department of the Interior Bureau of Land Management is funding research at the University of Alaska Fairbanks to study Unmanned Aircraft Systems (UAS) Resource Management Operations. In August 2015, the team conducted flight research at UAF's Toolik Field Station (TFS). The purpose was to determine the most efficient use of small UAS to collect low-altitude airborne digital stereo images, process the stereo imagery into close-range photogrammetry products, and integrate derived imagery products into the BLM's National Assessment, Inventory and Monitoring (AIM) Strategy. The AIM Strategy assists managers in answering questions of land resources at all organizational levels and developing management policy at regional and national levels. In Alaska, the BLM began to implement its AIM strategy in the National Petroleum Reserve-Alaska (NPR-A) in 2012. The primary goals of AIM-monitoring at the NPR-A are to implement an ecological baseline to monitor ecological trends, and to develop a monitoring network to understand the efficacy of management decisions. The long-term AIM strategy also complements other ongoing NPR-A monitoring processes, collects multi-use and multi-temporal data, and supports understanding of ecosystem management strategies in order to implement defensible natural resource management policy. The campaign measured vegetation types found in the NPR-A, using UAF's TFS location as a convenient proxy. The vehicle selected was the ACUASI Ptarmigan, a small hexacopter (based on DJI S800 airframe and 3DR autopilot) capable of carrying a 1.5 kg payload for 15 min for close-range environmental monitoring missions. The payload was a stereo camera system consisting of Sony NEX7's with various lens configurations (16/20/24/35 mm). A total of 77 flights were conducted over a 4 ½ day period, with 1.5 TB of data collected. Mission variables included camera height, UAS speed, transect overlaps, and camera lenses/settings. Invaluable knowledge was gained as to limitations and opportunities for field deployment of UAS relative to local conditions and vegetation type. Future efforts will focus on refining data analysis techniques and further optimizing UAS/sensor combinations and flight profiles.
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellie, C.L.
This plan establishes the integrated management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford Site Technical Baseline.
1991-12-01
database, the Real Time Operation Management Information System (ROMIS), and the Fitting Out Management Information System (FOMIS). These three configuration... Acronyms: ROMIS, Real Time Operation Management Information System; SCLSIS, Ship's Configuration and Logistics Information System; SCN, Shipbuilding and
NASA Astrophysics Data System (ADS)
Arndt, J.; Kreimer, J.
2010-09-01
The European Space Laboratory COLUMBUS was launched in February 2008 with NASA Space Shuttle Atlantis. Since successful docking and activation, this manned laboratory has formed part of the International Space Station (ISS). Depending on the objectives of the Mission Increments, the on-orbit configuration of the COLUMBUS Module varies with each increment. This paper describes the end-to-end verification which has been implemented to ensure safe operations under the condition of a changing on-orbit configuration. That verification process has to cover not only the configuration changes foreseen by the Mission Increment planning but also configuration changes on short notice which become necessary due to near real-time requests initiated by crew or Flight Control, and changes, most challenging since unpredictable, due to on-orbit anomalies. Subject to the safety verification is, on one hand, the on-orbit configuration itself, including the hardware and software products, and on the other hand the related ground facilities needed for commanding of and communication with the on-orbit system. The operational products, e.g. the procedures prepared for crew and ground control in accordance with increment planning, are also subject to the overall safety verification. In order to analyse the on-orbit configuration for potential hazards and to verify the implementation of the related safety-required hazard controls, a hierarchical approach is applied. The key element of the analytical safety integration of the whole COLUMBUS Payload Complement, including hardware owned by International Partners, is the Integrated Experiment Hazard Assessment (IEHA). The IEHA especially identifies those hazardous scenarios which could potentially arise through physical and operational interaction of experiments. A major challenge is the implementation of a safety process with enough rigidity to provide reliable verification of on-board safety, yet enough flexibility, as desired by manned space operations with scientific objectives. In the period of COLUMBUS operations since launch, a number of lessons learnt have already been implemented, especially in the IEHA, that allow the flexibility of on-board operations to be improved without degradation of safety.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing environment is an intensive, ever-growing system used for real-time data collection and analysis. Composed of heterogeneous and sometimes custom-tuned groups of machines, the computing infrastructure was previously managed by manual configuration and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks, along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best, and an agile, policy-driven system ensuring consistency was pursued. Three configuration management tools, Chef, Puppet, and CFEngine, have been compared in reliability, versatility and performance, along with a comparison of the infrastructure monitoring tools Nagios and Icinga. STAR has selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system, leading to a versatile and sustainable solution. By leveraging these two tools, STAR can now swiftly upgrade and modify the environment to its needs with ease as well as promptly react to cyber-security requests. By creating a sustainable long-term monitoring solution, the detection of failures was reduced from days to minutes, allowing rapid actions before issues become dire problems, potentially causing loss of precious experimental data or uptime.
Production Management System for AMS Computing Centres
NASA Astrophysics Data System (ADS)
Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.
2017-10-01
The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production), as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in the AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on a Deterministic Finite Automaton [3] model, and is implemented in the scripting languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
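A minimal sketch of the deterministic-finite-automaton idea backed by the standard-library sqlite3 module is shown below; the table layout, state names, and events are invented for illustration and are not the actual AMS production system.

```python
# Sketch of a DFA-style production state machine backed by sqlite3.
# States, events, and schema are invented for illustration.
import sqlite3

# Allowed transitions of the job DFA: state -> {event: next_state}
TRANSITIONS = {
    "acquired": {"submit": "submitted"},
    "submitted": {"start": "running", "fail": "failed"},
    "running": {"finish": "transferring", "fail": "failed"},
    "transferring": {"account": "done", "fail": "failed"},
}


def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, state TEXT)")
    conn.commit()


def add_job(conn: sqlite3.Connection, job_id: str) -> None:
    conn.execute("INSERT OR IGNORE INTO jobs VALUES (?, 'acquired')", (job_id,))
    conn.commit()


def apply_event(conn: sqlite3.Connection, job_id: str, event: str) -> str:
    """Advance one job through the DFA, persisting the new state."""
    (state,) = conn.execute("SELECT state FROM jobs WHERE id = ?", (job_id,)).fetchone()
    next_state = TRANSITIONS.get(state, {}).get(event)
    if next_state is None:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    conn.execute("UPDATE jobs SET state = ? WHERE id = ?", (next_state, job_id))
    conn.commit()
    return next_state


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init_db(conn)
    add_job(conn, "mc-2017-001")
    for ev in ("submit", "start", "finish", "account"):
        print(ev, "->", apply_event(conn, "mc-2017-001", ev))
```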
A Reference Architecture for Space Information Management
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.
2006-01-01
We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
Data archiving and serving system implementation in CLEP's GRAS Core System
NASA Astrophysics Data System (ADS)
Zuo, Wei; Zeng, Xingguo; Zhang, Zhoubin; Geng, Liang; Li, Chunlai
2017-04-01
The Ground Research & Applications System (GRAS) is one of the five systems of China's Lunar Exploration Project (CLEP). It is responsible for data acquisition, processing, management and application, and it is also the operation control center during satellite in-orbit and payload operation management. Chang'E-1, Chang'E-2 and Chang'E-3 have collected abundant lunar exploration data. The aim of this work is to present the implementation of data archiving and serving in CLEP's GRAS Core System software. This first approach provides a client-side API and server-side software allowing the creation of a simplified version of the CLEPDB data archiving software, and implements all elements required to complete the data archiving flow from data acquisition to persistent storage. The client side includes the components that run on devices that acquire or produce data, distributing and streaming it to configured remote archiving servers. The server side comprises an archiving service that stores all received data into PDS files. The archiving solution aims at storing data coming from the Data Acquisition Subsystem, the Operation Management Subsystem, the Data Preprocessing Subsystem and the Scientific Application & Research Subsystem. The serving solution aims at serving data to the various business systems, scientific researchers and public users. Data-driven and component clustering methods were adopted in this system; the former is used to provide real-time data archiving and data persistence services, while the latter is used to maintain the ability to archive and serve new data from the Chang'E missions. Meanwhile, it can save software development cost as well.
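As a toy sketch of the client-side idea only, the example below packages a data record together with sidecar metadata and persists it under an archive root. The class, paths, and metadata fields are invented for illustration and do not reflect the actual GRAS/CLEPDB software or the PDS label format.

```python
# Toy archiving-client sketch: persist a payload plus metadata under an
# archive root. All names, paths, and fields are hypothetical.
import json
from pathlib import Path


class ArchiveClient:
    def __init__(self, archive_root: str):
        self.root = Path(archive_root)

    def archive(self, product_id: str, payload: bytes, metadata: dict) -> Path:
        """Write the payload and a sidecar metadata file for one product."""
        target = self.root / product_id
        target.mkdir(parents=True, exist_ok=True)
        (target / "data.bin").write_bytes(payload)
        (target / "meta.json").write_text(json.dumps(metadata, indent=2))
        return target


if __name__ == "__main__":
    client = ArchiveClient("/tmp/clep_archive_demo")
    where = client.archive("PRODUCT_0001", b"\x00\x01", {"mission": "Chang'E-3"})
    print("archived to", where)
```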
Design and implementation of fishery rescue data mart system
NASA Astrophysics Data System (ADS)
Pan, Jun; Huang, Haiguang; Liu, Yousong
A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process to deal with original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistics interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including crystal reports, flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for non-technical staff to access customized reports. Characterized by quick configuration, safety and flexibility, the system has been successfully applied in a city fishery rescue department.
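To illustrate the quick-configuration idea in general terms, the sketch below turns a small dictionary of dimensions and measures into a GROUP BY query, so a new report can be added without writing SQL by hand. The schema, table, and column names are invented and are not the actual fishery rescue data mart.

```python
# Sketch of dictionary-driven report configuration: dimensions and measures
# declared as data are expanded into an aggregate SQL query. Schema invented.
from typing import Dict, List


def build_report_sql(config: Dict[str, List[str]], fact_table: str) -> str:
    """Expand a report configuration into a GROUP BY statement."""
    dims = ", ".join(config["dimensions"])
    measures = ", ".join(f"SUM({m}) AS total_{m}" for m in config["measures"])
    return f"SELECT {dims}, {measures} FROM {fact_table} GROUP BY {dims}"


report_config = {
    "dimensions": ["region", "incident_type", "year"],
    "measures": ["rescued_persons", "vessels_assisted"],
}

print(build_report_sql(report_config, fact_table="fact_rescue"))
```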
NASA Technical Reports Server (NTRS)
Phojanamongkolkij, Nipa; Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Robbins, Steven W.; Fenbert, James W.; Hartman, Christopher L.
2015-01-01
The System-Oriented Runway Management (SORM) concept is a collection of capabilities focused on more efficient use of runways while considering all of the factors that affect runway use. Tactical Runway Configuration Management (TRCM), one of the SORM capabilities, provides runway configuration and runway usage recommendations and monitors the active runway configuration for suitability given existing factors. This report focuses on the metroplex environment, with two or more proximate airports having arrival and departure operations that are highly interdependent. The myriad factors that affect metroplex operations require consideration in arriving at runway configurations that collectively best serve the system as a whole. To assess the metroplex TRCM (mTRCM) benefit, the performance metrics must be compared with actual historical operations. The historical configuration schedules can be viewed as the schedules produced by subject matter experts (SMEs), and therefore are referred to as the SMEs' schedules. These schedules were obtained from the FAA's Aviation System Performance Metrics (ASPM) database; this is the most representative information regarding runway configuration selection by SMEs. This report presents a benefit assessment of total delay, transit time, and throughput efficiency (TE) using the mTRCM algorithm at volumes representative of today's traffic at the New York metroplex (N90).
78 FR 23685 - Airworthiness Directives; The Boeing Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-22
... installing new operational software for the electrical load management system and configuration database. The..., installing a new electrical power control panel, and installing new operational software for the electrical load management system and configuration database. Since the proposed AD was issued, we have received...
Uniformity on the grid via a configuration framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V Terekhov et al.
2003-03-11
As the Grid permeates modern computing, Grid solutions continue to emerge and take shape. The actual Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
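As a rough illustration of the idea of deriving several service configurations from one uniform site description, the sketch below parses a hypothetical XML site file with Python's standard library; the element and attribute names are invented and do not reflect the actual SAM-Grid schema.

```python
# Illustrative sketch only: a uniform XML site description from which
# per-service settings (advertisement, monitoring, data handling) are
# instantiated. Element and attribute names are hypothetical.
import xml.etree.ElementTree as ET

SITE_XML = """
<site name="example-site">
  <storage element="se.example.org" path="/pnfs/data"/>
  <batch system="pbs" queue="grid" max_jobs="200"/>
  <monitoring port="8649"/>
</site>
"""

def instantiate_services(xml_text):
    root = ET.fromstring(xml_text)
    site = root.get("name")
    # Derive concrete service configurations from the single site description.
    return {
        "advertisement": {"site": site,
                          "max_jobs": int(root.find("batch").get("max_jobs"))},
        "monitoring": {"site": site,
                       "port": int(root.find("monitoring").get("port"))},
        "data_handling": {"site": site,
                          "storage": root.find("storage").get("element"),
                          "path": root.find("storage").get("path")},
    }

print(instantiate_services(SITE_XML))
```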
Managing and Communicating Operational Workflow
Weinberg, Stuart T.; Danciu, Ioana; Unertl, Kim M.
2016-01-01
Background: Healthcare team members in emergency department contexts have used electronic whiteboard solutions to help manage operational workflow for many years. Ambulatory clinic settings have highly complex operational workflow, but are still limited in electronic assistance to communicate and coordinate work activities. Objective: To describe and discuss the design, implementation, use, and ongoing evolution of a coordination and collaboration tool supporting ambulatory clinic operational workflow at Vanderbilt University Medical Center (VUMC). Methods: The outpatient whiteboard tool was initially designed to support healthcare work related to an electronic chemotherapy order-entry application. After a highly successful initial implementation in an oncology context, a high demand emerged across the organization for the outpatient whiteboard implementation. Over the past 10 years, developers have followed an iterative user-centered design process to evolve the tool. Results: The electronic outpatient whiteboard system supports 194 separate whiteboards and is accessed by over 2800 distinct users on a typical day. Clinics can configure their whiteboards to support unique workflow elements. Since initial release, features such as immunization clinical decision support have been integrated into the system, based on requests from end users. Conclusions: The success of the electronic outpatient whiteboard demonstrates the usefulness of an operational workflow tool within the ambulatory clinic setting. Operational workflow tools can play a significant role in supporting coordination, collaboration, and teamwork in ambulatory healthcare settings. PMID:27081407
Medication order communication using fax and document-imaging technologies.
Simonian, Armen I
2008-03-15
The implementation of fax and document-imaging technology to electronically communicate medication orders from nursing stations to the pharmacy is described. The evaluation of a commercially available pharmacy order imaging system to improve order communication and to make document retrieval more efficient led to the selection and customization of a system already licensed and used in seven affiliated hospitals. The system consisted of existing fax machines and document-imaging software that would capture images of written orders and send them from nursing stations to a central database server. Pharmacists would then retrieve the images and enter the orders in an electronic medical record system. The pharmacy representatives from all seven hospitals agreed on the configuration and functionality of the custom application. A 30-day trial of the order imaging system was successfully conducted at one of the larger institutions. The new system was then implemented at the remaining six hospitals over a period of 60 days. The transition from a paper-order system to electronic communication via a standardized pharmacy document management application tailored to the specific needs of this health system was accomplished. A health system with seven affiliated hospitals successfully implemented electronic communication and the management of inpatient paper-chart orders by using faxes and document-imaging technology. This standardized application eliminated the problems associated with the hand delivery of paper orders, the use of the pneumatic tube system, and the printing of traditional faxes.
NASA Technical Reports Server (NTRS)
Franklin, J. A.; Innis, R. C.
1980-01-01
Flight experiments were conducted to evaluate two control concepts for configuration management during the transition to landing approach for a powered-lift STOL aircraft. NASA Ames' augmentor wing research aircraft was used in the program. Transitions from nominal level-flight configurations at terminal area pattern speeds were conducted along straight and curved descending flightpaths. Stabilization and command augmentation for attitude and airspeed control were used in conjunction with a three-cue flight director that presented commands for pitch, roll, and throttle controls. A prototype microwave system provided landing guidance. Results of these flight experiments indicate that these configuration management concepts permit the successful performance of transitions and approaches along curved paths by powered-lift STOL aircraft. Flight director guidance was essential to accomplish the task.
Behavior-based network management: a unique model-based approach to implementing cyber superiority
NASA Astrophysics Data System (ADS)
Seng, Jocelyn M.
2016-05-01
Behavior-Based Network Management (BBNM) is a technological and strategic approach to mastering the identification and assessment of network behavior, whether human-driven or machine-generated. Recognizing that all five U.S. Air Force (USAF) mission areas rely on the cyber domain to support, enhance and execute their tasks, BBNM is designed to elevate awareness and improve the ability to better understand the degree of reliance placed upon a digital capability and the operational risk. Thus, the objective of BBNM is to provide a holistic view of the digital battle space to better assess the effects of security, monitoring, provisioning, utilization management, allocation to support mission sustainment and change control. Leveraging advances in conceptual modeling made possible by a novel advancement in software design and implementation known as Vector Relational Data Modeling (VRDM™), the BBNM approach entails creating a network simulation in which meaning can be inferred and used to manage network behavior according to policy, such as quickly detecting and countering malicious behavior. Initial research configurations have yielded executable BBNM models as combinations of conceptualized behavior within a network management simulation that includes only concepts of threats and definitions of "good" behavior. A proof of concept assessment called "Lab Rat" was designed to demonstrate the simplicity of network modeling and the ability to perform adaptation. The model was tested on real world threat data and demonstrated adaptive and inferential learning behavior. Preliminary results indicate this is a viable approach towards achieving cyber superiority in today's volatile, uncertain, complex and ambiguous (VUCA) environment.
Sánchez Cuervo, Marina; Muñoz García, María; Gómez de Salazar López de Silanes, María Esther; Bermejo Vicedo, Teresa
2015-03-01
To describe the features of a computer program for the management of drugs in special situations (off-label and compassionate use) in a Department of Hospital Pharmacy (PD), to describe the methodology followed for its implementation in the Medical Services, and to evaluate its use after 2 years of practice. The design was carried out by pharmacists of the PD. The stages of the process were: selection of a software development company, establishment of a working group, selection of a development platform, design of an interactive viewer, definition of functionality and data processing, creation of databases, connection, installation and configuration, application testing and development of improvements. A directed sequential strategy was used for implementation in the Medical Services. The program's utility and the experience of its use were evaluated after 2 years. A multidisciplinary working group was formed and developed Pk_Usos®. The program works in a web environment with a common viewer for all users, enables real-time checking of the status of request files, and adapts to the procedure for managing medications in special situations. Pk_Usos® was introduced first in the Oncology Department, with 15 oncologists as users of the program. 343 patients had 384 treatment requests managed, of which 363 were authorized over the two years. Pk_Usos® is the first software designed for the management of drugs in special situations in the PD. It is a dynamic and efficient tool for all professionals involved in the process, optimizing their time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanis, S., E-mail: kaplanis@teipat.gr; Kaplani, E.
The paper presents the design features, the energy modelling and optical performance details of two pilot Intelligent Energy Buildings (IEB). Both are evolutions of the Zero Energy Building (ZEB) concept. RES innovations backed up by signal processing, simulation models and ICT tools were embedded into the building structures in order to implement a new predictive energy management concept. In addition, nano-coatings, produced from TiO2 and ITO nano-particles, were deposited on the IEB structural elements, especially on the window panes and the PV glass covers. They exhibited promising SSP values, which lowered the cooling loads and increased the PV modules' yield. Both pilot IEB units were equipped with an on-line dynamic hourly solar radiation prediction model, implemented by sensors and the related software, to manage effectively the energy source, the loads and the storage or backup system. The IEB energy sources cover the thermal loads via a south façade embedded in the wall and a solar roof which consists of a specially designed solar collector type, while a PV generator is part of the solar roof, like a compact BIPV in a hybrid configuration with a small wind turbine.
Emma L. Witt; Christopher D. Barton; Jeffrey W. Stringer; Randy Kolka; Mac A. Cherry
2016-01-01
Streamside management zones (SMZs) are a common best management practice (BMP) used to reduce water quality impacts from logging. The objective of this research was to evaluate the impact of varying SMZ configurations on water quality. Treatments (T1, T2, and T3) that varied in SMZ width, canopy retention within the SMZ, and BMP utilization were applied at the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
WHITE, D.A.
1999-12-29
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the AZ1101 Mixer Pump Demonstration Data Acquisition System (DAS) and the Sludge Mobilization Cart (Gamma Cart) Data Acquisition and Control System (DACS).
A network architecture for International Business Satellite communications
NASA Astrophysics Data System (ADS)
Takahata, Fumio; Nohara, Mitsuo; Takeuchi, Yoshio
Demand Assignment (DA) control is expected to be introduced in the International Business Satellite communications (IBS) network in order to cope with growing international business traffic. The paper discusses the DA/IBS network from the viewpoints of network configuration, satellite channel configuration and DA control. The network configuration proposed here consists of one Central Station with network management functions and several Network Coordination Stations with user management functions. A satellite channel configuration is also presented along with a tradeoff study on transmission bit rate, high power amplifier output power requirement, and service quality. The DA control flow and protocol based on CCITT Signalling System No. 7 are also proposed.
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment
NASA Astrophysics Data System (ADS)
Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor
2015-12-01
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to ensure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server-side but, following progressive enhancement guidelines, verification might also be performed on the client-side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE therefore speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
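The double-enclosing-bracket referencing described above can be illustrated with a small sketch; the resolver and the in-memory "database" below are hypothetical and are not the FENCE or Glance APIs.

```python
# Rough sketch of the idea of referencing database records inside a JSON
# configuration by wrapping identifiers in double enclosing brackets.
# The resolver and the lookup table are illustrative, not the actual framework.
import json
import re

FAKE_GLANCE_DB = {"EQUIP-001": "Radiation monitor, experimental cavern",
                  "AUTHOR-042": "A. Collaborator"}

page_config = json.loads("""
{
  "title": "Equipment record",
  "fields": [
    {"label": "Device", "value": "{{EQUIP-001}}"},
    {"label": "Responsible", "value": "{{AUTHOR-042}}"}
  ]
}
""")

def resolve(node):
    """Recursively replace {{identifier}} placeholders with database records."""
    if isinstance(node, dict):
        return {k: resolve(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(v) for v in node]
    if isinstance(node, str):
        return re.sub(r"\{\{(.+?)\}\}",
                      lambda m: FAKE_GLANCE_DB.get(m.group(1), m.group(0)), node)
    return node

print(resolve(page_config))
```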
Software Defined Networking for Improved Wireless Sensor Network Management: A Survey
Ndiaye, Musa; Hancke, Gerhard P.; Abu-Mahfouz, Adnan M.
2017-01-01
Wireless sensor networks (WSNs) are becoming increasingly popular with the advent of the Internet of things (IoT). Various real-world applications of WSNs such as in smart grids, smart farming and smart health would require a potential deployment of thousands or maybe hundreds of thousands of sensor nodes/actuators. To ensure proper working order and network efficiency of such a network of sensor nodes, an effective WSN management system has to be integrated. However, the inherent challenges of WSNs such as sensor/actuator heterogeneity, application dependency and resource constraints have led to challenges in implementing effective traditional WSN management. This difficulty in management increases as the WSN becomes larger. Software Defined Networking (SDN) provides a promising solution for flexible management of WSNs by allowing the separation of the control logic from the sensor nodes/actuators. The advantage of this SDN-based management in WSNs is that it enables centralized control of the entire WSN, making it simpler to deploy network-wide management protocols and applications on demand. This paper highlights some of the recent work on traditional WSN management in brief and reviews SDN-based management techniques for WSNs in greater detail while drawing attention to the advantages that SDN brings to traditional WSN management. This paper also investigates open research challenges in coming up with mechanisms for flexible and easier SDN-based WSN configuration and management. PMID:28471390
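The core SDN idea of separating control logic from the sensor nodes can be sketched as follows; the classes, fields and routing policy are purely illustrative and do not correspond to any particular SDN-WSN protocol.

```python
# Minimal sketch of the SDN idea applied to a WSN: control logic lives in a
# central controller that computes forwarding rules and pushes them to sensor
# nodes, which only execute them. All classes and fields are illustrative.

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.flow_table = {}          # destination -> next hop, set by controller

    def install_rule(self, destination, next_hop):
        self.flow_table[destination] = next_hop

class Controller:
    def __init__(self):
        self.topology = {}            # node_id -> (node, neighbour ids)

    def register(self, node, neighbours):
        self.topology[node.node_id] = (node, list(neighbours))

    def push_routes(self, sink_id):
        # Trivial policy for illustration: every node forwards towards its
        # first registered neighbour on the way to the sink.
        for node_id, (node, neighbours) in self.topology.items():
            if node_id != sink_id and neighbours:
                node.install_rule(sink_id, neighbours[0])

ctrl = Controller()
n1, n2, sink = SensorNode("n1"), SensorNode("n2"), SensorNode("sink")
ctrl.register(n1, ["n2"]); ctrl.register(n2, ["sink"]); ctrl.register(sink, [])
ctrl.push_routes("sink")
print(n1.flow_table, n2.flow_table)
```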
TWRS configuration management requirement source document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The TWRS Configuration Management (CM) Requirement Source document prescribes CM as a basic product life-cycle function by which work and activities are conducted or accomplished. This document serves as the requirements basis for the TWRS CM program. The objective of the TWRS CM program is to establish consistency among requirements, physical/functional configuration, information, and documentation for TWRS and TWRS products, and to maintain this consistency throughout the life-cycle of TWRS and the product, particularly as changes are being made.
Configuration of management accounting information system for multi-stage manufacturing
NASA Astrophysics Data System (ADS)
Mkrtychev, S. V.; Ochepovsky, A. V.; Enik, O. A.
2018-05-01
The article presents an approach to the configuration of a management accounting information system (MAIS) that provides automated calculations and the registration of normative production losses in multi-stage manufacturing. The use of a MAIS with the proposed configuration at enterprises of the textile and woodworking industries made it possible to increase the accuracy of calculations of normative production losses and to organize their accounting with reference to individual stages of the technological process. Thus, high efficiency of multi-stage manufacturing control is achieved.
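A minimal sketch of the kind of calculation such a MAIS could automate is shown below, applying an assumed normative loss percentage at each stage and registering the stage-by-stage losses; the stage names and rates are invented.

```python
# Hedged sketch of a MAIS-style calculation: apply a normative loss rate at
# each manufacturing stage and register the loss per stage. Values are invented.

stages = [("cutting", 0.03), ("assembly", 0.015), ("finishing", 0.02)]

def normative_losses(input_quantity, stages):
    """Return per-stage normative losses and the remaining output quantity."""
    remaining = input_quantity
    ledger = []
    for name, loss_rate in stages:
        loss = remaining * loss_rate
        remaining -= loss
        ledger.append({"stage": name, "loss": round(loss, 3),
                       "output": round(remaining, 3)})
    return ledger, remaining

ledger, output = normative_losses(1000.0, stages)
for entry in ledger:
    print(entry)
```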
49 CFR Appendix A to Part 232 - Schedule of Civil Penalties 1
Code of Federal Regulations, 2011 CFR
2011-10-01
... 7,500 (f) Improper use of car with inoperative or ineffective brakes 2,500 5,000 (g) Improper... Design, interoperability, and configuration management requirements: (a) Failure to meet minimum... comply with a proper configuration management plan 7,500 11,000 232.605 Training Requirements: (a...
Space Geodesy Project Information and Configuration Management Procedure
NASA Technical Reports Server (NTRS)
Merkowitz, Stephen M.
2016-01-01
This plan defines the Space Geodesy Project (SGP) policies, procedures, and requirements for Information and Configuration Management (CM). This procedure describes a process that is intended to ensure that all proposed and approved technical and programmatic baselines and changes to the SGP hardware, software, support systems, and equipment are documented.
Data base management system configuration specification. [computer storage devices
NASA Technical Reports Server (NTRS)
Neiers, J. W.
1979-01-01
The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.
Representation of thermal infrared imaging data in the DICOM using XML configuration files.
Ruminski, Jacek
2007-01-01
The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.
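The configuration-file idea can be sketched as follows: a hypothetical XML mapping relates fields of a proprietary thermal-camera header to DICOM attribute keywords, and a converter resolves them into an attribute set that a DICOM toolkit could then encode. The mapping, header fields and modality code are assumptions, not the authors' schema.

```python
# Sketch of the configuration-file idea: an XML description maps fields of a
# proprietary thermal-camera header onto DICOM attribute keywords. The mapping
# and header fields are hypothetical; an actual converter would hand the
# resulting attribute set to a DICOM toolkit (e.g. pydicom) for encoding.
import xml.etree.ElementTree as ET

MAPPING_XML = """
<mapping modality="TG">
  <attribute keyword="PatientID" source="patient_id"/>
  <attribute keyword="StudyDate" source="acquisition_date"/>
  <attribute keyword="Rows"      source="height"/>
  <attribute keyword="Columns"   source="width"/>
</mapping>
"""

thermal_header = {"patient_id": "P0001", "acquisition_date": "20070101",
                  "height": 240, "width": 320}

def map_to_dicom(xml_text, header):
    root = ET.fromstring(xml_text)
    attrs = {"Modality": root.get("modality")}
    for item in root.findall("attribute"):
        attrs[item.get("keyword")] = header[item.get("source")]
    return attrs

print(map_to_dicom(MAPPING_XML, thermal_header))
```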
NASA Technical Reports Server (NTRS)
Swenson, Paul
2017-01-01
Satellite/payload ground systems are typically highly customized to a specific mission's use cases and utilize hundreds (or thousands) of specialized point-to-point interfaces for data flows and file transfers. Documentation and tracking of these complex interfaces require extensive time to develop and extremely high staffing costs. Implementation and testing of these interfaces are even more cost-prohibitive, and documentation often lags behind implementation, resulting in inconsistencies down the road. With expanding threat vectors, IT Security, Information Assurance and Operational Security have become key ground system architecture drivers. New Federal security-related directives are generated on a daily basis, imposing new requirements on current and existing ground systems; these mandated activities and data calls typically carry little or no additional funding for implementation. As a result, ground system sustaining engineering groups and information technology staff continually struggle to keep up with the rolling tide of security. Advancing security concerns and shrinking budgets are pushing these large stove-piped ground systems to begin sharing resources, i.e., operational/sysadmin staff, IT security baselines, architecture decisions, or even networks and hosting infrastructure. Refactoring these existing ground systems into multi-mission assets proves extremely challenging due to what is typically very tight coupling between legacy components; as a result, many "multi-mission" operations environments end up simply sharing compute resources and networks due to the difficulty of refactoring into true multi-mission systems. Utilizing continuous integration and rapid system deployment technologies in conjunction with an open-architecture messaging approach allows system engineers and architects to worry less about the low-level details of interfaces between components and the configuration of systems. GMSEC messaging is inherently designed to support multi-mission requirements and allows components to aggregate data across multiple homogeneous or heterogeneous satellites or payloads; the highly successful Goddard Science and Planetary Operations Control Center (SPOCC) utilizes GMSEC as the hub for its automation and situational awareness capability. This shifts focus towards getting the ground system to a final configuration-managed baseline, as well as towards multi-mission, big-picture capabilities that help increase situational awareness, promote cross-mission sharing and establish enhanced fleet management capabilities across all levels of the enterprise.
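The loosely coupled, message-oriented integration style argued for above can be sketched with a minimal in-memory publish/subscribe bus; this is not the GMSEC API, and the subject naming below is only indicative.

```python
# Not the GMSEC API: a minimal in-memory publish/subscribe bus illustrating the
# message-oriented, loosely coupled integration style described above.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # subject -> callbacks

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        for callback in self.subscribers.get(subject, []):
            callback(subject, message)

bus = MessageBus()
bus.subscribe("MISSION1.SAT1.TLM",
              lambda subj, msg: print("situational awareness:", subj, msg))
bus.publish("MISSION1.SAT1.TLM", {"battery_voltage": 28.1})
```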
NASA Technical Reports Server (NTRS)
McCartney, Patrick; MacLean, John
2012-01-01
mREST is an implementation of the REST architecture specific to the management and sharing of data in a system of logical elements. The purpose of this document is to clearly define the mREST interface protocol. The interface protocol covers all of the interaction between mREST clients and mREST servers. System-level requirements are not specifically addressed. In an mREST system, there are typically some backend interfaces between a Logical System Element (LSE) and the associated hardware/software system. For example, a network camera LSE would have a backend interface to the camera itself. These interfaces are specific to each type of LSE and are not covered in this document. There are also frontend interfaces that may exist in certain mREST manager applications. For example, an electronic procedure execution application may have a specialized interface for configuring the procedures. This interface would be application specific and outside of this document scope. mREST is intended to be a generic protocol which can be used in a wide variety of applications. A few scenarios are discussed to provide additional clarity but, in general, application-specific implementations of mREST are not specifically addressed. In short, this document is intended to provide all of the information necessary for an application developer to create mREST interface agents. This includes both mREST clients (mREST manager applications) and mREST servers (logical system elements, or LSEs).
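As a hedged illustration of how a manager application might talk to a logical system element over a REST-style interface, the sketch below uses Python's standard library; the host, resource paths and payloads are hypothetical and are not defined by the mREST document.

```python
# Illustrative client sketch only: the mREST document defines the actual
# protocol; endpoint paths, payloads and the host below are hypothetical.
import json
import urllib.request

BASE_URL = "http://lse.example.local:8080"   # a hypothetical camera LSE

def rest_get(resource):
    with urllib.request.urlopen(BASE_URL + resource) as response:
        return json.loads(response.read().decode("utf-8"))

def rest_put(resource, payload):
    data = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(BASE_URL + resource, data=data, method="PUT",
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status

# A manager application might read a status resource and update a setting:
# status = rest_get("/camera/status")
# rest_put("/camera/config/exposure", {"value": 0.02})
```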
Security management based on trust determination in cognitive radio networks
NASA Astrophysics Data System (ADS)
Li, Jianwu; Feng, Zebing; Wei, Zhiqing; Feng, Zhiyong; Zhang, Ping
2014-12-01
Security has played a major role in cognitive radio networks. Numerous studies have mainly focused on attack detection based on source localization and detection probability. However, few of them have taken the penalty of attackers into consideration or addressed how to implement effective punitive measures against attackers. To address this issue, this article proposes a novel penalty mechanism based on cognitive trust value. The main feature of this mechanism is realized by six functions: authentication, interactive, configuration, trust value collection, storage and update, and punishment. A data fusion center (FC) and cluster heads (CHs) are put forward as a hierarchical architecture to manage the trust values of cognitive users. Misbehaving users are punished by the FC through a decline in their trust value; distinguishing attack users in this way is essential for guaranteeing network security. Simulation results verify the rationality and effectiveness of our proposed mechanism.
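A simplified sketch of the penalty idea follows: a fusion center keeps a trust value per cognitive user, declines it on reported misbehaviour, and excludes users that fall below a threshold. The parameter values and update rule are illustrative assumptions, not the paper's exact mechanism.

```python
# Simplified sketch of the penalty mechanism: the fusion center tracks a trust
# value per user and declines it when misbehaviour is reported; users below a
# threshold are excluded. All parameter values are illustrative only.

TRUST_MIN = 0.3       # below this, the user is treated as an attacker
PENALTY = 0.2         # trust decrement per confirmed misbehaviour
REWARD = 0.05         # small increment for consistent honest behaviour

class FusionCenter:
    def __init__(self):
        self.trust = {}                        # user id -> trust value in [0, 1]

    def register(self, user_id):
        self.trust[user_id] = 0.5              # neutral initial trust

    def report(self, user_id, misbehaved):
        t = self.trust[user_id]
        t = t - PENALTY if misbehaved else min(1.0, t + REWARD)
        self.trust[user_id] = max(0.0, t)

    def admitted_users(self):
        return [u for u, t in self.trust.items() if t >= TRUST_MIN]

fc = FusionCenter()
fc.register("su1"); fc.register("su2")
fc.report("su1", misbehaved=True); fc.report("su1", misbehaved=True)
fc.report("su2", misbehaved=False)
print(fc.trust, fc.admitted_users())
```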
NASA Technical Reports Server (NTRS)
Barry, Matthew R.
2006-01-01
The X-Windows Socket Widget Class ("Class" is used here in the object-oriented-programming sense of the word) was devised to simplify the task of implementing network connections for graphical-user-interface (GUI) computer programs. UNIX Transmission Control Protocol/Internet Protocol (TCP/IP) socket programming libraries require many method calls to configure, operate, and destroy sockets. Most X Windows GUI programs use widget sets or toolkits to facilitate management of complex objects. The widget standards facilitate construction of toolkits and application programs. The X-Windows Socket Widget Class encapsulates UNIX TCP/IP socket-management tasks within the framework of an X Windows widget. Using the widget framework, X Windows GUI programs can treat one or more network socket instances in the same manner as that of other graphical widgets, making it easier to program sockets. Wrapping TCP/IP socket programming libraries inside a widget framework enables a programmer to treat a network interface as though it were a GUI.
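The wrapping idea translates naturally to other languages; the sketch below, written in Python rather than C/X Windows, hides socket setup and teardown behind a small class so application code can treat a connection like any other managed object. The host and port are placeholders.

```python
# Analogous sketch only (not the X-Windows Socket Widget Class itself): hiding
# socket setup and teardown behind a small class with simple method calls.
import socket

class SocketClient:
    """Wraps a TCP client socket behind connect/send/receive/close calls."""

    def __init__(self, host, port):
        self.address = (host, port)
        self.sock = None

    def connect(self, timeout=5.0):
        self.sock = socket.create_connection(self.address, timeout=timeout)

    def send_line(self, text):
        self.sock.sendall(text.encode("utf-8") + b"\n")

    def receive(self, max_bytes=4096):
        return self.sock.recv(max_bytes).decode("utf-8")

    def close(self):
        if self.sock is not None:
            self.sock.close()
            self.sock = None

# client = SocketClient("telemetry.example.local", 5000)
# client.connect(); client.send_line("STATUS?"); print(client.receive()); client.close()
```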
Generic functional requirements for a NASA general-purpose data base management system
NASA Technical Reports Server (NTRS)
Lohman, G. M.
1981-01-01
Generic functional requirements for a general-purpose, multi-mission data base management system (DBMS) for application to remotely sensed scientific data bases are detailed. The motivation for utilizing DBMS technology in this environment is explained. The major requirements include: (1) a DBMS for scientific observational data; (2) a multi-mission capability; (3) user-friendliness; (4) extensive and integrated information about data; (5) robust languages for defining data structures and formats; (6) scientific data types and structures; (7) flexible physical access mechanisms; (8) ways of representing spatial relationships; (9) a high level nonprocedural interactive query and data manipulation language; (10) data base maintenance utilities; (11) high rate input/output and large data volume storage; and (12) adaptability to a distributed data base and/or data base machine configuration. Detailed functions are specified in a top-down hierarchic fashion. Implementation, performance, and support requirements are also given.
Leaf LIMS: A Flexible Laboratory Information Management System with a Synthetic Biology Focus.
Craig, Thomas; Holland, Richard; D'Amore, Rosalinda; Johnson, James R; McCue, Hannah V; West, Anthony; Zulkower, Valentin; Tekotte, Hille; Cai, Yizhi; Swan, Daniel; Davey, Robert P; Hertz-Fowler, Christiane; Hall, Anthony; Caddick, Mark
2017-12-15
This paper presents Leaf LIMS, a flexible laboratory information management system (LIMS) designed to address the complexity of synthetic biology workflows. At the project's inception there was a lack of a LIMS designed specifically to address synthetic biology processes, with most systems focused on either next generation sequencing or biobanks and clinical sample handling. Leaf LIMS implements integrated project, item, and laboratory stock tracking, offering complete sample and construct genealogy, materials and lot tracking, and modular assay data capture. Hence, it enables highly configurable task-based workflows and supports data capture from project inception to completion. As such, in addition to supporting synthetic biology it is well suited to many laboratory environments with multiple projects and users. The system is deployed as a web application through Docker and is provided under a permissive MIT license. It is freely available for download at https://leaflims.github.io.
A reduced energy supply strategy in active vibration control
NASA Astrophysics Data System (ADS)
Ichchou, M. N.; Loukil, T.; Bareille, O.; Chamberland, G.; Qiu, J.
2011-12-01
In this paper, a control strategy is presented and numerically tested. This strategy aims to achieve the potential performance of fully active systems with a reduced energy supply. These energy needs are expected to be comparable to the power demands of semi-active systems, while system performance is intended to be comparable to that of a fully active configuration. The underlying strategy is called 'global semi-active control'. This control approach results from an energy investigation based on management of the optimal control process. Energy management encompasses storage and convenient restitution. The proposed strategy monitors a given active law without any external energy supply by considering purely dissipative and energy-demanding phases. Such a control law is offered here along with an analysis of its properties. A suboptimal form, well adapted for practical implementation steps, is also given. Moreover, a number of numerical experiments are proposed in order to validate test findings.
Next Generation Monitoring: Tier 2 Experience
NASA Astrophysics Data System (ADS)
Fay, R.; Bland, J.; Jones, S.
2017-10-01
Monitoring IT infrastructure is essential for maximizing availability and minimizing disruption by detecting failures and issues as they develop. The HEP group at Liverpool have recently updated our monitoring infrastructure with the goal of increasing coverage, improving visualization capabilities, and streamlining configuration and maintenance. Here we present a summary of Liverpool's experience, the monitoring infrastructure, and the tools used to build it. In brief, system checks are configured in Puppet using Hiera and managed by Sensu, replacing Nagios. Centralised logging is managed with Elasticsearch, together with Logstash and Filebeat. Kibana provides an interface for interactive analysis, including visualization and dashboards. Metric collection is also configured in Puppet, managed by collectd and stored in Graphite, with Grafana providing a visualization and dashboard tool. The Uchiwa dashboard for Sensu provides a web interface for viewing infrastructure status. Alert capabilities are provided via external handlers. A custom alert handler is in development to provide an easily configurable, extensible and maintainable alert facility.
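A custom alert handler of the kind mentioned above might look like the following sketch, which assumes the Sensu pipe-handler convention of receiving the triggering event as JSON on standard input; the routing rules and event fields used here are illustrative, not Liverpool's implementation.

```python
# Hedged sketch of a custom alert handler: it assumes a Sensu-style pipe
# handler that receives the event as JSON on standard input and decides where
# to route the alert. Field names and routing rules are invented.
import json
import sys

def route_alert(event):
    check = event.get("check", {})
    severity = check.get("status", 0)            # 0 ok, 1 warning, 2 critical
    target = "oncall-pager" if severity >= 2 else "team-email"
    return {"target": target,
            "summary": f"{event.get('client', {}).get('name', 'unknown host')}: "
                       f"{check.get('name', 'unknown check')} status {severity}"}

if __name__ == "__main__":
    event = json.load(sys.stdin)
    alert = route_alert(event)
    # A real handler would call a mail/pager API here; we just print the decision.
    print(json.dumps(alert))
```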
Pak, JuGeon; Park, KeeHyun
2012-01-01
We propose a smart medication dispenser having a high degree of scalability and remote manageability. We construct the dispenser to have extensible hardware architecture for achieving scalability, and we install an agent program in it for achieving remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button at that time, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser contains mainly one MDT; however, the dispenser can be extended to include more MDTs in order to support multiple users using one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configurations to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and performs the management operations from the medication monitoring server suitably.
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The work performed in augmenting and improving the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, the data processing loads imposed on these configurations, and the executive software that controls system operations. Through these simulations, NASA has an extremely cost-effective capability for the design and analysis of computer-based data management systems.
Case studies in configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.
1989-01-01
A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.
Particle dispersion in homogeneous turbulence using the one-dimensional turbulence model
Sun, Guangyuan; Lignell, David O.; Hewson, John C.; ...
2014-10-09
Lagrangian particle dispersion is studied using the one-dimensional turbulence (ODT) model in homogeneous decaying turbulence configurations. The ODT model has been widely and successfully applied to a number of reacting and nonreacting flow configurations, but only limited application has been made to multiphase flows. We present a version of the particle implementation and interaction with the stochastic and instantaneous ODT eddy events. The model is characterized by comparison to experimental data of particle dispersion for a range of intrinsic particle time scales and body forces. Particle dispersion, velocity, and integral time scale results are presented. Moreover, the particle implementation introduces a single model parameter βp, and sensitivity to this parameter and behavior of the model are discussed. Good agreement is found with experimental data and the ODT model is able to capture the particle inertial and trajectory crossing effects. Our results serve as a validation case of the multiphase implementations of ODT for extensions to other flow configurations.
CMOS Rad-Hard Front-End Electronics for Precise Sensors Measurements
NASA Astrophysics Data System (ADS)
Sordo-Ibáñez, Samuel; Piñero-García, Blanca; Muñoz-Díaz, Manuel; Ragel-Morales, Antonio; Ceballos-Cáceres, Joaquín; Carranza-González, Luis; Espejo-Meana, Servando; Arias-Drake, Alberto; Ramos-Martos, Juan; Mora-Gutiérrez, José Miguel; Lagos-Florido, Miguel Angel
2016-08-01
This paper reports a single-chip solution for the implementation of radiation-tolerant CMOS front-end electronics (FEE) for applications requiring the acquisition of base-band sensor signals. The FEE has been designed in a 0.35 μm CMOS process and implements a set of parallel conversion channels with a high level of configurability to adapt the resolution, conversion rate, and dynamic input range to the required application. Each conversion channel consists of a fully-differential implementation of a configurable-gain instrumentation amplifier, followed by a configurable dual-slope ADC (DS ADC) with up to 16 bits of resolution. The ASIC also incorporates precise thermal monitoring, sensor conditioning and error detection functionalities to ensure proper operation in extreme environments. Experimental results confirm that the proposed topologies, in conjunction with the applied radiation-hardening techniques, are reliable enough to be used without loss of performance in environments with an extended temperature range (between -25 and 125 °C) and a total dose beyond 300 krad.
System Oriented Runway Management: A Research Update
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Brown, Sherilyn A.; Stough, Harry P., III; Eisenhawer, Steve; Atkins, Stephen; Long, Dou
2011-01-01
The runway configuration used by an airport has significant implications with respect to its capacity and ability to effectively manage surface and airborne traffic. Aircraft operators rely on runway configuration information because it can significantly affect an airline's operations and the planning of their resources. Current practices in runway management are limited by a relatively short time horizon for reliable weather information and little assistance from automation. Wind velocity is the primary consideration when selecting a runway configuration; however, when winds are below a defined threshold, discretion may be used to determine the configuration. Other considerations relevant to runway configuration selection include airport operator constraints, weather conditions (other than winds), traffic demand, user preferences, surface congestion, and navigational system outages. The future offers an increasingly complex landscape for the runway management process. Concepts and technologies that hold the potential for capacity and efficiency increases for both operations on the airport surface and in terminal and enroute airspace are currently under investigation. Complementary advances in runway management are required if capacity and efficiency increases in those areas are to be realized. The System Oriented Runway Management (SORM) concept has been developed to address this critical part of the traffic flow process. The SORM concept was developed to address all aspects of runway management for airports of varying sizes and to accommodate a myriad of traffic mixes. SORM, to date, addresses the single airport environment; however, the longer term vision is to incorporate capabilities for multiple airport (Metroplex) operations as well as to accommodate advances in capabilities resulting from ongoing research. This paper provides an update of research supporting the SORM concept, including the following: a concept overview, results of a TRCM simulation, single-airport and Metroplex modeling efforts, and a benefits assessment.
NASA Technical Reports Server (NTRS)
1974-01-01
The Earth Observatory Satellite (EOS) data management system (DMS) is discussed. The DMS is composed of several subsystems or system elements which have basic purposes and are connected together so that the DMS can support the EOS program by providing the following: (1) payload data acquisition and recording, (2) data processing and product generation, (3) spacecraft and processing management and control, and (4) data user services. The configuration and purposes of the primary or high-data rate system and the secondary or local user system are explained. Diagrams of the systems are provided to support the systems analysis.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA s history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
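A notional sketch of data-driven sequencing is given below: activity transitions and parameter reconfigurations are expressed as data so they can be updated without recompiling software. The activity names, parameters and structure are invented for illustration and are not the Orion GN&C configuration schema.

```python
# Notional sketch of a data-driven sequence definition: automated transitions
# and parameter reconfigurations are described as data, so they can be updated
# without recompiling the software. Names and values are invented.

SEQUENCE_CONFIG = {
    "entry_prep": {
        "next_on_complete": "entry",
        "parameters": {"nav_filter_rate_hz": 10, "rcs_mode": "coarse"},
    },
    "entry": {
        "next_on_complete": "descent",
        "parameters": {"nav_filter_rate_hz": 50, "rcs_mode": "fine"},
    },
}

class SequenceEngine:
    def __init__(self, config, initial_activity):
        self.config = config
        self.activity = initial_activity

    def apply_parameters(self):
        return self.config[self.activity]["parameters"]

    def complete_activity(self):
        self.activity = self.config[self.activity]["next_on_complete"]

engine = SequenceEngine(SEQUENCE_CONFIG, "entry_prep")
print(engine.activity, engine.apply_parameters())
engine.complete_activity()
print(engine.activity, engine.apply_parameters())
```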
A self-configuring control system for storage and computing departments at INFN-CNAF Tier1
NASA Astrophysics Data System (ADS)
Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir
2015-05-01
The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space organized in different GPFS file system clusters, data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol, and, finally, writing and reading data operations on the magnetic tape backend. One of the most important points in obtaining a reliable service is a control system that can warn if problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, during daily operations the configurations can change; for example, the roles of the GPFS cluster nodes can be modified, so obsolete nodes must be removed from the production control system and new servers added to the ones already present. Manual management of all these changes can be difficult when there are several changes; it can also take a long time and is easily subject to human error or misconfiguration. For these reasons we have developed a control system that is able to configure itself whenever any change occurs. This system has been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three key points in this system. The first is a software configurator service (e.g. Quattor or Puppet) for the server machines that we want to monitor with the control system; this service must ensure the presence of appropriate sensors and custom scripts on the nodes to be checked and should be able to install and update software packages on them. The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information such as the type of hardware, the network switch to which the machine is connected, whether the machine is real (physical) or virtual, the hypervisor to which it may belong, and so on. The last key point is control system software (in our implementation we chose Nagios), capable of assessing the status of the servers and services, able to attempt to restore the working state, restart or inhibit software services, and send suitable alarm messages to the site administrators. The integration of these three elements is achieved by appropriate scripts and custom implementations that allow the self-configuration of the system according to a decisional logic; the whole combination of all the above-mentioned components is discussed in depth in this paper.
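The self-configuration step can be sketched as generating monitoring configuration directly from the machine database, so that added or removed servers are picked up without manual editing; the inventory fields, roles and check names below are invented, and the output is only Nagios-style, not the site's actual templates.

```python
# Simplified sketch of the self-configuration step: read the machine inventory
# from a database-like source and emit Nagios-style host/service definitions.
# The inventory, roles and check names are illustrative only.

inventory = [
    {"host": "gpfs-nsd01", "role": "gpfs", "virtual": False},
    {"host": "gridftp02",  "role": "gridftp", "virtual": True},
]

CHECKS_BY_ROLE = {"gpfs": ["check_gpfs_mount", "check_disk"],
                  "gridftp": ["check_gridftp_service"]}

def render_monitoring_config(inventory):
    blocks = []
    for machine in inventory:
        blocks.append("define host {\n"
                      f"    host_name {machine['host']}\n"
                      "    use generic-host\n}")
        for check in CHECKS_BY_ROLE.get(machine["role"], []):
            blocks.append("define service {\n"
                          f"    host_name {machine['host']}\n"
                          f"    service_description {check}\n"
                          f"    check_command {check}\n"
                          "    use generic-service\n}")
    return "\n".join(blocks)

print(render_monitoring_config(inventory))
```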
Considerations for Software Defined Networking (SDN): Approaches and use cases
NASA Astrophysics Data System (ADS)
Bakshi, K.
Software Defined Networking (SDN) is an evolutionary approach to network design and functionality based on the ability to programmatically modify the behavior of network devices. SDN uses user-customizable and configurable software that's independent of hardware to enable networked systems to expand data flow control. SDN is in large part about understanding and managing a network as a unified abstraction. It will make networks more flexible, dynamic, and cost-efficient, while greatly simplifying operational complexity. And this advanced solution provides several benefits including network and service customizability, configurability, improved operations, and increased performance. There are several approaches to SDN and its practical implementation. Among them, two have risen to prominence with differences in pedigree and implementation. This paper's main focus will be to define, review, and evaluate salient approaches and use cases of the OpenFlow and Virtual Network Overlay approaches to SDN. OpenFlow is a communication protocol that gives access to the forwarding plane of a network's switches and routers. The Virtual Network Overlay relies on a completely virtualized network infrastructure and services to abstract the underlying physical network, which allows the overlay to be mobile to other physical networks. This is an important requirement for cloud computing, where applications and associated network services are migrated to cloud service providers and remote data centers on the fly as resource demands dictate. The paper will discuss how and where SDN can be applied and implemented, including research and academia, virtual multitenant data center, and cloud computing applications. Specific attention will be given to the cloud computing use case, where automated provisioning and programmable overlay for scalable multi-tenancy is leveraged via the SDN approach.
HDL Based FPGA Interface Library for Data Acquisition and Multipurpose Real Time Algorithms
NASA Astrophysics Data System (ADS)
Fernandes, Ana M.; Pereira, R. C.; Sousa, J.; Batista, A. J. N.; Combo, A.; Carvalho, B. B.; Correia, C. M. B. A.; Varandas, C. A. F.
2011-08-01
The inherent parallelism of the logic resources, the flexibility of its configuration and the performance at high processing frequencies make the field programmable gate array (FPGA) the most suitable device to be used both for real-time algorithm processing and data transfer in instrumentation modules. Moreover, the reconfigurability of these FPGA-based modules enables different applications to be implemented on the same module. When using a reconfigurable module for various applications, the availability of a common interface library for easier implementation of the algorithms on the FPGA leads to more efficient development. The FPGA configuration is usually specified in a hardware description language (HDL) or another higher-level descriptive language. The critical paths, such as the management of internal hardware clocks, which require deep knowledge of the module behavior, shall be implemented in HDL to optimize the timing constraints. The common interface library should include these critical paths, freeing the application designer from hardware complexity and allowing any of the available high-level abstraction languages to be chosen for the algorithm implementation. With this purpose, a modular Verilog code was developed for the Virtex 4 FPGA of the in-house Transient Recorder and Processor (TRP) hardware module, based on the Advanced Telecommunications Computing Architecture (ATCA), with eight channels sampling at up to 400 MSamples/s (MSPS). The TRP was designed to perform real-time Pulse Height Analysis (PHA), Pulse Shape Discrimination (PSD) and Pile-Up Rejection (PUR) algorithms at a high count rate (a few Mevents/s). A brief description of this modular code is presented and examples of its use as an interface with end-user algorithms, including a PHA with PUR, are described.
IO Management Controller for Time and Space Partitioning Architectures
NASA Astrophysics Data System (ADS)
Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien
2015-09-01
The Integrated Modular Avionics (IMA) concept has been industrialized in the aeronautical domain to enable the independent qualification of different application software from different suppliers on the same generic computer, this computer being a single terminal in a deterministic network. This concept made it possible to distribute the different applications efficiently and transparently across the network, accurately sizing the HW equipment to embed on the aircraft through the configuration of the virtual computers and the virtual network. This concept has been studied for the space domain and requirements have been issued [D04], [D05]. Experiments in the space domain have been carried out, at the computer level, through ESA and CNES initiatives [D02], [D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as CPU and memory, and studies on hardware/software interface standardization [D01], showed that for space domain technologies, where I/O components (or IP) do not cover advanced features such as buffering, descriptors or virtualization, the CPU overhead in terms of performance is mainly due to shared interface management in the execution platform and to the high frequency of I/O accesses, the latter leading to an important number of context switches. This paper will present a solution to reduce this execution overhead with an open, modular and configurable controller.
A new approach to configurable primary data collection.
Stanek, J; Babkin, E; Zubov, M
2016-09-01
The formats, semantics and operational rules of data processing tasks in genomics (and health in general) are highly divergent and can rapidly change. In such an environment, the problem of consistent transformation and loading of heterogeneous input data to various target repositories becomes a critical success factor. The objective of the project was to design a new conceptual approach to configurable data transformation, de-identification, and submission of health and genomic data sets. The main motivation was to facilitate automated or human-driven data uploading, as well as consolidation of heterogeneous sources in large genomic or health projects. Modern methods of on-demand specialization of generic software components were applied. For the specification of input-output data and the required data collection activities, we propose a simple data model of flat tables as well as a domain-oriented graphical interface and a portable representation of transformations in XML. Using these methods, the prototype of the Configurable Data Collection System (CDCS) was implemented in the Java programming language with Swing graphical interfaces. The core logic of transformations was implemented as a library of reusable plugins. The solution is implemented as a software prototype for a configurable service-oriented system for semi-automatic data collection, transformation, sanitization and safe uploading to heterogeneous data repositories (CDCS). To address the dynamic nature of data schemas and data collection processes, the CDCS prototype facilitates interactive, user-driven configuration of the data collection process and extends basic functionality with a wide range of third-party plugins. Notably, our solution also allows for the reduction of manual data entry for data originally missing in the output data sets. First experiments and feedback from domain experts confirm that the prototype is flexible, configurable and extensible; runs well on data owners' systems; and is not dependent on vendor standards.
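The plugin idea can be sketched as a chain of reusable transformation steps applied to flat-table rows before upload; the actual CDCS prototype is written in Java, so the Python below is only a conceptual illustration with hypothetical plugin names and fields.

```python
# Conceptual sketch of the plugin idea: each transformation step is a reusable
# callable, and a collection run chains the configured plugins over flat-table
# rows before upload. Plugin names and fields are hypothetical.

def deidentify(row):
    row = dict(row)
    row.pop("patient_name", None)              # drop direct identifiers
    return row

def fill_missing_site(row, default_site="SITE-01"):
    row = dict(row)
    row.setdefault("site", default_site)       # reduce manual data entry
    return row

PIPELINE = [deidentify, fill_missing_site]     # configured per collection task

def run_pipeline(rows, pipeline=PIPELINE):
    for row in rows:
        for step in pipeline:
            row = step(row)
        yield row

rows = [{"patient_name": "Doe, J.", "sample_id": "S-17", "variant": "BRCA1"}]
print(list(run_pipeline(rows)))
```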
Hoang, Long Phi; Biesbroek, Robbert; Tri, Van Pham Dang; Kummu, Matti; van Vliet, Michelle T H; Leemans, Rik; Kabat, Pavel; Ludwig, Fulco
2018-02-24
Climate change and accelerating socioeconomic developments increasingly challenge flood-risk management in the Vietnamese Mekong River Delta-a typical large, economically dynamic and highly vulnerable delta. This study identifies and addresses the emerging challenges for flood-risk management. Furthermore, we identify and analyse response solutions, focusing on meaningful configurations of the individual solutions and how they can be tailored to specific challenges using expert surveys, content analysis techniques and statistical inferences. Our findings show that the challenges for flood-risk management are diverse, but critical challenges predominantly arise from the current governance and institutional settings. The top-three challenges include weak collaboration, conflicting management objectives and low responsiveness to new issues. We identified 114 reported solutions and developed six flood management strategies that are tailored to specific challenges. We conclude that the current technology-centric flood management approach is insufficient given the rapid socioecological changes. This approach therefore should be adapted towards a more balanced management configuration where technical and infrastructural measures are combined with institutional and governance resolutions. Insights from this study contribute to the emerging repertoire of contemporary flood management solutions, especially through their configurations and tailoring to specific challenges.
Orbiting deep space relay station. Volume 3: Implementation plan
NASA Technical Reports Server (NTRS)
Hunter, J. A.
1979-01-01
An implementation plan for the Orbiting Deep Space Relay Station (ODSRS) is described. A comparison of ODSRS life cycle costs to other configuration options meeting future communication requirements is presented.
A data management system to enable urgent natural disaster computing
NASA Astrophysics Data System (ADS)
Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton
2014-05-01
Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision-making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to change, ease of use, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that the deadlines can be met, a fault tolerance manager to increase the reliability of the platform and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in fully catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfil the computation of a natural disaster prediction within stipulated deadlines can thus be realised. References: [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated Management of Networked Systems: Concepts, Architectures, and Their Operational Application, Morgan Kaufmann Publishers, San Francisco, CA, USA, 1999. [2] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, second edition, Springer, New York, NY, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
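As an illustration of the decision the scheduler manager and configuration manager make together, the sketch below picks, among candidate resource/protocol combinations, the most reliable one whose padded transfer estimate still fits the hard deadline. It is not the authors' code; the resources, protocols, timing figures and safety margin are all invented.

# Hedged sketch of deadline-aware selection of a resource and transfer protocol.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Option:
    resource: str
    protocol: str
    est_transfer_s: float   # estimated time to stage the data set
    reliability: float      # assumed probability of completing without a retry

def pick_option(options: List[Option], hard_deadline_s: float,
                margin: float = 1.2) -> Optional[Option]:
    """Return the most reliable option whose padded estimate still meets the deadline."""
    feasible = [o for o in options if o.est_transfer_s * margin <= hard_deadline_s]
    return max(feasible, key=lambda o: o.reliability, default=None)

if __name__ == "__main__":
    candidates = [
        Option("cluster-A", "gridftp", est_transfer_s=540.0, reliability=0.98),
        Option("cluster-B", "https",   est_transfer_s=900.0, reliability=0.99),
    ]
    print("selected:", pick_option(candidates, hard_deadline_s=800.0))

Returning None when nothing is feasible is the signal the fault tolerance manager would need to escalate, since missing the hard deadline makes the computation useless.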
NASA Technical Reports Server (NTRS)
Ghofranian, Siamak (Inventor); Chuang, Li-Ping Christopher (Inventor); Motaghedi, Pejmun (Inventor)
2016-01-01
A method and apparatus for docking a spacecraft. The apparatus comprises elongate members, movement systems, and force management systems. The elongate members are associated with a docking structure for a spacecraft. The movement systems are configured to move the elongate members axially such that the docking structure for the spacecraft moves. Each of the elongate members is configured to move independently. The force management systems connect the movement systems to the elongate members and are configured to limit the force applied by each of the elongate members to a desired threshold during movement of the elongate members.
Scholze, Stefan; Schiefer, Stefan; Partzsch, Johannes; Hartmann, Stephan; Mayr, Christian Georg; Höppner, Sebastian; Eisenreich, Holger; Henker, Stephan; Vogginger, Bernhard; Schüffny, Rene
2011-01-01
State-of-the-art large-scale neuromorphic systems require sophisticated spike event communication between units of the neural network. We present a high-speed communication infrastructure for a waferscale neuromorphic system, based on application-specific neuromorphic communication ICs in a field-programmable gate array (FPGA)-maintained environment. The ICs implement configurable axonal delays, as required for certain types of dynamic processing or for emulating spike-based learning among distant cortical areas. Measurements are presented which show the efficacy of these delays in influencing the behavior of neuromorphic benchmarks. The specialized, dedicated address-event-representation communication in most current systems requires separate, low-bandwidth configuration channels. In contrast, the configuration of the waferscale neuromorphic system is also handled by the digital packet-based pulse channel, which transmits configuration data at the full bandwidth otherwise used for pulse transmission. The overall so-called pulse communication subgroup (ICs and FPGA) delivers a 25-50 times higher event transmission rate than other current neuromorphic communication infrastructures. PMID:22016720
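The configurable axonal delays can be pictured as a per-connection delay table applied to an address-event stream; the sketch below models that behavior in software only (the ICs implement it in hardware), and the connection identifiers and delay values are invented.

# Software-only illustration of configurable axonal delays on an AER spike stream:
# each (source, target) connection carries a configured delay, and events are
# released in arrival-time order.

import heapq
from typing import Dict, List, Tuple

def deliver(spikes: List[Tuple[float, int, int]],
            delay_cfg: Dict[Tuple[int, int], float]) -> List[Tuple[float, int, int]]:
    """spikes: (emit_time, src, dst); returns (arrival_time, src, dst) sorted by arrival."""
    heap: List[Tuple[float, int, int]] = []
    for t, src, dst in spikes:
        heapq.heappush(heap, (t + delay_cfg.get((src, dst), 0.0), src, dst))
    return [heapq.heappop(heap) for _ in range(len(heap))]

if __name__ == "__main__":
    cfg = {(0, 1): 1.5, (0, 2): 4.0}           # delays in milliseconds, invented values
    events = [(0.0, 0, 1), (0.0, 0, 2), (2.0, 0, 1)]
    for arrival, src, dst in deliver(events, cfg):
        print(f"{src}->{dst} arrives at {arrival:.1f} ms")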
Aerodynamic shape optimization of wing and wing-body configurations using control theory
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony
1995-01-01
This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design. In previous studies it was shown that control theory could be used to devise an effective optimization procedure for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. Recently, the method has been implemented for both potential flows and flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can more easily be extended to treat general configurations. Here results are presented both for the optimization of a swept wing using an analytic mapping, and for the optimization of wing and wing-body configurations using a general mesh.
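For readers unfamiliar with the control-theory approach, the relations below sketch the standard adjoint gradient evaluation on which this family of methods relies; the notation is generic rather than copied from the paper.

\begin{align*}
\text{minimize } I(w,\mathcal{F}) \quad \text{subject to } R(w,\mathcal{F}) &= 0,\\
\Big(\frac{\partial R}{\partial w}\Big)^{T}\psi &= \frac{\partial I}{\partial w} \quad \text{(adjoint equation)},\\
\frac{dI}{d\mathcal{F}} &= \frac{\partial I}{\partial \mathcal{F}} - \psi^{T}\frac{\partial R}{\partial \mathcal{F}},
\end{align*}

where w denotes the flow solution, \mathcal{F} the shape (or mapping) variables, R the discretized flow equations, and \psi the adjoint variable; one flow solve and one adjoint solve then yield the complete design gradient, independently of the number of shape variables.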
Search systems and computer-implemented search methods
Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.
2017-03-07
Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.
Search systems and computer-implemented search methods
Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.
2015-12-22
Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.
An Execution Service for Grid Computing
NASA Technical Reports Server (NTRS)
Smith, Warren; Hu, Chaumin
2004-01-01
This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when other tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.
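The paper's own job description format is not reproduced here, so the sketch below is only a hypothetical rendering of the idea: a job as a list of tasks whose starting conditions name the required states of other tasks, including a clean-up task that runs only when the application task fails.

# Hypothetical sketch of tasks with user-defined starting conditions.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    name: str
    command: str
    run_when: Dict[str, str] = field(default_factory=dict)  # e.g. {"stage_in": "SUCCEEDED"}

def ready(task: Task, states: Dict[str, str]) -> bool:
    """A task may start once every named predecessor is in the required state."""
    return all(states.get(dep) == wanted for dep, wanted in task.run_when.items())

job: List[Task] = [
    Task("stage_in", "copy input files to the compute resource"),
    Task("run_app", "execute the application", {"stage_in": "SUCCEEDED"}),
    Task("cleanup_on_failure", "remove partial output", {"run_app": "FAILED"}),
    Task("stage_out", "copy results back", {"run_app": "SUCCEEDED"}),
]

if __name__ == "__main__":
    states = {"stage_in": "SUCCEEDED", "run_app": "FAILED"}
    runnable = [t.name for t in job if t.name not in states and ready(t, states)]
    print(runnable)   # -> ['cleanup_on_failure']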
Tuti, Timothy; Bitok, Michael; Paton, Chris; Makone, Boniface; Malla, Lucas; Muinga, Naomi; Gathara, David; English, Mike
2016-01-01
Objective: To share approaches and innovations adopted to deliver a relatively inexpensive clinical data management (CDM) framework within a low-income setting that aims to deliver quality pediatric data useful for supporting research, strengthening the information culture and informing improvement efforts in local clinical practice. Materials and methods: The authors implemented a CDM framework to support a Clinical Information Network (CIN) using Research Electronic Data Capture (REDCap), a noncommercial software solution designed for rapid development and deployment of electronic data capture tools. It was used for collection of standardized data from case records of multiple hospitals’ pediatric wards. R, an open-source statistical language, was used for data quality enhancement, analysis, and report generation for the hospitals. Results: In the first year of CIN, the authors have developed innovative solutions to support the implementation of a secure, rapid pediatric data collection system spanning 14 hospital sites with stringent data quality checks. Data have been collated on over 37 000 admission episodes, with considerable improvement in clinical documentation of admissions observed. Using meta-programming techniques in R, coupled with branching logic, randomization, data lookup, and Application Programming Interface (API) features offered by REDCap, CDM tasks were configured and automated to ensure quality data was delivered for clinical improvement and research use. Conclusion: A low-cost, clinically focused but geographically dispersed quality CDM (Clinical Data Management) in a long-term, multi-site, and real world context can be achieved and sustained, and challenges can be overcome through thoughtful design and implementation of open-source tools for handling data and supporting research. PMID:26063746
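The network's actual pipeline is written in R against REDCap; purely as a language-neutral illustration of the automated data-quality checks it describes, the Python sketch below flags missing or out-of-range values and summarises them per hospital. The field names and ranges are invented.

# Illustrative per-site data-quality report for pediatric admission records.

from collections import defaultdict
from typing import Dict, List

CHECKS = {
    "weight_kg": lambda v: v is not None and 0.5 <= v <= 60,   # plausible pediatric range
    "diagnosis": lambda v: bool(v),                            # must not be empty
}

def quality_report(records: List[Dict]) -> Dict[str, Dict[str, int]]:
    """Count failed checks per hospital and per field."""
    report = defaultdict(lambda: defaultdict(int))
    for rec in records:
        for field_name, ok in CHECKS.items():
            if not ok(rec.get(field_name)):
                report[rec["hospital"]][field_name] += 1
    return {hospital: dict(fails) for hospital, fails in report.items()}

if __name__ == "__main__":
    admissions = [
        {"hospital": "H01", "weight_kg": 3.2, "diagnosis": "pneumonia"},
        {"hospital": "H01", "weight_kg": None, "diagnosis": ""},
    ]
    print(quality_report(admissions))   # {'H01': {'weight_kg': 1, 'diagnosis': 1}}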
Wang, Fei; Prier, Beth; Bauer, Karri A; Mellett, John
2018-06-01
The development and implementation of a clinical decision support system (CDSS) for pharmacists to use for identification of and intervention on patients with Staphylococcus aureus bacteremia (SAB) are described. A project team consisting of 3 informatics pharmacists and 2 infectious diseases (ID) pharmacists was formed to develop the CDSS. The primary CDSS component was a scoring system that generates a score in real time for a patient with a positive blood culture for S. aureus. In addition, 4 tools were configured in the CDSS to facilitate pharmacists' workflow and documentation tasks: a patient list, a patient list report, a handoff note, and a standardized progress note. Pharmacists are required to evaluate the patient list at least once per shift to identify newly listed patients with a blood culture positive for S. aureus and provide recommendations if necessary. The CDSS was implemented over a period of 2.5 months, with a pharmacy informatics resident dedicating approximately 200 hours in total. An audit showed that the standardized progress note was completed for 100% of the patients, with a mean time to completion of 8.5 hours. Importantly, this initiative can be implemented in hospitals without specialty-trained ID pharmacists. This study provides a framework for future antimicrobial stewardship program initiatives to incorporate pharmacists into the process of providing real-time recommendations. A pharmacist-driven patient scoring system was successfully used to improve adherence to quality performance measures for management of SAB. A pharmacist-driven CDSS can be utilized to assist in the management of SAB. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
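The article's scoring criteria and weights are not reproduced in the abstract, so the sketch below is purely illustrative of how a rule-based, real-time triage score for a patient with a positive S. aureus blood culture might be assembled for pharmacist review; every factor and weight is invented and carries no clinical authority.

# Illustrative-only rule-based score; higher score -> reviewed sooner by the pharmacist.

from typing import Dict

def sab_score(patient: Dict) -> int:
    score = 0
    score += 3 if patient.get("mrsa") else 1                  # invented weight for resistance
    score += 2 if not patient.get("id_consult_ordered") else 0
    score += 2 if not patient.get("repeat_cultures_ordered") else 0
    score += 1 if patient.get("icu") else 0
    return score

if __name__ == "__main__":
    print(sab_score({"mrsa": True, "icu": True}))   # 8 under these invented weights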
Tuti, Timothy; Bitok, Michael; Paton, Chris; Makone, Boniface; Malla, Lucas; Muinga, Naomi; Gathara, David; English, Mike
2016-01-01
To share approaches and innovations adopted to deliver a relatively inexpensive clinical data management (CDM) framework within a low-income setting that aims to deliver quality pediatric data useful for supporting research, strengthening the information culture and informing improvement efforts in local clinical practice. The authors implemented a CDM framework to support a Clinical Information Network (CIN) using Research Electronic Data Capture (REDCap), a noncommercial software solution designed for rapid development and deployment of electronic data capture tools. It was used for collection of standardized data from case records of multiple hospitals' pediatric wards. R, an open-source statistical language, was used for data quality enhancement, analysis, and report generation for the hospitals. In the first year of CIN, the authors have developed innovative solutions to support the implementation of a secure, rapid pediatric data collection system spanning 14 hospital sites with stringent data quality checks. Data have been collated on over 37 000 admission episodes, with considerable improvement in clinical documentation of admissions observed. Using meta-programming techniques in R, coupled with branching logic, randomization, data lookup, and Application Programming Interface (API) features offered by REDCap, CDM tasks were configured and automated to ensure quality data was delivered for clinical improvement and research use. A low-cost clinically focused but geographically dispersed quality CDM (Clinical Data Management) in a long-term, multi-site, and real world context can be achieved and sustained and challenges can be overcome through thoughtful design and implementation of open-source tools for handling data and supporting research. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Liberati, Elisa Giulia; Gorli, Mara; Scaratti, Giuseppe
2015-01-01
The purpose of this paper is to understand how the introduction of a patient-centered model (PCM) in Italian hospitals affects the pre-existent configuration of clinical work and interacts with established intra/inter-professional relationships. Qualitative multi-phase study based on three main sources: health policy analysis, an exploratory interview study with senior managers of eight Italian hospitals implementing the PCM, and an in-depth case study that involved managerial and clinical staff of one Italian hospital implementing the PCM. The introduction of the PCM challenges clinical work and professional relationships, but such challenges are interpreted differently by the organisational actors involved, thus giving rise to two different "narratives of change". The "political narrative" (the views conveyed by formal policies and senior managers) focuses on the power shifts and conflict between nurses and doctors, while the "workplace narrative" (the experiences of frontline clinicians) emphasises the problems linked to the disruption of previous discipline-based inter-professional groups. Medical disciplines, rather than professional groupings, are the main source of identification of doctors and nurses, and represent a crucial aspect of clinicians' professional identity. Although the need for collaboration among medical disciplines is acknowledged, creating multi-disciplinary groups in practice requires the sustaining of new aggregators and binding forces. This study suggests further acknowledgment of the inherent complexity of the political and workplace narratives of change rather than interpreting them as the signal of irreconcilable perspectives between managers and clinicians. By addressing the specific issues regarding which the political and workplace narratives clash, relationship of trust may be developed through which problems can be identified, mutually acknowledged, articulated, and solved.
NASA Astrophysics Data System (ADS)
Bertrand, Régis; Alby, Fernand; Costes, Thierry; Dejoie, Joël; Delmas, Dominique-Roland; Delobette, Damien; Gibek, Isabelle; Gleyzes, Alain; Masson, Françoise; Meyer, Jean-Renaud; Moreau, Agathe; Perret, Lionel; Riclet, François; Ruiz, Hélène; Schiavon, Françoise; Spizzi, Pierre; Viallefont, Pierre; Villaret, Colette
2012-10-01
The French Space Agency (CNES) is currently operating thirteen satellites, among which five are remote sensing satellites. This fleet is composed of two civilian (SPOT) and three military (HELIOS) satellites, and it has recently been completed by the first PLEIADES satellite, which is devoted to both civil and military purposes. The CNES operation board decided to appoint a Working Group (WG) in order to anticipate and tackle issues related to emergency End Of Life (EOL) operations due to unexpected on-board events affecting the satellite. This is of particular interest in the context of the French Law on Space Operations (LSO), which entered into force in December 2010 and states that any satellite operator must demonstrate its capability to control the space vehicle whatever the mission phase, from launch up to the EOL. Indeed, after several years in orbit the satellites may be affected by on-board anomalies which could compromise the implementation of EOL operations, i.e. orbital manoeuvres or platform disposal. Even if automatic recovery actions ensure autonomous reconfigurations on redundant equipment, for instance setting the satellite into a safe mode, it is crucial to anticipate the consequences of failures of every piece of equipment and every function necessary for the EOL operations. For this purpose, the WG has focused on each potential anomaly by analysing its emergency level, the EOL operations potentially inhibited by the failure and the need for on-board software workarounds… The main contribution of the WG consisted in identifying a particular satellite configuration called the "minimal Withdrawal From Service (WFS) configuration". This configuration corresponds to an operational status which involves a redundancy necessary for the EOL operations. Therefore, as soon as a satellite reaches this state, a dedicated steering committee is activated and decides on the future of the satellite with respect to three options: a/. the satellite is considered safe and can continue its mission using the redundancy, b/. the EOL operations must be planned within a mid-term period, or c/. the EOL operations must be implemented as soon as possible by the operational teams. The paper describes this management and operational process, illustrated with case studies of failures on SPOT and PLEIADES satellites corresponding to various emergency situations.
An Implementation Model for Integrated Learning Systems.
ERIC Educational Resources Information Center
Mills, Steven C.; Ragan, Tillman R.
This paper describes the development, validation, and research application of the Computer-Delivered Instruction Configuration Matrix (CDICM), an instrument for evaluating the implementation of Integrated Learning Systems (ILS). The CDICM consists of a 15-item checklist, describing the major components of implementation of ILS technology, to be…
Sens, Brigitte
2010-01-01
The concept of general process orientation as an instrument of organisation development is the core principle of quality management philosophy, i.e. the learning organisation. Accordingly, prestigious quality awards and certification systems focus on process configuration and continual improvement. In German health care organisations, particularly in hospitals, this general process orientation has not been widely implemented yet - despite enormous change dynamics and the requirements of both quality and economic efficiency of health care processes. But based on a consistent process architecture that considers key processes as well as management and support processes, the strategy of excellent health service provision including quality, safety and transparency can be realised in daily operative work. The core elements of quality (e.g., evidence-based medicine), patient safety and risk management, environmental management, health and safety at work can be embedded in daily health care processes as an integrated management system (the "all in one system" principle). Sustainable advantages and benefits for patients, staff, and the organisation will result: stable, high-quality, efficient, and indicator-based health care processes. Hospitals with their broad variety of complex health care procedures should now exploit the full potential of total process orientation. Copyright © 2010. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao
2012-11-01
Energy management (EM) is a core technique for hybrid electric buses (HEB) used to optimize fuel economy, and it is unique to the corresponding powertrain configuration. Existing control strategies seldom take battery power management into account together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to constitute the objective function, which minimizes the fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and the battery. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is used as the controller for hardware-in-the-loop testing integrated with a bench test. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and is well suited to complicated configurations.
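To show the shape of an instantaneous (per-sample-time) equivalent-fuel minimization like the one described, the sketch below searches over engine power for a given power demand using an invented engine fuel map, a fixed battery equivalence factor and a simple SOC penalty; none of these figures come from the paper, and the real strategy also handles the series/parallel mode switch.

# Not the authors' controller: a compact sketch of instantaneous equivalent-fuel
# minimization for the engine/battery power split at one sample time.

import numpy as np

def engine_fuel_rate(p_eng_kw: np.ndarray) -> np.ndarray:
    """Invented convex fuel map [g/s]; the real strategy uses measured engine data."""
    return 0.05 + 0.06 * p_eng_kw + 0.0008 * p_eng_kw ** 2

def split_power(p_demand_kw: float, soc: float,
                s_equiv: float = 0.07,             # assumed g/s per kW of battery power
                p_eng_max: float = 120.0, p_batt_max: float = 40.0) -> float:
    """Return the engine power [kW] minimising the instantaneous equivalent fuel rate."""
    p_eng = np.linspace(max(0.0, p_demand_kw - p_batt_max),
                        min(p_eng_max, p_demand_kw + p_batt_max), 400)
    p_batt = p_demand_kw - p_eng                   # > 0 discharging, < 0 charging
    soc_penalty = 1.0 + 2.0 * (0.6 - soc)          # discourage discharge at low SOC
    cost = engine_fuel_rate(p_eng) + s_equiv * soc_penalty * p_batt
    return float(p_eng[np.argmin(cost)])

if __name__ == "__main__":
    print(f"engine power at SOC 0.7: {split_power(60.0, 0.7):.1f} kW")
    print(f"engine power at SOC 0.4: {split_power(60.0, 0.4):.1f} kW")

With these invented numbers the optimizer leans more heavily on the engine when the state of charge is low, which is the qualitative behaviour a power-balancing strategy is meant to produce.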
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Sharma, Rajdeep
2013-12-10
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr, Charles Franklin; Sharma, Rajdeep
2016-08-30
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Sharma, Rajdeep
2016-08-23
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
Factors Impacting School Closure and Configuration
ERIC Educational Resources Information Center
Corrales, Antonio
2017-01-01
Newly implemented state policy dealing with school finance created several consequences in a school district to include school configuration and restructuring of educational programs. This case describes how a new school finance law changes the entire dynamic of a school district and its newly appointed superintendent. The superintendent…
Assessing the Effects of Multi-Node Sensor Network Configurations on the Operational Tempo
2014-09-01
In the report's signal model, nP is the noise power of the receiver and iL is the implementation loss of the receiver due to hardware manufacturing. The LPISimNet software tool provides the capability to quantify the performance of sensor network configurations.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Development of NETCONF-Based Network Management Systems in Web Services Framework
NASA Astrophysics Data System (ADS)
Iijima, Tomoyuki; Kimura, Hiroyasu; Kitani, Makoto; Atarashi, Yoshifumi
To develop a network management system (NMS) more easily, the authors developed an application programming interface (API) for configuring network devices. Because this API is used in a Java development environment, an NMS can be developed by utilizing the API and other commonly available Java libraries. It is thus possible to easily develop an NMS that is highly compatible with other IT systems. Operations that are generated from the API and exchanged between the NMS and network devices are based on NETCONF, which is standardized by the Internet Engineering Task Force (IETF) as a next-generation network-configuration protocol. Adopting a standardized technology ensures that an NMS developed by using the API can manage network devices from multiple vendors in a unified manner. Furthermore, the configuration items exchanged over NETCONF are specified in an object-oriented design. They are therefore easier to manage than such items in the Management Information Base (MIB), which is defined as the data to be managed by the Simple Network Management Protocol (SNMP). We developed several NMSs by using the API. Evaluation of these NMSs showed that, in terms of configuration time and development time, the NMS developed by using the API performed as well as NMSs developed by using a command line interface (CLI) and SNMP. The NMS developed by using the API showed the feasibility of achieving “autonomic network management” and “high interoperability with IT systems.”
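The paper's API is a Java library, so as an analogous illustration only, the sketch below drives a NETCONF edit-config from Python using the ncclient library (assumed installed); the host, credentials, XML payload and its namespace are placeholders rather than anything taken from the paper or from a standard data model.

# Analogous NETCONF example using ncclient; all device details are placeholders.

from ncclient import manager

CONFIG = """
<config>
  <system xmlns="urn:example:system">
    <hostname>edge-router-01</hostname>
  </system>
</config>
"""

def set_hostname(host: str, user: str, password: str) -> None:
    # NETCONF agents listen on TCP port 830 by default.
    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False) as m:
        m.edit_config(target="running", config=CONFIG)   # structured, transactional change
        print(m.get_config(source="running"))            # read back the datastore

if __name__ == "__main__":
    set_hostname("192.0.2.1", "admin", "admin")

Because the configuration travels as structured XML rather than screen-scraped CLI output, the calling program can validate and diff it, which is the practical advantage the paper attributes to NETCONF over SNMP- and CLI-based management.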
NASA Technical Reports Server (NTRS)
Nagle, Gail; Masotto, Thomas; Alger, Linda
1990-01-01
The need to meet the stringent performance and reliability requirements of advanced avionics systems has frequently led to implementations which are tailored to a specific application and are therefore difficult to modify or extend. Furthermore, many integrated flight critical systems are input/output intensive. By using a design methodology which customizes the input/output mechanism for each new application, the cost of implementing new systems becomes prohibitive. One solution to this dilemma is to design computer systems and input/output subsystems which are general purpose, but which can be easily configured to support the needs of a specific application. The Advanced Information Processing System (AIPS), currently under development, has these characteristics. The design and implementation of the prototype I/O communication system for AIPS is described. AIPS addresses reliability issues related to data communications by the use of reconfigurable I/O networks. When a fault or damage event occurs, communication is restored to functioning parts of the network and the failed or damaged components are isolated. Performance issues are addressed by using a parallelized computer architecture which decouples input/output (I/O) redundancy management and I/O processing from the computational stream of an application. The autonomous nature of the system derives from the highly automated and independent manner in which I/O transactions are conducted for the application, as well as from the fact that the hardware redundancy management is entirely transparent to the application.
Edwards, M.D.
1987-01-01
The Water Resources Division of the U.S. Geological Survey is developing a National Water Information System (NWIS) that will integrate and replace its existing water data and information systems of the National Water Data Storage and Retrieval System, National Water Data Exchange, National Water-Use Information, and Water Resources Scientific Information Center programs. It will be a distributed data system operated as part of the Division's Distributed Information System, which is a network of computers linked together through a national telecommunication network known as GEONET. The NWIS is being developed as a series of prototypes that will be integrated as they are completed to allow the development and implementation of the system in a phased manner. It also is being developed in a distributed manner using personnel who work under the coordination of a central NWIS Project Office. Work on the development of the NWIS began in 1983 and it is scheduled for completion in 1990. This document presents an overall plan for the design, development, implementation, and operation of the system. Detailed discussions are presented on each of these phases of the NWIS life cycle. The planning, quality assurance, and configuration management phases of the life cycle also are discussed. The plan is intended to be a working document for use by NWIS management and participants in its design and development and to assist offices of the Division in planning and preparing for installation and operation of the system. (Author's abstract)
Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.
2006-01-01
The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
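As a numerical illustration of why direct time-locked subsampling works at this Larmor frequency, the short script below (not instrument firmware; the 80 MS/s rate and record length are chosen only for the demo) samples a 300 MHz tone well below Nyquist and shows it folding to a 20 MHz intermediate frequency.

# Bandpass (sub-Nyquist) sampling demo: a narrow band centred at 300 MHz folds into
# the first Nyquist zone of an 80 MS/s sampler, removing the need for an analog
# intermediate-frequency downconversion stage.

import numpy as np

F_LARMOR = 300e6        # Hz, centre of the ~10 MHz-wide EPR signal band
FS = 80e6               # Hz, deliberately sub-Nyquist sampling rate (demo value)
N = 4096

t = np.arange(N) / FS
x = np.cos(2 * np.pi * F_LARMOR * t)          # a tone at the Larmor frequency

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / FS)
alias = freqs[np.argmax(spectrum)]

# 300 MHz folds to |300 - 4*80| = 20 MHz, well inside the 40 MHz first Nyquist zone.
print(f"tone at {F_LARMOR/1e6:.0f} MHz appears at {alias/1e6:.1f} MHz after subsampling")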
Framework GRASP: a routine library for optimized processing of aerosol remote sensing observations
NASA Astrophysics Data System (ADS)
Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian
We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm developed by Dubovik et al. (2011). The framework is a source code project that attempts to strengthen the value of the GRASP inversion algorithm by transforming it into a library that can later be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparing the input and output. The framework provides a number of advantages for utilization of the code. First, it loads data into the core of the scientific code directly from memory, without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initiating the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended with new features. For example, it could accommodate loading raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.
Zoroufchi Benis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza
2015-01-01
Background: Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. Methods: A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. A cluster of contiguous grids that exceed a threshold value is used to calculate the station dosage. Selection of the best AQMN configuration is based on the ratio of a station's dosage to the total dosage in the network. Results: Six monitoring stations were needed to detect the pollutant concentrations around the study area and estimate the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. Conclusion: The optimal AQMN enables authorities to implement an effective air quality management program for protecting human health. PMID:26933646
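The original design code is a MATLAB program and also handles the clustering of contiguous grids; purely to illustrate the dosage-ratio idea, the sketch below greedily adds candidate grids, ranked by the dosage they represent, until the covered share of the total network dosage reaches a target efficiency. All dosage values are invented.

# Simplified greedy selection of monitoring stations by cumulative dosage ratio.

from typing import Dict, List, Tuple

def select_stations(dosage: Dict[str, float], target: float = 0.99) -> List[Tuple[str, float]]:
    """Add grids in decreasing dosage order until covered/total dosage >= target."""
    total = sum(dosage.values())
    chosen, covered = [], 0.0
    for grid, d in sorted(dosage.items(), key=lambda kv: kv[1], reverse=True):
        if covered / total >= target:
            break
        covered += d
        chosen.append((grid, covered / total))
    return chosen

if __name__ == "__main__":
    grid_dosage = {"g07": 410.0, "g12": 380.0, "g03": 150.0,
                   "g21": 40.0, "g30": 15.0, "g44": 5.0}
    for grid, ratio in select_stations(grid_dosage):
        print(f"station at {grid}: cumulative network efficiency {ratio:.2%}")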
NASA Technical Reports Server (NTRS)
1990-01-01
This report contains the individual presentations delivered at the Space Station Evolution Symposium in League City, Texas on February 6, 7, 8, 1990. Personnel responsible for Advanced Systems Studies and Advanced Development within the Space Station Freedom program reported on the results of their work to date. Systems Studies presentations focused on identifying the baseline design provisions (hooks and scars) necessary to enable evolution of the facility to support changing space policy and anticipated user needs. Also emphasized were evolution configuration and operations concepts including on-orbit processing of space transfer vehicles. Advanced Development task managers discussed transitioning advanced technologies to the baseline program, including those near-term technologies which will enhance the safety and productivity of the crew and the reliability of station systems. Special emphasis was placed on applying advanced automation technology to ground and flight systems. This publication consists of two volumes. Volume 1 contains the results of the advanced system studies with the emphasis on reference evolution configurations, system design requirements and accommodations, and long-range technology projections. Volume 2 reports on advanced development tasks within the Transition Definition Program. Products of these tasks include: engineering fidelity demonstrations and evaluations on Station development testbeds and Shuttle-based flight experiments; detailed requirements and performance specifications which address advanced technology implementation issues; and mature applications and the tools required for the development, implementation, and support of advanced technology within the Space Station Freedom Program.
Autonomous Satellite Command and Control Through the World Wide Web. Phase 3
NASA Technical Reports Server (NTRS)
Cantwell, Brian; Twiggs, Robert
1998-01-01
The Automated Space System Experimental Testbed (ASSET) system is a simple yet comprehensive real-world operations network being developed. Phase 3 of the ASSET Project ran from January to December 1997 and is the subject of this report. This phase permitted SSDL and its project partners to expand the ASSET system in a variety of ways. These added capabilities included the advancement of ground station capabilities, the adaptation of spacecraft on-board software, and the expansion of capabilities of the ASSET management algorithms. Specific goals of Phase 3 were: (1) Extend Web-based goal-level commanding for both the payload PI and the spacecraft engineer. (2) Support prioritized handling of multiple Principal Investigators (PIs) as well as associated payload experimenters. (3) Expand the number and types of experiments supported by the ASSET system and its associated spacecraft. (4) Implement more advanced resource management, modeling and fault management capabilities that integrate the space and ground segments of the space system hardware. (5) Implement a beacon monitoring test. (6) Implement an experimental blackboard controller for space system management. (7) Further define typical ground station developments required for Internet-based remote control and for full system automation of the PI-to-spacecraft link. Each of those goals is examined. Significant sections of this report were also published as a conference paper. Several publications produced in support of this grant are included as attachments. Titles include: 1) Experimental Initiatives in Space System Operations; 2) The ASSET Client Interface: Balancing High Level Specification with Low Level Control; 3) Specifying Spacecraft Operations At The Product/Service Level; 4) The Design of a Highly Configurable, Reusable Operating System for Testbed Satellites; 5) Automated Health Operations For The Sapphire Spacecraft; 6) Engineering Data Summaries for Space Missions; and 7) Experiments In Automated Health Assessment And Notification For The Sapphire Microsatellite.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
Development and Performance of an Atomic Interferometer Gravity Gradiometer for Earth Science
NASA Astrophysics Data System (ADS)
Luthcke, S. B.; Saif, B.; Sugarbaker, A.; Rowlands, D. D.; Loomis, B.
2016-12-01
The wealth of multi-disciplinary science achieved from the GRACE mission, the commitment to GRACE Follow On (GRACE-FO), and Resolution 2 from the International Union of Geodesy and Geophysics (IUGG, 2015), highlight the importance to implement a long-term satellite gravity observational constellation. Such a constellation would measure time variable gravity (TVG) with accuracies 50 times better than the first generation missions, at spatial and temporal resolutions to support regional and sub-basin scale multi-disciplinary science. Improved TVG measurements would achieve significant societal benefits including: forecasting of floods and droughts, improved estimates of climate impacts on water cycle and ice sheets, coastal vulnerability, land management, risk assessment of natural hazards, and water management. To meet the accuracy and resolution challenge of the next generation gravity observational system, NASA GSFC and AOSense are currently developing an Atomic Interferometer Gravity Gradiometer (AIGG). This technology is capable of achieving the desired accuracy and resolution with a single instrument, exploiting the advantages of the microgravity environment. The AIGG development is funded under NASA's Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP), and includes the design, build, and testing of a high-performance, single-tensor-component gravity gradiometer for TVG recovery from a satellite in low Earth orbit. The sensitivity per shot is 10-5 Eötvös (E) with a flat spectral bandwidth from 0.3 mHz - 0.03 Hz. Numerical simulations show that a single space-based AIGG in a 326 km altitude polar orbit is capable of exceeding the IUGG target requirement for monthly TVG accuracy of 1 cm equivalent water height at 200 km resolution. We discuss the current status of the AIGG IIP development and estimated instrument performance, and we present results of simulated Earth TVG recovery of the space-based AIGG. We explore the accuracy, and spatial and temporal resolution of surface mass change observations from several space-based implementations of the AIGG instrument, including various orbit configurations and multi-satellite/multi-orbit configurations.
Aerial Networking for the Implementation of Cooperative Control on Small Unmanned Aerial Systems
2013-03-01
the relay aircraft to an optimal location. Secondly, a mesh network was configured and tested. This configuration successfully relayed aircraft...functionality, such as updating navigation waypoints to each aircraft. The results suggest the system be updated with more capable modems in a mesh ...
Distributed Virtual System (DIVIRS) Project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
An Ada programming support environment
NASA Technical Reports Server (NTRS)
Tyrrill, AL; Chan, A. David
1986-01-01
The toolset of an Ada Programming Support Environment (APSE) being developed at North American Aircraft Operations (NAAO) of Rockwell International is described. The APSE is resident on three different hosts and must support developments for the hosts and for embedded targets. Tools and developed software must be freely portable between the hosts. The toolset includes the usual editors, compilers, linkers, debuggers, configuration managers, and documentation tools. Generally, these are being supplied by the host computer vendors. Other tools, for example, a pretty printer, cross referencer, compilation order tool, and management tools, were obtained from public-domain sources, are implemented in Ada, and are being ported to the hosts. Several tools being implemented in-house are of interest; these include an Ada Design Language processor based on compilable Ada. A Standalone Test Environment Generator facilitates test tool construction and partially automates unit-level testing. A Code Auditor/Static Analyzer permits Ada programs to be evaluated against measures of quality. An Ada Comment Box Generator partially automates generation of header comment boxes.
Villa, Stefano; Barbieri, Marta; Lega, Federico
2009-06-01
To make hospitals more patient-centered, it is necessary to intervene on patient flow logistics. The study analyzes three innovative redesign projects implemented at three Italian hospitals. The three hospitals have reorganized patient flow logistics around patient care needs using, as proxies, the expected length of stay and the level of nursing assistance. In order to do this, they have extensively revised their logistical configuration, changing: (1) the organization of wards, (2) the hospital's physical layout, (3) the capacity planning system, and (4) the organizational roles supporting patient flow management. The study describes the changes implemented as well as the results achieved and draws some general lessons that provide useful hints for other hospitals involved in this type of redesign project. The paper ends by discussing some policy implications. In fact, the results achieved in the three cases investigated provide interesting material for further discussion on clinical, operational, and economic issues.
STAR: an integrated solution to management and visualization of sequencing data
Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W.; Ecker, Joseph R.; Millar, A. Harvey; Ren, Bing; Wang, Wei
2013-01-01
Motivation: Easy visualization of complex data features is a necessary step in conducting studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. Results: STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution matched to the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. Availability and implementation: The STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser. Contact: wei-wang@ucsd.edu PMID:24078702
Performance Evaluation of Resource Management in Cloud Computing Environments.
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
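As an illustration of the kind of decision such a module makes, the following Python sketch (not the authors' implementation; thresholds and prices are invented) scales an allocation up when the service-level agreement is at risk and down when resources sit idle, with a corresponding change in price:

```python
# Illustrative sketch of on-the-fly resource scaling against an SLA target;
# thresholds, limits and pricing are invented for this example.
def rescale(allocated_cpus: int, response_time: float, sla_limit: float,
            price_per_cpu: float) -> tuple[int, float]:
    """Return a new CPU allocation and its hourly price."""
    if response_time > sla_limit and allocated_cpus < 64:
        allocated_cpus += 1            # SLA at risk: scale up
    elif response_time < 0.5 * sla_limit and allocated_cpus > 1:
        allocated_cpus -= 1            # over-provisioned: scale down, save cost
    return allocated_cpus, allocated_cpus * price_per_cpu

cpus, cost = rescale(allocated_cpus=4, response_time=2.3, sla_limit=2.0,
                     price_per_cpu=0.05)
print(cpus, cost)   # 5 CPUs at 0.25 per hour
```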
Performance Evaluation of Resource Management in Cloud Computing Environments
Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci
2015-01-01
Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price. PMID:26555730
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
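A minimal sketch of the if-trigger-then-action idea, written in Python with hypothetical names (Rule, dispatch, and the example rule are illustrative, not taken from the paper):

```python
# Minimal sketch of an if-trigger-then-action (IFTA) rule; all names here
# (Rule, dispatch, the example action) are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                      # e.g. "file_created", "file_modified"
    condition: Callable[[str], bool]  # predicate on the file path
    action: Callable[[str], None]     # what to do when the rule fires

def dispatch(event: str, path: str, rules: list[Rule]) -> None:
    """Fire every rule whose trigger and condition match the event."""
    for rule in rules:
        if rule.trigger == event and rule.condition(path):
            rule.action(path)

# Example: when a new .h5 dataset appears, queue it for indexing and replication
rules = [
    Rule(trigger="file_created",
         condition=lambda p: p.endswith(".h5"),
         action=lambda p: print(f"index and replicate {p}")),
]
dispatch("file_created", "/instrument/run42/scan.h5", rules)
```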
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independent of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges for the software agents were to achieve the maximum possible degree of autonomy and to create processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results of the Process Manager system.
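The job-control interface such an agent exposes can be sketched roughly as follows (a Python stand-in for illustration only; the actual system uses C++ agents communicating over CORBA):

```python
# Simplified sketch of the start/stop/status job control a process agent
# provides; purely illustrative, not the ATLAS implementation.
import subprocess

class ProcessAgent:
    def __init__(self):
        self.procs: dict[str, subprocess.Popen] = {}

    def start(self, name: str, cmd: list[str]) -> None:
        self.procs[name] = subprocess.Popen(cmd)

    def status(self, name: str) -> str:
        proc = self.procs.get(name)
        if proc is None:
            return "unknown"
        return "running" if proc.poll() is None else f"exited({proc.returncode})"

    def stop(self, name: str) -> None:
        proc = self.procs.pop(name, None)
        if proc and proc.poll() is None:
            proc.terminate()
            proc.wait()

agent = ProcessAgent()
agent.start("readout", ["sleep", "30"])
print(agent.status("readout"))
agent.stop("readout")
```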
NASA Technical Reports Server (NTRS)
George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)
1998-01-01
A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views are provided of network elements to aid in network management.
NCCDS configuration management process improvement
NASA Technical Reports Server (NTRS)
Shay, Kathy
1993-01-01
By concentrating on defining and improving specific Configuration Management (CM) functions, processes, procedures, personnel selection/development, and tools, internal and external customers received improved CM services. Job performance within the section increased in both satisfaction and output. Participation in achieving major improvements has led to the delivery of consistent quality CM products as well as significant decreases in every measured CM metrics category.
Configuration Management Process Assessment Strategy
NASA Technical Reports Server (NTRS)
Henry, Thad
2014-01-01
Purpose: To propose a strategy for assessing the development and effectiveness of configuration management systems within Programs, Projects, and Design Activities performed by technical organizations and their supporting development contractors. Scope: The CM systems of various entities will be assessed depending on project scope (DDT&E), support services, and acquisition agreements. Approach: A model-based assessment structured against the assessed organization's CM requirements, including best-practice maturity criteria. The model is tailored to the entity being assessed depending on its CM system. The assessment approach provides objective feedback to Engineering and Project Management on the observed CM system maturity state versus the ideal state of the configuration management processes and outcomes (system). • Identifies strengths and risks rather than audit "gotchas" (findings/observations). • Is used recursively and iteratively throughout the program life cycle at select points of need (typical assessment timing is post-PDR/post-CDR). • Ideal-state criteria and maturity targets are reviewed with the assessed entity prior to an assessment (tailoring) and depend on the assessed phase of the CM system. • Supports exit success criteria for Preliminary and Critical Design Reviews. • Gives a comprehensive CM system assessment which ultimately supports configuration verification activities.
National Voice Response System (VRS) Implementation Plan Alternatives Study
DOT National Transportation Integrated Search
1979-07-01
This study examines the alternatives available to implement a national Voice Response System (VRS) for automated preflight weather briefings and flight plan filing. Four major hardware configurations are discussed. A computerized analysis model was d...
Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.
Yu, Theodore; Cauwenberghs, Gert
2009-01-01
We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single-neuron dynamics, single-synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm x 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.
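For reference, the rate-based channel kinetics referred to above take the standard Hodgkin-Huxley first-order form; the notation below is the textbook one, with the chip's 384 parameters setting the voltage dependence of the rate functions:

```latex
% Standard Hodgkin-Huxley first-order gating kinetics (textbook form; the
% chip's configurable parameters set the voltage dependence of alpha and beta).
\frac{dm}{dt} = \alpha_m(V)\,(1-m) - \beta_m(V)\,m, \qquad
I_{\mathrm{Na}} = \bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
```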
Extension of HCDstruct for Transonic Aeroservoelastic Analysis of Unconventional Aircraft Concepts
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Gern, Frank H.
2017-01-01
A substantial effort has been made to implement an enhanced aerodynamic modeling capability in the Higher-fidelity Conceptual Design and structural optimization tool (HCDstruct). This additional capability is needed for a rapid, physics-based method of modeling advanced aircraft concepts at risk of structural failure due to dynamic aeroelastic instabilities. To adequately predict these instabilities, in particular for transonic applications, a generalized aerodynamic matching algorithm was implemented to correct the doublet-lattice model available in Nastran using solution data from a priori computational fluid dynamics analysis. This new capability is demonstrated for two tube-and-wing aircraft configurations, including a Boeing 737-200 for implementation validation and the NASA D8 as a first use case. Results validate the current implementation of the aerodynamic matching utility and demonstrate the importance of using such a method for aircraft configurations featuring fuselage-wing aerodynamic interaction.
NASA Astrophysics Data System (ADS)
Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.
2015-12-01
The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.
Staff views of an opportunistic chlamydia testing pilot in a primary health organisation.
McKernon, Stephen; Azariah, Sunita
2013-12-01
The Auckland chlamydia pilot was one of three pilots funded by the Ministry of Health to trial implementation of the 2008 Chlamydia Management Guidelines prior to national roll-out. To assess what elements in the testing programme pilot worked best for staff and to determine how an opportunistic testing programme could be better configured to meet staff needs and preferences. A staff survey listed key chlamydia testing tasks in chronological order, and service interventions supporting these tasks. Staff were asked to rate each task on its difficulty prior to the pilot, and then on the difference the pilot had made to each task. They were also asked to rate service interventions on their usefulness during the pilot implementation. The survey had a response rate of 94%. The testing tasks posing the greatest difficulties to staff were those involving patient interactions (41%) and management of follow-up (52%). About 70% of staff felt tasks were improved by the pilot. Staff considered the three most useful service interventions to be a chlamydia-specific template created for the practice management system, provision of printed patient resources, and regular team discussions with other staff. A significant proportion of staff reported difficulties with routine tasks required for opportunistic testing for chlamydia, highlighting the need to involve staff during programme design. Practice nurse-led approaches to future opportunistic testing programmes should be considered as nurses had a more positive response to the pilot and nurse-led approaches have been shown to be successful overseas.
Increasing Usability in Ocean Observing Systems
NASA Astrophysics Data System (ADS)
Chase, A. C.; Gomes, K.; O'Reilly, T.
2005-12-01
As observatory systems move to more advanced techniques for instrument configuration and data management, standardized frameworks are being developed to benefit from commodities of scale. ACE (A Configuror and Editor) is a tool that was developed for SIAM (Software Infrastructure and Application for MOOS), a framework for the seamless integration of self-describing plug-and-work instruments into the Monterey Ocean Observing System. As a comprehensive solution, the SIAM infrastructure requires a number of processes to be run to configure an instrument for use within its framework. As solutions move from the lab to the field, the steps needed to implement the solution must be made bulletproof so that they may be used in the field with confidence. Loosely defined command-line interfaces don't always provide enough user feedback, and business logic can be difficult to maintain over a series of scripts. ACE is a tool developed for guiding the user through a number of complicated steps, removing the reliance on command-line utilities and reducing the difficulty of completing the necessary steps, while also preventing operator error and enforcing system constraints. Utilizing the cross-platform nature of the Java programming language, ACE provides a complete solution for deploying an instrument within the SIAM infrastructure without depending on special software being installed on the user's computer. Requirements such as the installation of a Unix emulator for users running Windows machines, and the installation of, and ability to use, a CVS client, have all been removed by providing the equivalent functionality from within ACE. In order to achieve a "one stop shop" for configuring instruments, ACE had to be written to handle a wide variety of functionality, including: compiling Java code, interacting with a CVS server and maintaining client-side CVS information, editing XML, interacting with a server-side database, and negotiating serial port communications through Java. This paper will address the relative tradeoffs of including all the aforementioned functionality in a single tool, its effects on user adoption of the framework (SIAM) it provides access to, as well as further discussion of some of the functionality generally pertinent to data management (XML editing, source code management and compilation, etc.).
Omics Metadata Management Software (OMMS).
Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo
2015-01-01
Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.
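A rough sketch of how a metadata-driven pipeline might wrap one of the bioinformatics executables mentioned above (Python; the sample name, paths and index are hypothetical, and only commonly documented Bowtie2 options are assumed):

```python
# Illustrative sketch of wrapping an external aligner from a metadata-driven
# pipeline, in the spirit of OMMS; paths and sample names are hypothetical.
import subprocess

def run_bowtie2(sample_id: str, index: str, reads: str, out_dir: str) -> str:
    """Align one sample and return the path of the SAM file produced."""
    sam_path = f"{out_dir}/{sample_id}.sam"
    subprocess.run(
        ["bowtie2", "-x", index, "-U", reads, "-S", sam_path],
        check=True,
    )
    return sam_path

# run_bowtie2("sample_001", "refs/ecoli_k12", "fastq/sample_001.fq", "aligned")
```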
Omics Metadata Management Software (OMMS)
Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo
2015-01-01
Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. Availability: The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554
Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie
2014-04-22
A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable media are also disclosed.
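A hypothetical shape for the reservation request the service plane device handles might look like the following sketch (field names are illustrative, not taken from the patent):

```python
# Hypothetical shape of a cross-domain reservation request and a trivial
# admission check; field names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class ReservationRequest:
    src_endsite: str        # e.g. "siteA:dtn01"
    dst_endsite: str        # e.g. "siteB:dtn07"
    bandwidth_mbps: int     # requested wide-area bandwidth
    start: str              # ISO-8601 start time
    end: str                # ISO-8601 end time

def admit(request: ReservationRequest, available_mbps: int) -> bool:
    """Service-plane check against what the management plane reports."""
    return request.bandwidth_mbps <= available_mbps

req = ReservationRequest("siteA:dtn01", "siteB:dtn07", 5000,
                         "2014-05-01T12:00Z", "2014-05-01T18:00Z")
print(admit(req, available_mbps=8000))   # True
```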
A framework for porting the NeuroBayes machine learning algorithm to FPGAs
NASA Astrophysics Data System (ADS)
Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.
2016-01-01
The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation of chosen configurations.
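The kind of configuration sweep such a framework automates can be sketched as follows (plain Python with a placeholder cost model standing in for the HDL simulation; none of the fields or formulas are from the actual NeuroBayes implementation):

```python
# Rough sketch of a configuration sweep over an FPGA design space; the cost
# model below is a placeholder for running the HDL simulation of each variant.
from itertools import product

def evaluate(bit_width: int, pipeline_stages: int) -> dict:
    """Placeholder figures of merit for one hypothetical configuration."""
    throughput = 100.0 * pipeline_stages / bit_width      # arbitrary units
    luts = 50 * bit_width * pipeline_stages               # crude resource estimate
    return {"bits": bit_width, "stages": pipeline_stages,
            "throughput": throughput, "luts": luts}

results = [evaluate(b, s) for b, s in product((8, 12, 16), (2, 4))]
best = max(results, key=lambda r: r["throughput"])
print(best)
```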
Transparent 3D display for augmented reality
NASA Astrophysics Data System (ADS)
Lee, Byoungho; Hong, Jisoo
2012-11-01
Two types of transparent three-dimensional display systems applicable to augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which utilizes the principle of a system adopting a concave floating lens for virtual-mode integral imaging. Such a configuration has the advantage that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real-world scene. Incorporating a convex half mirror, which shows partial transparency, instead of the concave floating lens makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the feature of transparent display by imposing partial transparency on the array of concave mirrors used for the screen of reflection-type integral imaging. Two types of configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce the concave half mirror array, whereas the coherent one adopts a holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is in principle more beneficial than the head-mounted display, the present status of technical advances in spatial light modulators still does not provide satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come to market in sequence.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
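One of the management tasks mentioned above, adaptive scheduling of virtual machines on hypervisor hosts, can be sketched as a simple least-loaded placement rule (illustrative only; host names, loads and capacities are invented):

```python
# Simplified sketch of adaptive placement of a new virtual machine on the
# least-loaded hypervisor host with enough free cores; values are invented.
def pick_host(loads: dict[str, float], required_cores: int,
              free_cores: dict[str, int]) -> str | None:
    """Return the least-loaded host that can still fit the VM."""
    candidates = [h for h in loads if free_cores[h] >= required_cores]
    if not candidates:
        return None                      # queue the VM or trigger live migration
    return min(candidates, key=lambda h: loads[h])

loads = {"hv01": 0.35, "hv02": 0.80, "hv03": 0.10}
cores = {"hv01": 8, "hv02": 2, "hv03": 16}
print(pick_host(loads, required_cores=4, free_cores=cores))   # "hv03"
```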
Clinical Genomics in the World of the Electronic Health Record
Marsolo, Keith; Spooner, S. Andrew
2014-01-01
The widespread adoption of EHRs presents a number of benefits to the field of clinical genomics. These include the ability to return results to the practitioner, the ability to use genetic findings in clinical decision support, and the ability to have data collected in the EHR serve as a source of phenotypic information for analysis purposes. Not all EHRs are created equal, however. They differ in their features, capabilities and ease of use. Therefore, in order to understand the potential of the EHR, it is first necessary to understand its capabilities and the impact that implementation strategy has on usability. Specifically, we focus on the following areas: 1) how the EHR is used to capture data in clinical practice settings; 2) how the implementation and configuration of the EHR affect the quality and availability of data; 3) the management of clinical genetic test results and the feasibility of EHR integration; and 4) the challenges of implementing an EHR in a research-intensive environment. This is followed by a discussion of the minimum functional requirements that an EHR must meet to enable the satisfactory integration of genomic results, as well as the open issues that remain. PMID:23846403
NASA Technical Reports Server (NTRS)
Greer, Lawrence (Inventor)
2017-01-01
An apparatus and a computer-implemented method for generating pulses synchronized to a rising edge of a tachometer signal from rotating machinery are disclosed. For example, in one embodiment, a pulse state machine may be configured to generate a plurality of pulses, and a period state machine may be configured to determine a period for each of the plurality of pulses.
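A conceptual software sketch of the two cooperating state machines described above (arbitrary tick units; the actual invention is a hardware implementation, and the division of work here is only an illustration):

```python
# Conceptual sketch: one state machine measures the tachometer period from
# rising edges, the other schedules a fixed number of pulses per revolution.
class PeriodStateMachine:
    def __init__(self):
        self.last_edge = None
        self.period = None

    def on_rising_edge(self, t: int) -> None:
        if self.last_edge is not None:
            self.period = t - self.last_edge
        self.last_edge = t

class PulseStateMachine:
    def __init__(self, pulses_per_rev: int):
        self.n = pulses_per_rev

    def schedule(self, edge_time: int, period: int) -> list[int]:
        """Return the tick times of the pulses for one revolution."""
        return [edge_time + i * period // self.n for i in range(self.n)]

psm, gen = PeriodStateMachine(), PulseStateMachine(pulses_per_rev=4)
psm.on_rising_edge(1000); psm.on_rising_edge(1800)
print(gen.schedule(1800, psm.period))   # [1800, 2000, 2200, 2400]
```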
Communications Management at the Parks Reserve Forces Training Area, Camp Parks, California
1994-10-31
The overall objective of the audit was to evaluate DoD management of circuit configurations for Defense Switched Network access requirements. The specific objective for this segment of the audit was to determine whether the Army used the most cost effective configuration of base and long haul telecommunications equipment and services at Camp Parks to access the Defense Switched Network.
NASA Technical Reports Server (NTRS)
Keltner, D. J.
1975-01-01
The stowage list and hardware tracking system, a computer based information management system, used in support of the space shuttle orbiter stowage configuration and the Johnson Space Center hardware tracking is described. The input, processing, and output requirements that serve as a baseline for system development are defined.
Braa, Jørn; Kanter, Andrew S.; Lesh, Neal; Crichton, Ryan; Jolliffe, Bob; Sæbø, Johan; Kossi, Edem; Seebregts, Christopher J.
2010-01-01
We address the problem of how to integrate health information systems in low-income African countries in which technical infrastructure and human resources vary wildly within countries. We describe a set of tools to meet the needs of different service areas, including managing aggregate indicators, patient-level record systems, and mobile tools for community outreach. We present the case of Sierra Leone and use this case to motivate and illustrate an architecture that allows us to provide services at each level of the health system (national, regional, facility and community) and provide different configurations of the tools as appropriate for the individual area. Finally, we present a collaborative implementation of this approach in Sierra Leone. PMID:21347003
Energy management - The delayed flap approach
NASA Technical Reports Server (NTRS)
Bull, J. S.
1976-01-01
Flight test evaluation of a Delayed Flap approach procedure intended to provide reductions in noise and fuel consumption is underway using the NASA CV-990 test aircraft. The approach is initiated at a high airspeed (240 kt) and in a drag configuration that allows for low thrust. The aircraft is flown along the conventional ILS glide slope. A Fast/Slow message display signals the pilot when to extend approach flaps, landing gear, and land flaps. Implementation of the procedure in commercial service may require the addition of a DME navigation aid co-located with the ILS glide slope transmitter. The Delayed Flap approach saves 250 lb of fuel over the Reduced Flap approach, with a 95 EPNdB noise contour only 43% as large.
The Telecommunications and Data Acquisition Report
NASA Technical Reports Server (NTRS)
Yuen, Joseph H. (Editor)
1995-01-01
This quarterly publication provides archival reports on developments in programs managed by the JPL Telecommunications and Mission Operations Directorate (TMOD), which now includes the former Telecommunications and Data Acquisition (TDA) Office. In space communications, radio navigation, radio science, and ground-based radio and radar astronomy, it reports on activities of the Deep Space Network (DSN) in planning, supporting research and technology, implementation, and operations. Also included are standards activity at JPL for space data and information systems and reimbursable DSN work performed for other space agencies through NASA. The Orbital Debris Radar Program, funded by the Office of Space Systems Development, makes use of the planetary radar capability when the antennas are configured as science instruments making direct observations of planets, their satellites, and asteroids of our solar system.
Research and realization of key technology in HILS interactive system
NASA Astrophysics Data System (ADS)
Liu, Che; Lu, Huiming; Wang, Fankai
2018-03-01
This paper presents the design of an HILS (Hardware-In-the-Loop Simulation) interactive system based on the xPC platform. Through the interface between C++ and the MATLAB engine, we establish a seamless data connection between Simulink and the interactive system, complete the data interaction between the system and Simulink, and realize the functions of model configuration, parameter modification and offline simulation. We establish data communication between the host and target machines through the TCP/IP protocol to realize model download and real-time simulation. A database is used to store simulation data and to implement real-time simulation monitoring and simulation data management. System functions are integrated using the Qt graphical interface library and dynamic link libraries. Finally, a typical control system is taken as an example to verify the feasibility of the HILS interactive system.
Methods of forming thermal management systems and thermal management methods
Gering, Kevin L.; Haefner, Daryl R.
2012-06-05
A thermal management system for a vehicle includes a heat exchanger having a thermal energy storage material provided therein, a first coolant loop thermally coupled to an electrochemical storage device located within the first coolant loop and to the heat exchanger, and a second coolant loop thermally coupled to the heat exchanger. The first and second coolant loops are configured to carry distinct thermal energy transfer media. The thermal management system also includes an interface configured to facilitate transfer of heat generated by an internal combustion engine to the heat exchanger via the second coolant loop in order to selectively deliver the heat to the electrochemical storage device. Thermal management methods are also provided.
Visual probes and methods for placing visual probes into subsurface areas
Clark, Don T.; Erickson, Eugene E.; Casper, William L.; Everett, David M.
2004-11-23
Visual probes and methods for placing visual probes into subsurface areas in either contaminated or non-contaminated sites are described. In one implementation, the method includes driving at least a portion of a visual probe into the ground using direct push, sonic drilling, or a combination of direct push and sonic drilling. Such is accomplished without providing an open pathway for contaminants or fugitive gases to reach the surface. According to one implementation, the invention includes an entry segment configured for insertion into the ground or through difficult materials (e.g., concrete, steel, asphalt, metals, or items associated with waste), at least one extension segment configured to selectively couple with the entry segment, at least one push rod, and a pressure cap. Additional implementations are contemplated.
Shuttle Propulsion System Major Events and the Final 22 Flights
NASA Technical Reports Server (NTRS)
Owen, James W.
2011-01-01
Numerous lessons have been documented from the Space Shuttle Propulsion elements. Major events include loss of the Solid Rocket Boosters (SRBs) on STS-4 and shutdown of a Space Shuttle Main Engine (SSME) during ascent on STS-51F. On STS-112 only half the pyrotechnics fired during release of the vehicle from the launch pad, a testament to redundancy. STS-91 exhibited freezing of a main combustion chamber pressure measurement, and on STS-93 nozzle tube ruptures necessitated a low-liquid-level oxygen cutoff of the main engines. A number of on-pad aborts were experienced during the early program, resulting in delays. And the two accidents, STS-51L and STS-107, had unique heritage in early program decisions and vehicle configuration. Following STS-51L, significant resources were invested in developing fundamental physical understanding of solid rocket motor environments and material system behavior. And following STS-107, the risk of ascent debris was better characterized and controlled. Situational awareness during all mission phases improved, and the management team instituted effective risk assessment practices. The last 22 flights of the Space Shuttle, following the Columbia accident, were characterized by remarkable improvement in safety and reliability. Numerous problems were solved in addition to reduction of the ascent debris hazard. The Shuttle system, though not as operable as envisioned in the 1970s, successfully assembled the International Space Station (ISS). By the end of the program, the remarkable Space Shuttle Propulsion system achieved very high performance, was largely reusable, exhibited high reliability, and was a heavy-lift Earth-to-orbit propulsion system. During the program a number of project management and engineering processes were implemented and improved. Technical performance, schedule accountability, cost control, and risk management were effectively managed and implemented. Award fee contracting was implemented to provide performance incentives. The Certification of Flight Readiness and Mission Management processes became very effective. A key to the success of the propulsion element projects was the relationship between the MSFC project office and support organizations and their counterpart contractor organizations. The teams worked diligently to understand and satisfy requirements and achieve mission success.
Mukumbang, Ferdinand C; Van Belle, Sara; Marchal, Bruno; Van Wyk, Brian
2016-01-01
Introduction: Suboptimal retention in care and poor treatment adherence are key challenges to antiretroviral therapy (ART) in sub-Saharan Africa. Community-based approaches to HIV service delivery are recommended to improve patient retention in care and ART adherence. The adherence clubs have been implemented in the Western Cape province of South Africa with variable success in terms of implementation and outcomes. The need for operational guidelines for their implementation has been identified. Therefore, understanding the contexts and mechanisms for successful implementation of the adherence clubs is crucial to inform the roll-out to the rest of South Africa. The protocol outlines an evaluation of the adherence club intervention in selected primary healthcare facilities in the metropolitan area of the Western Cape Province, using the realist approach. Methods and analysis: In the first phase, an exploratory study design will be used. Document review and key informant interviews will be used to elicit the programme theory. In phase two, a multiple case study design will be used to describe the adherence clubs at five contrastive sites. Semistructured interviews will be conducted with purposively selected programme implementers and members of the clubs to assess the context and mechanisms of the adherence clubs. For the programme's primary outcomes, a longitudinal retrospective cohort analysis will be conducted using routine patient data. Data analysis will involve classifying emerging themes using the context-mechanism-outcome (CMO) configuration, and refining the primary CMO configurations to conjectured CMO configurations. Finally, we will compare the conjectured CMO configurations from the cases with the initial programme theory. The final CMOs obtained will be translated into middle-range theories. Ethics and dissemination: The study will be conducted according to the principles of the Declaration of Helsinki (1964). Ethics clearance was obtained from the University of the Western Cape. Dissemination will be done through publications and curation. PMID:27044575
Lin, Munan; Liu, Ming; Zhu, Guanghui; Wang, Yanpeng; Shi, Peiyun; Sun, Xuan
2017-08-01
A high voltage pulse generator based on a silicon-controlled rectifier has been designed and implemented for a field reversed configuration experiment. A critical damping circuit is used in the generator to produce the desired pulse waveform. Depending on the load, the rise time of the output trigger signal can be less than 1 μs, and the peak amplitudes of trigger voltage and current are up to 8 kV and 85 A in a single output. The output voltage can be easily adjusted by changing the voltage on a capacitor of the generator. In addition, the generator integrates an electrically floating heater circuit so it is capable of triggering either pseudosparks (TDI-type hydrogen thyratron) or ignitrons. Details of the circuits and their implementation are described in the paper. The trigger generator has successfully controlled the discharging sequence of the pulsed power supply for a field reversed configuration experiment.
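For reference, the critical damping condition for a series RLC discharge loop is the standard result below; the paper's actual component values are not reproduced here:

```latex
% Critical damping condition for a series RLC discharge loop (standard result;
% component values are not taken from the paper).
R_{\mathrm{crit}} = 2\sqrt{\frac{L}{C}}, \qquad
\zeta = \frac{R}{2}\sqrt{\frac{C}{L}} = 1
```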
NASA Astrophysics Data System (ADS)
Lin, Munan; Liu, Ming; Zhu, Guanghui; Wang, Yanpeng; Shi, Peiyun; Sun, Xuan
2017-08-01
A high voltage pulse generator based on a silicon-controlled rectifier has been designed and implemented for a field reversed configuration experiment. A critical damping circuit is used in the generator to produce the desired pulse waveform. Depending on the load, the rise time of the output trigger signal can be less than 1 μs, and the peak amplitudes of trigger voltage and current are up to 8 kV and 85 A in a single output. The output voltage can be easily adjusted by changing the voltage on a capacitor of the generator. In addition, the generator integrates an electrically floating heater circuit so it is capable of triggering either pseudosparks (TDI-type hydrogen thyratron) or ignitrons. Details of the circuits and their implementation are described in the paper. The trigger generator has successfully controlled the discharging sequence of the pulsed power supply for a field reversed configuration experiment.
Fales, B Scott; Levine, Benjamin G
2015-10-13
Methods based on a full configuration interaction (FCI) expansion in an active space of orbitals are widely used for modeling chemical phenomena such as bond breaking, multiply excited states, and conical intersections in small-to-medium-sized molecules, but these phenomena occur in systems of all sizes. To scale such calculations up to the nanoscale, we have developed an implementation of FCI in which electron repulsion integral transformation and several of the more expensive steps in σ vector formation are performed on graphical processing unit (GPU) hardware. When applied to a 1.7 × 1.4 × 1.4 nm silicon nanoparticle (Si72H64) described with the polarized, all-electron 6-31G** basis set, our implementation can solve for the ground state of the 16-active-electron/16-active-orbital CASCI Hamiltonian (more than 100,000,000 configurations) in 39 min on a single NVidia K40 GPU.
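The quoted configuration count can be checked directly: with 16 active electrons (8 alpha and 8 beta) in 16 active orbitals, the determinant space is

```latex
% Size of the CAS(16,16) determinant space quoted in the abstract
% (8 alpha and 8 beta electrons in 16 orbitals):
N_{\mathrm{det}} = \binom{16}{8}^{2} = 12870^{2} = 165{,}636{,}900 \;>\; 10^{8}
```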
Web-based reactive transport modeling using PFLOTRAN
NASA Astrophysics Data System (ADS)
Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.
2017-12-01
Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy-funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and available on-demand HPC computational infrastructure. The web application consists of (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the rationale for different interfaces, implementation choices, as well as the planned path forward.
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony
1996-01-01
This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods (13, 12, 44, 38). The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method (19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9) was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations (39, 25). In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [24]. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.
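The adjoint construction underlying such control-theory-based design can be summarized in generic form (standard notation, not taken from the paper): with flow equations R(w, alpha) = 0, cost function I(w, alpha), flow variables w, and design variables alpha,

```latex
% Generic adjoint gradient construction used in control-theory-based design
% (standard form; notation is not taken from the paper).
\frac{dI}{d\alpha} = \frac{\partial I}{\partial \alpha}
  - \psi^{T}\frac{\partial R}{\partial \alpha},
\qquad \text{where } \left(\frac{\partial R}{\partial w}\right)^{T}\!\psi
  = \left(\frac{\partial I}{\partial w}\right)^{T}
```

One adjoint solve yields the full design gradient, which is why the cost is nearly independent of the number of design variables.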
Aided generation of search interfaces to astronomical archives
NASA Astrophysics Data System (ADS)
Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo
2016-07-01
Astrophysical data provider organizations that host web-based interfaces to provide access to data resources have to cope with possible changes in data management that imply partial rewrites of web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that can set itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited using a graphical interface. When the configuration steps are done, the tool builds a WAR file to allow easy deployment of the application.
Environmental control/life support system for Space Station
NASA Technical Reports Server (NTRS)
Miller, C. W.; Heppner, D. B.; Schubert, F. H.; Dahlhausen, M. J.
1986-01-01
The functional, operational, and design load requirements for the Environmental Control/Life Support System (ECLSS) are described. The ECLSS is divided into two groups: (1) an atmosphere management group and (2) a water and waste management group. The interaction between the ECLSS and the Space Station Habitability System is examined. The cruciform baseline station design, the delta and big T module configuration, and the reference Space Station configuration are evaluated in terms of ECLSS requirements. The distribution of ECLSS equipment in a reference Space Station configuration is studied as a function of initial operating conditions and growth orbit capabilities. The benefits of water electrolysis as a Space Station utility are considered.
Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)
NASA Technical Reports Server (NTRS)
Niewoehner, Kevin R.; Carter, John (Technical Monitor)
2001-01-01
The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.
Land-mobile satellite communication system
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee (Inventor); Rafferty, William (Inventor); Dessouky, Khaled I. (Inventor); Wang, Charles C. (Inventor); Cheng, Unjeng (Inventor)
1993-01-01
A satellite communications system includes an orbiting communications satellite for relaying communications to and from a plurality of ground stations, and a network management center for making connections via the satellite between the ground stations in response to connection requests received via the satellite from the ground stations, the network management center being configured to provide both open-end service and closed-end service. The network management center of one embodiment is configured to provide both types of service according to a predefined channel access protocol that enables the ground stations to request the type of service desired. The channel access protocol may be configured to adaptively allocate channels to open-end service and closed-end service according to changes in the traffic pattern and include a free-access tree algorithm that coordinates collision resolution among the ground stations.
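A toy simulation of binary-tree splitting collision resolution, a simplified (blocked-access) variant of the tree algorithm mentioned above, gives the flavor of how colliding ground stations are resolved (Python, illustrative only):

```python
# Toy simulation of binary-tree splitting collision resolution: colliding
# stations randomly split into two groups; the first group retries next slot,
# the second waits until the first is fully resolved. Illustrative only.
import random

def resolve(stations: list[str], slots: int = 0) -> int:
    """Return the number of slots needed to deliver every station's request."""
    slots += 1                           # one slot for the current attempt
    if len(stations) <= 1:
        return slots                     # idle or successful slot
    left = [s for s in stations if random.random() < 0.5]
    right = [s for s in stations if s not in left]
    slots = resolve(left, slots)         # first sub-group resolved first
    slots = resolve(right, slots)        # then the deferred sub-group
    return slots

random.seed(1)
print(resolve(["gs1", "gs2", "gs3", "gs4"]))
```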
System control of an autonomous planetary mobile spacecraft
NASA Technical Reports Server (NTRS)
Dias, William C.; Zimmerman, Barbara A.
1990-01-01
The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy, and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on Moon or Mars, whereas a reduced version without the planning, schedule adaption and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus for rapidly performing essential on-board schedule adaption functions is introduced.
Flight Crew Responses to the Interval Management Alternative Clearances (IMAC) Experiment
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Wilson, Sara R.; Swieringa, Kurt A.; Roper, Roy D.
2016-01-01
Interval Management Alternative Clearances (IMAC) was a human-in-the-loop simulation experiment conducted to explore the efficacy and acceptability of three IM operations: CAPTURE, CROSS, and MAINTAIN. Two weeks of data collection were conducted, with each week using twelve subject pilots and four subject controllers flying ten high-density arrival scenarios into Denver International Airport. Overall, both the IM operations and procedures were rated very favorably by the flight crew in terms of acceptability, workload, and pilot head down time. However, several critical issues were identified requiring resolution prior to real-world implementation, including the high frequency of IM speed commands, IM speed commands requiring changes to aircraft configuration, and ambiguous IM cockpit displays that did not trigger the intended pilot reaction. The results from this experiment will be used to prepare for a flight test in 2017, and to support the development of an advanced IM concept of operations by the FAA (Federal Aviation Administration) and aviation industry.
NASA Technical Reports Server (NTRS)
Crowley, Sandra L.
2000-01-01
Ubiquitous is a real word. I thank a former Total Quality Coach for my first exposure some years ago to its existence. My version of Webster's dictionary defines ubiquitous as "present, or seeming to be present, everywhere at the same time; omnipresent." While I believe that God is omnipresent, I have come to discover that CM and DM are present everywhere. Oh, yes; I define CM as Configuration Management and DM as either Data or Document Management. Ten years ago, I had my first introduction to the CM world. I had an opportunity to do CM for the Space Station effort at the NASA Lewis Research Center. I learned that CM was a discipline that had four areas of focus: identification, control, status accounting, and verification. I was certified as a CMII graduate and was indoctrinated about clear, concise, and valid. Off I went into a world of entirely new experiences. I was exposed to change requests and change boards first hand. I also learned about implementation of changes, and then of technical and CM requirements.
Thermal management systems and methods
Gering, Kevin L.; Haefner, Daryl R.
2006-12-12
A thermal management system for a vehicle includes a heat exchanger having a thermal energy storage material provided therein, a first coolant loop thermally coupled to an electrochemical storage device located within the first coolant loop and to the heat exchanger, and a second coolant loop thermally coupled to the heat exchanger. The first and second coolant loops are configured to carry distinct thermal energy transfer media. The thermal management system also includes an interface configured to facilitate transfer of heat generated by an internal combustion engine to the heat exchanger via the second coolant loop in order to selectively deliver the heat to the electrochemical storage device. Thermal management methods are also provided.
ERIC Educational Resources Information Center
Turel, Ofir; Zhang, Yi
2010-01-01
Due to the increased importance and usage of self-managed virtual teams, many recent studies have examined factors that affect their success. One such factor that merits examination is the configuration or composition of virtual teams. This article tackles this point by (1) empirically testing trait-configuration effects on virtual team…
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other frameworks) requires users to configure the cloud infrastructure via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job or for a workload consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.
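As a rough illustration of the data-driven allocation idea described in the abstract (not the actual CloudWeaver code), a runtime manager could redistribute workers among operators in proportion to each operator's observed backlog and per-worker throughput. All operator names and numbers below are invented for the example.

```python
def allocate_workers(throughput, backlog, total_workers):
    """Assign workers to operators in proportion to the work remaining.

    throughput: dict operator -> records/sec that one worker can process
    backlog:    dict operator -> records currently queued
    Returns a dict operator -> worker count (at least 1 per busy operator).
    Rounding means the sum may deviate slightly from total_workers.
    """
    # Estimated single-worker completion time is the "demand" signal.
    demand = {op: backlog[op] / max(throughput[op], 1e-9) for op in backlog}
    total_demand = sum(demand.values()) or 1.0
    return {op: max(1, round(total_workers * d / total_demand))
            for op, d in demand.items()}

# Example: a scan feeding a slow join during one execution phase.
print(allocate_workers(
    throughput={"scan": 5000, "join": 800, "aggregate": 2000},
    backlog={"scan": 1_000_000, "join": 4_000_000, "aggregate": 200_000},
    total_workers=16))
```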
A WiFi public address system for disaster management.
Andrade, Nicholas; Palmer, Douglas A; Lenert, Leslie A
2006-01-01
The WiFi Bullhorn is designed to assist emergency workers in the event of a disaster situation by offering a rapidly configurable wireless public address system for disaster sites. The current configuration plays either pre-recorded or custom-recorded messages and utilizes 802.11b networks for communication. Units can be positioned anywhere wireless coverage exists to help manage crowds or to recall first responders from dangerous areas.
A WiFi Public Address System for Disaster Management
Andrade, Nicholas; Palmer, Douglas A.; Lenert, Leslie A.
2006-01-01
The WiFi Bullhorn is designed to assist emergency workers in the event of a disaster situation by offering a rapidly configurable wireless public address system for disaster sites. The current configuration plays either pre-recorded or custom-recorded messages and utilizes 802.11b networks for communication. Units can be positioned anywhere wireless coverage exists to help manage crowds or to recall first responders from dangerous areas. PMID:17238466
Intelligent Sensors for Integrated Systems Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John L.
2008-01-01
IEEE 1451 Smart Sensors contribute to a number of ISHM goals, including cost reduction achieved through: a) improved configuration management (TEDS); and b) plug-and-play re-configuration. Intelligent Sensors are an adaptation of Smart Sensors to include ISHM algorithms; this offers further benefits: a) sensor validation; b) confidence assessment of measurements; and c) distributed ISHM processing. Space-qualified intelligent sensors are possible, subject to considerations of: a) size, mass, and power constraints; and b) bus structure/protocol.
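To make the TEDS-driven plug-and-play idea concrete, the sketch below models a sensor that carries its own electronic datasheet, which an ISHM registry reads to auto-configure the channel. The fields are loosely inspired by the IEEE 1451 TEDS concept; they are illustrative names, not the standard's actual binary template.

```python
from dataclasses import dataclass, asdict

@dataclass
class TransducerTEDS:
    """Illustrative electronic datasheet carried by a smart sensor."""
    manufacturer_id: int
    model_number: str
    serial_number: str
    measurement_type: str      # e.g. "pressure", "temperature"
    units: str
    min_range: float
    max_range: float
    calibration_date: str

def register_sensor(teds: TransducerTEDS, registry: dict) -> None:
    """Plug-and-play step: the ISHM registry learns the channel from the TEDS."""
    registry[teds.serial_number] = asdict(teds)

registry = {}
register_sensor(TransducerTEDS(0x1A2B, "PX-100", "SN-0042", "pressure",
                               "kPa", 0.0, 700.0, "2008-01-15"), registry)
print(registry["SN-0042"]["measurement_type"])
```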
Lighting system with heat distribution face plate
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Li, Ri
2013-09-10
Lighting systems having a light source and a thermal management system are provided. The thermal management system includes synthetic jet devices, a heat sink and a heat distribution face plate. The synthetic jet devices are arranged in parallel to one another and are configured to actively cool the lighting system. The heat distribution face plate is configured to radially transfer heat from the light source into the ambient air.
Mukumbang, Ferdinand C; van Belle, Sara; Marchal, Bruno; van Wyk, Brian
2016-01-01
The antiretroviral adherence club intervention was rolled out in primary health care facilities in the Western Cape province of South Africa to relieve clinic congestion and improve retention in care and treatment adherence in the face of growing patient loads. We adopted the realist evaluation approach to evaluate what aspects of the antiretroviral club intervention work, for what sections of the patient population, and under which community and health systems contexts, to inform guidelines for scaling up of the intervention. In this article, we report on a step towards the development of a programme theory-the assumptions of programme designers and health service managers with regard to how and why the adherence club intervention is expected to achieve its goals and perceptions on how it has done so (or not). We adopted an exploratory qualitative research design. We conducted a document review of 12 documents on the design and implementation of the adherence club intervention, and key informant interviews with 12 purposively selected programme designers and managers. Thematic content analysis was used to identify themes attributed to the programme actors, context, mechanisms, and outcomes. Using the context-mechanism-outcome configurational tool, we provided an explanatory focus on how the adherence club intervention is rolled out and works, guided by the realist perspective. We classified the assumptions of the adherence club designers and managers into the rollout, implementation, and utilisation of the adherence club programme, constructed around the providers, management/operational staff, and patients, respectively. Two rival theories were identified at the patient-perspective level. We used these perspectives to develop an initial programme theory of the adherence club intervention, which will be tested in a later phase. The perspectives of the programme designers and managers provided an important step towards developing an initial programme theory, which will guide our realist evaluation of the adherence club programme in South Africa.
Moving NSDC's Staff Development Standards into Practice: Innovation Configurations. Volume I
ERIC Educational Resources Information Center
National Staff Development Council, 2003
2003-01-01
NSDC's groundbreaking work in developing standards for staff development has now been joined by an equally important book that spells out exactly how those standards would look if they were being implemented by school districts. An Innovation Configuration map is a device that identifies and describes the major components of a new practice--in…
ERIC Educational Resources Information Center
Towndrow, Phillip A.; Fareed, Wan
2015-01-01
This article illustrates how findings from a study of teachers' and students' uses of laptop computers in a secondary school in Singapore informed the development of an Innovation Configuration (IC) Map--a tool for identifying and describing alternative ways of implementing innovations based on teachers' unique feelings, preoccupations, thoughts…
PIMS-Universal Payload Information Management
NASA Technical Reports Server (NTRS)
Elmore, Ralph; McNair, Ann R. (Technical Monitor)
2002-01-01
As the overall manager and integrator of International Space Station (ISS) science payloads and experiments, the Payload Operations Integration Center (POIC) at Marshall Space Flight Center had a critical need to provide an information management system for exchange and management of ISS payload files as well as to coordinate ISS payload-related operational changes. The POIC's information management system has a fundamental requirement to provide secure operational access not only to users physically located at the POIC, but also to provide collaborative access to remote experimenters and International Partners. The Payload Information Management System (PIMS) is a ground based electronic document configuration management and workflow system that was built to service that need. Functionally, PIMS provides the following document management related capabilities: 1. File access control, storage and retrieval from a central repository vault. 2. Collection of supplemental data about files in the vault. 3. File exchange with a PIMS GUI client, or any FTP connection. 4. Placement of files into an FTP-accessible dropbox for pickup by interfacing facilities, including files transmitted for spacecraft uplink. 5. Transmission of email messages to users notifying them of new version availability. 6. Polling of intermediate facility dropboxes for files that will automatically be processed by PIMS. 7. An API that allows other POIC applications to access PIMS information. Functionally, PIMS provides the following Change Request processing capabilities: 1. Ability to create, view, manipulate, and query information about Operations Change Requests (OCRs). 2. An adaptable workflow approval of OCRs with routing through developers, facility leads, POIC leads, reviewers, and implementers. Email messages can be sent to users either involving them in the workflow process or simply notifying them of OCR approval progress. All PIMS document management and OCR workflow controls are coordinated through and routed to individual user's "to do" list tasks. A user is given a task when it is their turn to perform some action relating to the approval of the Document or OCR. The user's available actions are restricted to only functions available for the assigned task. Certain actions, such as review or action implementation by non-PIMS users, can also be coordinated through automated emails.
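A minimal sketch of the kind of adaptable OCR approval routing described above (developer, facility lead, POIC lead, reviewer, implementer, each receiving a "to do" task in turn) might look like the following. The route, class names, and identifiers are assumptions for illustration, not the PIMS implementation.

```python
from collections import deque

# Assumed approval route, following the roles named in the abstract.
DEFAULT_ROUTE = ["developer", "facility_lead", "poic_lead", "reviewer", "implementer"]

class ChangeRequest:
    def __init__(self, ocr_id, route=DEFAULT_ROUTE):
        self.ocr_id = ocr_id
        self.route = deque(route)
        self.history = []

    def current_task(self):
        """Role whose 'to do' list currently holds this OCR, or None if complete."""
        return self.route[0] if self.route else None

    def approve(self, role):
        if role != self.current_task():
            raise PermissionError(f"{role} has no open task for {self.ocr_id}")
        self.history.append((role, "approved"))
        self.route.popleft()
        # A real system would send an e-mail notification of progress here.

ocr = ChangeRequest("OCR-1234")
for role in list(ocr.route):
    ocr.approve(role)
print(ocr.history, ocr.current_task())   # fully approved -> current_task() is None
```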
Jun Liu; Fan Zhang; Huang, He Helen
2014-01-01
Pattern recognition (PR) based on electromyographic (EMG) signals has been developed for multifunctional artificial arms for decades. However, assessment of EMG PR control for daily prosthesis use is still limited. One of the major barriers is the lack of a portable and configurable embedded system to implement the EMG PR control. This paper aimed to design an open and configurable embedded system for EMG PR implementation so that researchers can easily modify and optimize the control algorithms on our platform and test the EMG PR control outside of the lab environment. The open platform was built on an open source embedded Linux Operating System running on a high-performance Gumstix board. Both the hardware and software system framework were openly designed. The system was highly flexible in terms of the number of inputs/outputs and calibration interfaces used. Such flexibility enabled easy integration of our embedded system with different types of commercialized or prototypic artificial arms. Thus far, our system is portable enough for take-home use. Additionally, compared with previously reported embedded systems for EMG PR implementation, our system demonstrated improved processing efficiency and high system precision. Our long-term goals are (1) to develop a wearable and practical EMG PR-based control for multifunctional artificial arms, and (2) to quantify the benefits of EMG PR-based control over conventional myoelectric prosthesis control in a home setting.
Design and Implementation of the CEBAF Element Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-10-01
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as perl, php, and TCL, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
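One common way to achieve the "new element types and properties without changing table structure" behaviour described above is an entity-attribute-value layout. The sketch below shows that general idea with SQLite from Python; the table names, element name, and properties are assumptions for illustration and are not the actual CED schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE element      (id INTEGER PRIMARY KEY, name TEXT UNIQUE,
                               type_id INTEGER REFERENCES element_type(id));
    CREATE TABLE property_def (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
    -- values live in one narrow table, so new properties need no ALTER TABLE
    CREATE TABLE property_val (element_id INTEGER, property_id INTEGER, value TEXT);
""")

def define_type(name, properties):
    cur = conn.execute("INSERT INTO element_type(name) VALUES (?)", (name,))
    type_id = cur.lastrowid
    for p in properties:
        conn.execute("INSERT INTO property_def(type_id, name) VALUES (?, ?)",
                     (type_id, p))
    return type_id

def add_element(name, type_id, values):
    cur = conn.execute("INSERT INTO element(name, type_id) VALUES (?, ?)",
                       (name, type_id))
    for prop, val in values.items():
        (pid,) = conn.execute(
            "SELECT id FROM property_def WHERE type_id=? AND name=?",
            (type_id, prop)).fetchone()
        conn.execute("INSERT INTO property_val VALUES (?, ?, ?)",
                     (cur.lastrowid, pid, str(val)))

# Define a new element type and an instance entirely through data, not DDL.
quad = define_type("Quadrupole", ["length_m", "max_gradient"])
add_element("QUAD-01", quad, {"length_m": 0.3, "max_gradient": 12.5})
print(conn.execute("""SELECT e.name, d.name, v.value
                      FROM property_val v
                      JOIN element e ON e.id = v.element_id
                      JOIN property_def d ON d.id = v.property_id""").fetchall())
```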
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
NASA Astrophysics Data System (ADS)
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-01
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
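For readers unfamiliar with STAPLE, the sketch below gives a simplified expectation-maximization for binary segmentations, estimating a per-rater sensitivity and specificity and a per-voxel posterior truth probability. It is a bare-bones illustration of the algorithm's core idea, not the Computational Radiology Laboratory implementation or its specific configuration, and it omits spatial priors; the toy delineations are invented.

```python
import numpy as np

def staple_binary(decisions, n_iter=50):
    """Simplified STAPLE for binary segmentations.

    decisions: (R, N) array of 0/1 labels from R raters over N voxels.
    Returns (W, p, q): posterior truth probability per voxel, and the
    estimated sensitivity p and specificity q of each rater.
    """
    D = np.asarray(decisions, dtype=float)
    R, N = D.shape
    gamma = D.mean()              # prior probability a voxel is in the structure (kept fixed)
    p = np.full(R, 0.9)           # initial sensitivities
    q = np.full(R, 0.9)           # initial specificities
    for _ in range(n_iter):
        # E-step: posterior that each voxel truly belongs to the structure
        a = gamma * np.prod(p[:, None] ** D * (1 - p[:, None]) ** (1 - D), axis=0)
        b = (1 - gamma) * np.prod((1 - q[:, None]) ** D * q[:, None] ** (1 - D), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Three manual delineations of a 10-voxel profile (toy example).
raters = np.array([[0, 0, 1, 1, 1, 1, 1, 0, 0, 0],
                   [0, 1, 1, 1, 1, 1, 0, 0, 0, 0],
                   [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]])
W, p, q = staple_binary(raters)
print(np.round(W, 2), np.round(p, 2), np.round(q, 2))
```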
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-21
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
Schinegger, Rafaela; Pucher, Matthias; Aschauer, Christiane; Schmutz, Stefan
2018-03-01
This work addresses multiple human stressors and their impacts on fish assemblages of the Drava and Mura rivers in southern Austria. The impacts of single and multiple human stressors on riverine fish assemblages in these basins were disentangled, based on an extensive dataset. Stressor configuration, i.e. various metrics of multiple stressors belonging to the stressor groups hydrology, morphology, connectivity, and water quality, was investigated for the first time at the river basin scale in Austria. As biological response variables, the Fish Index Austria (FIA) and its related single metrics, as well as the WFD biological and total status, were investigated. Stressor-response analysis shows divergent results, but a general trend of decreasing ecological integrity with increasing number of stressors and maximum stressor is observed. Fish metrics based on age structure, fish region index, and biological status responded best to single stressors and/or their combinations. The knowledge gained in this work provides a basis for advanced investigations in Alpine river basins and beyond, supports WFD implementation, and helps prioritize further actions towards multi-stressor restoration and management. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Life Support Systems Microbial Challenges
NASA Technical Reports Server (NTRS)
Roman, Monsi C.
2010-01-01
Many microbiological studies were performed during the development of the Space Station Water Recovery and Management System from 1990-2009. Studies include assessments of: (1) bulk phase (planktonic) microbial populations, (2) biofilms, (3) microbially influenced corrosion, and (4) biofouling treatments. This slide presentation summarizes the studies performed to assess the bulk phase microbial community during the Space Station Water Recovery Tests (WRT) from 1990 to 1998. This report provides an overview of some of the microbiological analyses performed during the Space Station WRT program. These tests not only integrated several technologies with the goal of producing water that met NASA's potable water specifications, but also integrated humans, and therefore human flora, into the protocols. At the time these tests were performed, not much was known (or published) about the microbial composition of these types of wastewater. It is important to note that design changes to the WRS have been implemented over the years and results discussed in this report might be directly related to test configurations that were not chosen for the final flight configuration. Results from the microbiological analyses performed during the WRT showed that it was possible to recycle water from different sources, including urine, and produce water that can exceed the quality of municipally produced water.
Distributed network management in the flat structured mobile communities
NASA Astrophysics Data System (ADS)
Balandina, Elena
2005-10-01
Delivering proper management to flat structured mobile communities is crucial for improving user experience and increasing application diversity in mobile networks. Available P2P applications perform application-centric management, but this cannot replace network-wide management, especially when a number of different applications are used simultaneously in the network. Network-wide management is the key element required for a smooth transition from standalone P2P applications to self-organizing mobile communities that maintain various services with quality and security guarantees. Classical centralized network management solutions are not applicable in flat structured mobile communities due to the decentralized nature and high mobility of the underlying networks. The basic network management tasks also have to be revised to take the specifics of flat structured mobile communities into account. Network performance management becomes more dependent on the nodes' current context, which also requires extending the configuration management functionality. Fault management has to account for the high mobility of the network nodes. Performance and accounting management are mainly targeted at maintaining efficient and fair access to the resources within the community; however, they also allow unbalanced resource use by nodes that explicitly permit it, e.g. as a voluntary donation to the community or for professional (commercial) reasons. Security management must implement new trust models based on community feedback, professional authorization, or a mix of both. To address these and other specifics of flat structured mobile communities, a new network management solution is needed. The paper presents a distributed network management solution for flat structured mobile communities and points out possible network management roles for the different parties (e.g. operators, service-providing hubs/super nodes, etc.) involved in a service-providing chain.
Introduction of Service Systems Implementation
NASA Astrophysics Data System (ADS)
Demirkan, Haluk; Spohrer, James C.; Krishna, Vikas
Service systems can range from an individual to a firm to an entire nation. They can also be nested and composed of other service systems. They are configurations of people, information, technology, and organizations that co-create value between a service customer and a provider (Maglio et al. 2006; Spohrer et al. 2007). While these configurations can take many, potentially infinite, forms, they can be optimized for the subject service to eliminate unnecessary costs in the form of redundancies, over-allocation, etc. So what is an ideal configuration that a provider and a customer might strive to achieve? As much as it would be nice to have a formula for such configurations, the experiences that result from engagement are very different for each value co-creation configuration. The variances and dynamism of customer-provider engagements result in potentially infinite types and numbers of configurations in today's global economy.
Tian, Gui Yun; Gao, Yunlai; Li, Kongjing; Wang, Yizhe; Gao, Bin; He, Yunze
2016-06-08
This paper reviews recent developments of eddy current pulsed thermography (ECPT) for material characterization and nondestructive evaluation (NDE). Because line-coil-based ECPT, limited by non-uniform heating and a restricted view, is not suitable for evaluating complex geometry structures, Helmholtz-coil and ferrite-yoke-based excitation configurations of ECPT are proposed and compared. Simulations and experiments on the new ECPT configurations, considering the multi-physical phenomena of hysteresis losses, stray losses, and eddy current heating in conjunction with a uniform induction magnetic field, have been conducted for ferromagnetic and non-ferromagnetic materials. These ECPT configurations for metallic material and defect characterization are discussed and compared with the conventional line-coil configuration. The results indicate that the proposed ECPT excitation configurations can be applied to samples of different shapes, such as turbine blade edges and rail tracks.
ERIC Educational Resources Information Center
Fernando, Sheara
2010-01-01
The success of an implementation effort depends on the ability for a system to utilize the innovation effectively; the effective usage of an innovation can be determined by monitoring for program integrity and fidelity, and assessing the degree to which the program implementation matches the intended plan (Fixsen, Blase, Horner, & Sugai 2007). The…
Birken, Sarah A; Lee, Shoou-Yih Daniel; Weiner, Bryan J; Chin, Marshall H; Chiu, Michael; Schaefer, Cynthia T
2015-01-01
Evidence suggests that top managers' support influences middle managers' commitment to innovation implementation. What remains unclear is how top managers' support influences middle managers' commitment. Results may be used to improve dismal rates of innovation implementation. We used a mixed-method sequential design. We surveyed (n = 120) and interviewed (n = 16) middle managers implementing an innovation intended to reduce health disparities in 120 U.S. health centers to assess whether top managers' support directly influences middle managers' commitment; by allocating implementation policies and practices; or by moderating the influence of implementation policies and practices on middle managers' commitment. For quantitative analyses, multivariable regression assessed direct and moderated effects; a mediation model assessed mediating effects. We used template analysis to assess qualitative data. We found support for each hypothesized relationship: Results suggest that top managers increase middle managers' commitment by directly conveying to middle managers that innovation implementation is an organizational priority (β = 0.37, p = .09); allocating implementation policies and practices including performance reviews, human resources, training, and funding (bootstrapped estimate for performance reviews = 0.09; 95% confidence interval [0.03, 0.17]); and encouraging middle managers to leverage performance reviews and human resources to achieve innovation implementation. Top managers can demonstrate their support directly by conveying to middle managers that an initiative is an organizational priority, allocating implementation policies and practices such as human resources and funding to facilitate innovation implementation, and convincing middle managers that innovation implementation is possible using available implementation policies and practices. Middle managers may maximize the influence of top managers' support on their commitment by communicating with top managers about what kind of support would be most effective in increasing their commitment to innovation implementation.
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
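As a rough illustration of the clustering-based partitioning approach mentioned above (and not a NASA algorithm), a toy k-means over aircraft positions can produce candidate sectors, with the aircraft count per cluster serving as a crude proxy for controller workload balance. All numbers are invented.

```python
import numpy as np

def kmeans_sectors(positions, k, n_iter=100, seed=0):
    """Partition aircraft positions (N, 2) into k candidate sectors with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = positions[rng.choice(len(positions), k, replace=False)]
    for _ in range(n_iter):
        # assign every aircraft to the nearest sector center
        d = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([positions[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

aircraft = np.random.default_rng(1).uniform(0, 100, size=(200, 2))  # x/y positions
labels, centers = kmeans_sectors(aircraft, k=4)
print([int((labels == j).sum()) for j in range(4)])  # aircraft count per sector
```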
A mixed-signal implementation of a polychronous spiking neural network with delay adaptation
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2014-01-01
We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits. PMID:24672422
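The delay-adaptation idea can be illustrated with a toy rule that nudges each axonal delay so that its spike arrives at the neuron's observed firing time, which over repeated presentations aligns a spatio-temporal input pattern. This is purely an illustrative sketch of delay plasticity in general, not the authors' Spike Timing Dependent Delay Plasticity circuit or learning rule; all values are invented.

```python
def adapt_delays(delays, presyn_spike_times, postsyn_spike_time, lr=0.1):
    """Nudge each axonal delay so its spike arrives when the neuron fires.

    delays:             current axonal delays (ms), one per input axon
    presyn_spike_times: spike emission time of each presynaptic neuron (ms)
    postsyn_spike_time: observed firing time of the postsynaptic neuron (ms)
    """
    new_delays = []
    for d, t_pre in zip(delays, presyn_spike_times):
        arrival = t_pre + d
        # positive error -> spike arrived too early -> lengthen the delay
        error = postsyn_spike_time - arrival
        new_delays.append(max(0.0, d + lr * error))
    return new_delays

delays = [1.0, 5.0, 3.0]
for _ in range(50):   # repeated presentations of the same spatio-temporal pattern
    delays = adapt_delays(delays, presyn_spike_times=[0.0, 2.0, 4.0],
                          postsyn_spike_time=8.0)
print([round(d, 2) for d in delays])   # converges toward [8.0, 6.0, 4.0]
```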
A mixed-signal implementation of a polychronous spiking neural network with delay adaptation.
Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan C; van Schaik, André
2014-01-01
We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.
Applications and requirements for real-time simulators in ground-test facilities
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Blech, Richard A.
1986-01-01
This report relates simulator functions and capabilities to the operation of ground test facilities, in general. The potential benefits of having a simulator are described to aid in the selection of desired applications for a specific facility. Configuration options for integrating a simulator into the facility control system are discussed, and a logical approach to configuration selection based on desired applications is presented. The functional and data path requirements to support selected applications and configurations are defined. Finally, practical considerations for implementation (i.e., available hardware and costs) are discussed.
CONFU: Configuration Fuzzing Testing Framework for Software Vulnerability Detection
Dai, Huning; Murphy, Christian; Kaiser, Gail
2010-01-01
Many software security vulnerabilities only reveal themselves under certain conditions, i.e., particular configurations and inputs together with a certain runtime environment. One approach to detecting these vulnerabilities is fuzz testing. However, typical fuzz testing makes no guarantees regarding the syntactic and semantic validity of the input, or of how much of the input space will be explored. To address these problems, we present a new testing methodology called Configuration Fuzzing. Configuration Fuzzing is a technique whereby the configuration of the running application is mutated at certain execution points, in order to check for vulnerabilities that only arise in certain conditions. As the application runs in the deployment environment, this testing technique continuously fuzzes the configuration and checks “security invariants” that, if violated, indicate a vulnerability. We discuss the approach and introduce a prototype framework called ConFu (CONfiguration FUzzing testing framework) for implementation. We also present the results of case studies that demonstrate the approach’s feasibility and evaluate its performance. PMID:21037923
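A minimal sketch of the Configuration Fuzzing idea follows: mutate the running application's configuration at execution points and check a security invariant after each mutation, reporting any configuration that violates it. The configuration keys, the invariant, and the function names are made up for illustration; this is not the ConFu framework's API.

```python
import copy
import random

DEFAULT_CONFIG = {"allow_anonymous": False, "max_upload_mb": 10, "admin_mode": False}

def mutate(config, rng):
    """Flip or perturb one randomly chosen configuration option."""
    cfg = copy.deepcopy(config)
    key = rng.choice(list(cfg))
    cfg[key] = (not cfg[key]) if isinstance(cfg[key], bool) else rng.randint(0, 10_000)
    return cfg

def security_invariant(config, user):
    """Invariant: anonymous users must never end up with admin rights."""
    is_admin = config["admin_mode"] or user == "root"
    return not (config["allow_anonymous"] and is_admin)

def fuzz(trials=1000, seed=0):
    rng = random.Random(seed)
    cfg = copy.deepcopy(DEFAULT_CONFIG)
    violations = []
    for i in range(trials):
        cfg = mutate(cfg, rng)                    # mutation at an execution point
        if not security_invariant(cfg, user="anonymous"):
            violations.append((i, copy.deepcopy(cfg)))   # report vulnerable configuration
    return violations

print(len(fuzz()), "invariant violations found")
```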
Knowledge information management toolkit and method
Hempstead, Antoinette R.; Brown, Kenneth L.
2006-08-15
A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Sunkel, John W.
1990-01-01
An attitude-control and momentum-management (ACMM) system for the Space Station in a large-angle torque-equilibrium-attitude (TEA) configuration is developed analytically and demonstrated by means of numerical simulations. The equations of motion for a rigid-body Space Station model are outlined; linearized equations for an arbitrary TEA (resulting from misalignment of control and body axes) are derived; the general requirements for an ACMM are summarized; and a pole-placement linear-quadratic regulator solution based on scheduled gains is proposed. Results are presented in graphs for (1) simulations based on configuration MB3 (showing the importance of accounting for the cross-inertia terms in the TEA estimate) and (2) simulations of a stepwise change from configuration MB3 to the 'assembly complete' stage over 130 orbits (indicating that the present ACMM scheme maintains sufficient control over slowly varying Space Station dynamics).
Design and Implementation of Decoy Enhanced Dynamic Virtualization Networks
2016-12-12
Final report, reporting period 07/01/2015-08/31/2016. Major goals: the relatively static configurations of networks and
Configuration Management (CM) Support for KM Processes at NASA/Johnson Space Center (JSC)
NASA Technical Reports Server (NTRS)
Cioletti, Louis
2010-01-01
Collection and processing of information are critical aspects of every business activity, from raw data to information to an executable decision. Configuration Management (CM) supports KM practices through its automated business practices and its integrated operations within the organization. This presentation delivers an overview of the JSC/Space Life Sciences Directorate (SLSD) and its methods to encourage innovation through collaboration and participation. Specifically, this presentation illustrates how SLSD CM creates an embedded KM activity with an established IT platform to control and update baselines, requirements, documents, schedules, and budgets while tracking changes, essentially managing critical knowledge elements.
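As a toy illustration of the change-controlled baseline idea described above (not the SLSD platform itself; document numbers and field names are invented), a baseline can be modelled so that every update is applied only through a recorded change:

```python
import datetime

class ControlledBaseline:
    """Toy change-controlled baseline: every update goes through a change record."""

    def __init__(self, items):
        self.items = dict(items)      # e.g. document number -> revision
        self.change_log = []

    def apply_change(self, item, new_rev, rationale, approved_by):
        record = {
            "item": item,
            "old": self.items.get(item),
            "new": new_rev,
            "rationale": rationale,
            "approved_by": approved_by,
            "when": datetime.datetime.now().isoformat(timespec="seconds"),
        }
        self.items[item] = new_rev      # the baseline changes only via a recorded change
        self.change_log.append(record)
        return record

baseline = ControlledBaseline({"SLSD-REQ-001": "A", "SLSD-SCH-014": "C"})
baseline.apply_change("SLSD-REQ-001", "B", "Updated crew time requirement", "CM board")
print(baseline.items, len(baseline.change_log))
```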
Uncovering middle managers' role in healthcare innovation implementation
2012-01-01
Background: Middle managers have received little attention in extant health services research, yet they may have a key role in healthcare innovation implementation. The gap between evidence of effective care and practice may be attributed in part to poor healthcare innovation implementation. Investigating middle managers' role in healthcare innovation implementation may reveal an opportunity for improvement. In this paper, we present a theory of middle managers' role in healthcare innovation implementation to fill the gap in the literature and to stimulate research that empirically examines middle managers' influence on innovation implementation in healthcare organizations. Discussion: Extant healthcare innovation implementation research has primarily focused on the roles of physicians and top managers. Largely overlooked is the role of middle managers. We suggest that middle managers influence healthcare innovation implementation by diffusing information, synthesizing information, mediating between strategy and day-to-day activities, and selling innovation implementation. Summary: Teamwork designs have become popular in healthcare organizations. Because middle managers oversee these team initiatives, their potential to influence innovation implementation has grown. Future research should investigate middle managers' role in healthcare innovation implementation. Findings may aid top managers in leveraging middle managers' influence to improve the effectiveness of healthcare innovation implementation. PMID:22472001
Uncovering middle managers' role in healthcare innovation implementation.
Birken, Sarah A; Lee, Shoou-Yih Daniel; Weiner, Bryan J
2012-04-03
Middle managers have received little attention in extant health services research, yet they may have a key role in healthcare innovation implementation. The gap between evidence of effective care and practice may be attributed in part to poor healthcare innovation implementation. Investigating middle managers' role in healthcare innovation implementation may reveal an opportunity for improvement. In this paper, we present a theory of middle managers' role in healthcare innovation implementation to fill the gap in the literature and to stimulate research that empirically examines middle managers' influence on innovation implementation in healthcare organizations. Extant healthcare innovation implementation research has primarily focused on the roles of physicians and top managers. Largely overlooked is the role of middle managers. We suggest that middle managers influence healthcare innovation implementation by diffusing information, synthesizing information, mediating between strategy and day-to-day activities, and selling innovation implementation. Teamwork designs have become popular in healthcare organizations. Because middle managers oversee these team initiatives, their potential to influence innovation implementation has grown. Future research should investigate middle managers' role in healthcare innovation implementation. Findings may aid top managers in leveraging middle managers' influence to improve the effectiveness of healthcare innovation implementation.
The Advanced Communication Technology Satellite and ISDN
NASA Technical Reports Server (NTRS)
Lowry, Peter A.
1996-01-01
This paper depicts the Advanced Communication Technology Satellite (ACTS) system as a global central office switch. The ground portion of the system is the collection of earth stations or T1-VSAT's (T1 very small aperture terminals). The control software for the T1-VSAT's resides in a single CPU. The software consists of two modules, the modem manager and the call manager. The modem manager (MM) controls the RF modem portion of the T1-VSAT. It processes the orderwires from the satellite or from signaling generated by the call manager (CM). The CM controls the Recom Laboratories MSPs by receiving signaling messages from the stacked MSP shelves or units and sending appropriate setup commands to them. There are two methods used to set up and process calls in the CM: first, by dialing up a circuit using a standard telephone handset; and second, by using an external processor connected to the CPU's second COM port to send and receive signaling orderwires. It is the use of the external processor which permits the ISDN (Integrated Services Digital Network) Signaling Processor to implement ISDN calls. In August 1993, the initial testing of the ISDN Signaling Processor was carried out at ACTS System Test at Lockheed Marietta, Princeton, NJ using the spacecraft in its test configuration on the ground.
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
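For readers unfamiliar with the PSO engine concept, the sketch below shows a generic particle swarm optimization loop in numpy, using the same particle and iteration counts quoted in the abstract. It is a minimal, single-threaded illustration with a toy objective standing in for a dose-based cost function, not the GPU-parallelized 4D-IMRT implementation described here.

```python
import numpy as np

def pso(objective, dim, n_particles=50, n_iter=25, bounds=(-5.0, 5.0), seed=0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()               # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy quadratic objective standing in for a dose-based cost function.
best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=10)
print(best_f)
```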
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-16
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
Flood Impact Modelling and Natural Flood Management
NASA Astrophysics Data System (ADS)
Owen, Gareth; Quinn, Paul; ODonnell, Greg
2016-04-01
Local implementation of Natural Flood Management (NFM) methods is now being proposed in many flood schemes. In principle it offers a cost-effective solution to a number of catchment-based problems, as NFM tackles both flood risk and WFD issues. However, within larger catchments there is the issue of which subcatchments to target first and how much NFM to implement. If each catchment has its own configuration of subcatchments and rivers, how can the issues of flood synchronisation and strategic investment be addressed? In this study we will show two key aspects to resolving these issues. Firstly, a multi-scale network of water level recorders is placed throughout the system to capture the flow concentration and travel time operating in the catchment being studied. The second is a Flood Impact Model (FIM), a subcatchment-based model that can generate runoff in any location using any hydrological model. The key aspect of the model is that it has a function to represent the impact of NFM in any subcatchment and the ability to route that flood wave to the outfall. This function allows a realistic representation of the synchronisation issues for that catchment. By running the model in interactive mode the user can define an appropriate scheme that minimises or removes the risk of synchronisation and gives confidence that the NFM investment is having a good level of impact downstream in large flood events.
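A toy illustration of the synchronisation issue the FIM addresses: each subcatchment's hydrograph is lagged by its travel time and summed at the outfall, and NFM storage both attenuates and slightly delays the local peak. This is a didactic sketch only, not the FIM; all numbers and the triangular hydrograph shape are invented.

```python
import numpy as np

def outfall_hydrograph(subcatchments, n_steps=48):
    """Sum lagged subcatchment hydrographs at the catchment outfall.

    Each subcatchment is (peak_flow, time_to_peak, travel_time, nfm_factor):
    nfm_factor < 1 attenuates the local peak and adds one step of delay,
    mimicking storage from Natural Flood Management features.
    """
    t = np.arange(n_steps, dtype=float)
    total = np.zeros(n_steps)
    for peak, t_peak, travel, nfm in subcatchments:
        delay = travel + (1 if nfm < 1.0 else 0)
        # simple triangular hydrograph, lagged by the travel time (plus NFM delay)
        local = np.maximum(0.0, peak * (1 - np.abs(t - (t_peak + delay)) / t_peak))
        total += nfm * local
    return total

without_nfm = outfall_hydrograph([(10, 6, 0, 1.0), (8, 5, 4, 1.0), (6, 4, 8, 1.0)])
with_nfm    = outfall_hydrograph([(10, 6, 0, 1.0), (8, 5, 4, 0.7), (6, 4, 8, 0.7)])
print(round(without_nfm.max(), 1), round(with_nfm.max(), 1))   # peak flow at the outfall
```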
NASA Astrophysics Data System (ADS)
Menguy, Theotime
Because of its critical nature, the avionics industry is bound by numerous constraints, such as security standards and certifications, while having to fulfill clients' desires for personalization. In this context, variability management is a very important issue for re-engineering projects of avionic software. In this thesis, we propose a new approach, based on formal concept analysis and the semantic web, to support variability management. The first goal of this research is to identify characteristic behaviors and interactions of configuration variables in a dynamically configured system. To identify such elements, we used formal concept analysis on different levels of abstraction in the system and defined new metrics. Then, we built a classification of the configuration variables and their relations in order to enable quick identification of a variable's behavior in the system. This classification could help define a systematic approach to processing variables during a re-engineering operation, depending on their category. To better understand the system, we also studied the code controls shared between configuration variables. A second objective of this research is to build a knowledge platform to gather the results of all the analyses performed and to store any additional element relevant to the variability management context, for instance new results that help define a re-engineering process for each of the categories. To address this goal, we built a solution based on the semantic web, defining a new, very extensive ontology that enables building inferences related to the evolution processes. The approach presented here is, to the best of our knowledge, the first classification of configuration variables of a dynamically configured software system and an original use of documentation and variability management techniques based on the semantic web in the aeronautic field. The analyses performed and the final results show that formal concept analysis is a way to identify specific properties and behaviors and that the semantic web is a good solution for storing and exploring the results. However, the use of formal concept analysis with new boolean relations, such as the link between configuration variables and files, and the definition of new inferences may be a way to draw better conclusions. Applying the same methodology to other systems would make it possible to validate the approach in other contexts.
NASA Technical Reports Server (NTRS)
1986-01-01
Activities that will be conducted in support of the development and verification of the Block 2 Solid Rocket Motor (SRM) are described. Development includes design, fabrication, processing, and testing activities in which the results are fed back into the project. Verification includes analytical and test activities which demonstrate SRM component/subassembly/assembly capability to perform its intended function. The management organization responsible for formulating and implementing the verification program is introduced. It also identifies the controls which will monitor and track the verification program. Integral with the design and certification of the SRM are other pieces of equipment used in transportation, handling, and testing which influence the reliability and maintainability of the SRM configuration. The certification of this equipment is also discussed.
An instrument thermal data base system. [for future shuttle missions
NASA Technical Reports Server (NTRS)
Bartoszek, J. T.; Csigi, K. I.; Ollendorf, S.; Oberright, J. E.
1981-01-01
The rationale for the implementation of an Instrument Thermal Data Base System (ITDBS) is discussed, and the potential application of a data base management system in support of future space missions, the design of the scientific instruments needed, and the potential payload groupings are described. Two basic data files are suggested: the first containing a detailed narrative information list pertaining to the design configurations and optimum performance of each instrument, and the second consisting of a description of the parameters pertinent to the instruments' thermal control and design in the form of a summary record of coded information, serving as a recall record. The applicability of a data request sheet for preliminary planning is described, and it is concluded that the proposed system may additionally prove to be a method of inventory control.
Using object-oriented analysis to design a multi-mission ground data system
NASA Technical Reports Server (NTRS)
Shames, Peter
1995-01-01
This paper describes an analytical approach and descriptive methodology that is adapted from Object-Oriented Analysis (OOA) techniques. The technique is described and then used to communicate key issues of system logical architecture. The essence of the approach is to limit the analysis to only service objects, with the idea of providing a direct mapping from the design to a client-server implementation. Key perspectives on the system, such as user interaction, data flow and management, service interfaces, hardware configuration, and system and data integrity are covered. A significant advantage of this service-oriented approach is that it permits mapping all of these different perspectives on the system onto a single common substrate. This services substrate is readily represented diagramatically, thus making details of the overall design much more accessible.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
SCUT: clinical data organization for physicians using pen computers.
Wormuth, D. W.
1992-01-01
The role of computers in assisting physicians with patient care is rapidly advancing. One of the significant obstacles to efficient use of computers in patient care has been the unavailability of reasonably configured portable computers. Lightweight portable computers are becoming more attractive as physician data-management devices, but still pose a significant problem for bedside use. The advent of computers designed to accept input from a pen, with no keyboard, presents a usable computer platform that enables physicians to perform clinical computing at the bedside. This paper describes a prototype system to maintain an electronic "scut" sheet. SCUT makes use of pen input and background rule checking to enhance patient care. GO Corporation's PenPoint Operating System is used to implement the SCUT project. PMID:1483012
Model Based Verification of Cyber Range Event Environments
2015-11-13
Implementing RDA Data Citation Recommendations: Case Study in South Africa
NASA Astrophysics Data System (ADS)
Hugo, Wim
2016-04-01
SAEON operates a shared research data infrastructure for its own data sets and for clients and end users in the Earth and Environmental Sciences domain. SAEON has a license to issue Digital Object Identifiers via DataCite on behalf of third parties, and has recently concluded development work to make a universal data deposit, description, and DOI minting facility available. This facility will be used to develop a number of end user gateways, including DataCite South Africa (in collaboration with the National Research Foundation and addressing all grant-funded research in the country), DIRISA (Data-intensive Research Infrastructure for South Africa, in collaboration with the Meraka Institute and the Department of Science and Technology), and SASDI (South African Spatial Data Infrastructure). The RDA recently published Data Citation Recommendations [1], and these were used as a basis for the specification of the Digital Object Identifier implementation, raising two significant challenges: 1. Synchronisation of frequently harvested meta-data sets where version management practice did not align with the RDA recommendations, and 2. Handling sub-sets of and queries on large, continuously updated data sets. In the first case, we have developed a set of tests that determine the logical course of action when discrepancies are found during synchronisation, and we have incorporated these into meta-data harvester configurations. Additionally, we have developed a state diagram and attendant workflow for meta-data that includes problem states emanating from DOI management, reporting services for data depositors, and feedback to end users in respect of synchronisation issues. In the second case, in the absence of firm guidelines from DataCite, we are seeking community consensus and feedback on an approach that caches all queries performed on and subsets derived from data, and provides these with anchor-style extensions linked to the dataset's original DOI. This allows extended DOIs to resolve to a meta-data page on which the cached data set is available as an anchored download link. All cached datasets are provided with checksum values to verify the contents against such copies as may exist. The paper reviews recent service-driven portal interface developments, both services and graphical user interfaces, including wizard-style, configurable applications for meta-data management and DOI minting, discovery, download, visualization, and reporting. It showcases examples of the two permanent identifier problem areas and how these were addressed. The paper concludes with contributions to open research questions, including (1) determining optimal meta-data granularity and (2) proposing an implementation guideline for extended DOIs. [1] A. Rauber, D. van Uytvanck, A. Asmi, S. Pröll, "Data Citation Recommendations", November 2015, RDA. https://rd-alliance.org/group/data-citation-wg/outcomes/data-citation-recommendation.htm
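To make the subset-citation approach concrete, the following sketch caches a query result, records a checksum, and mints an anchor-style extension of the parent dataset's DOI. The identifier layout, field names, and the example DOI are assumptions for illustration; they are not DataCite or SAEON conventions.

```python
import hashlib, json, time

def cache_subset(parent_doi, query, result_bytes, cache):
    """Cache a query result, checksum it, and return an anchor-style
    extension of the parent DOI (layout is an illustrative assumption)."""
    digest = hashlib.sha256(result_bytes).hexdigest()
    anchor = f"{parent_doi}#subset-{digest[:12]}"   # extended, anchor-style identifier
    cache[anchor] = {
        "parent_doi": parent_doi,
        "query": query,
        "retrieved_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sha256": digest,        # lets end users verify their local copies later
        "payload": result_bytes,
    }
    return anchor

cache = {}
# "10.0000/EXAMPLE.123" is a hypothetical DOI used only for this sketch.
anchor = cache_subset("10.0000/EXAMPLE.123",
                      "station=Jonkershoek&year=2015",
                      b"date,temp\n2015-01-01,21.4\n", cache)
print(anchor)
print(json.dumps({k: v for k, v in cache[anchor].items() if k != "payload"}, indent=2))
```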
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of data triggering event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB (Subnet Management Database).
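The routing idea, matching message attributes against per-user subscription rules and then choosing each subscriber's delivery channel, can be sketched as follows. The attribute names, channel labels, and matching rule are illustrative assumptions rather than the EDM-RBS schema.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    user: str
    channel: str                 # e.g. "JMS", "SMTP", "SMS" (assumed labels)
    criteria: dict               # message attribute -> required value

def matches(message: dict, criteria: dict) -> bool:
    # A subscription matches when every declared attribute agrees.
    return all(message.get(k) == v for k, v in criteria.items())

def route(message: dict, subscriptions):
    """Return (user, channel) pairs that should receive this message."""
    return [(s.user, s.channel) for s in subscriptions if matches(message, s.criteria)]

subs = [
    Subscription("ops-lead", "SMTP", {"mission": "MRO", "classification": "alarm"}),
    Subscription("console-app", "JMS", {"application_source": "scheduler"}),
]
msg = {"mission": "MRO", "classification": "alarm", "application_source": "telemetry"}
print(route(msg, subs))   # -> [('ops-lead', 'SMTP')]
```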
Reinventing The Design Process: Teams and Models
NASA Technical Reports Server (NTRS)
Wall, Stephen D.
1999-01-01
The future of space mission designing will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.
Postirradiation Testing Laboratory (327 Building)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kammenzind, D.E.
A Standards/Requirements Identification Document (S/RID) is the total list of the Environment, Safety and Health (ES and H) requirements to be implemented by a site, facility, or activity. These requirements are appropriate to the life cycle phase to achieve an adequate level of protection for worker and public health and safety, and the environment during design, construction, operation, decontamination and decommissioning, and environmental restoration. S/RIDs are living documents, to be revised appropriately based on a change in the site's or facility's mission or configuration, a change in the facility's life cycle phase, or a change to the applicable standards/requirements. S/RIDs encompass health and safety, environmental, and safety-related safeguards and security (S and S) standards/requirements related to the functional areas listed in the US Department of Energy (DOE) Environment, Safety and Health Configuration Guide. The Fluor Daniel Hanford (FDH) Contract S/RID contains standards/requirements, applicable to FDH and FDH subcontractors, necessary for safe operation of Project Hanford Management Contract (PHMC) facilities, that are not the direct responsibility of the facility manager (e.g., a site-wide fire department). Facility S/RIDs contain standards/requirements applicable to a specific facility that are the direct responsibility of the facility manager. S/RIDs are prepared by those responsible for managing the operation of facilities or the conduct of activities that present a potential threat to the health and safety of workers, the public, or the environment, including Hazard Category 1 and 2 nuclear facilities and activities, as defined in DOE 5480.23, and selected Hazard Category 3 nuclear and Low Hazard non-nuclear facilities and activities, as agreed upon by RL. The Postirradiation Testing Laboratory (PTL) S/RID contains standards/requirements that are necessary for safe operation of the PTL facility, and other buildings/areas that are the direct responsibility of the specific facility manager. The specific DOE Orders, regulations, industry codes/standards, guidance documents, and good industry practices that serve as the basis for each element/subelement are identified and aligned with each subelement.
NASA Technical Reports Server (NTRS)
Hoh, R. H.; Klein, R. H.; Johnson, W. A.
1977-01-01
A system analysis method for the development of an integrated configuration management/flight director system for IFR STOL approaches is presented. Curved descending decelerating approach trajectories are considered. Considerable emphasis is placed on satisfying the pilot centered requirements (acceptable workload) as well as the usual guidance and control requirements (acceptable performance). The Augmentor Wing Jet STOL Research Aircraft was utilized to allow illustration by example, and to validate the analysis procedure via manned simulation.
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; Briscoe, Jeri M.
2005-01-01
Integrated System Health Management (ISHM) architectures for spacecraft will include hard real-time, critical subsystems and soft real-time monitoring subsystems. Interaction between these subsystems will be necessary, and an architecture supporting multiple criticality levels will be required. Demonstration hardware for the Integrated Safety-Critical Advanced Avionics Communication & Control (ISAACC) system has been developed at NASA Marshall Space Flight Center. It is a modular system using a commercially available time-triggered protocol, TTP/C, that supports hard real-time distributed control systems independent of the data transmission medium. The protocol is implemented in hardware and provides guaranteed low-latency messaging with inherent fault-tolerance and fault-containment. Interoperability between modules and systems of modules using TTP/C is guaranteed through definition of messages and the precise message schedule implemented by the master-less Time Division Multiple Access (TDMA) communications protocol. "Plug-and-play" capability for sensors and actuators provides automatically configurable modules supporting sensor recalibration and control algorithm re-tuning without software modification. Modular components of controlled physical system(s) critical to control algorithm tuning, such as pumps or valve components in an engine, can be replaced or upgraded as "plug and play" components without modification to the ISAACC module hardware or software. ISAACC modules can communicate with other vehicle subsystems through time-triggered protocols or other communications protocols implemented over Ethernet, MIL-STD-1553, and RS-485/422. Other communication bus physical layers and protocols can be included as required. In this way, the ISAACC modules can be part of a system-of-systems in a vehicle with multi-tier subsystems of varying criticality. The goal of the ISAACC architecture development is control and monitoring of safety-critical systems of a manned spacecraft. These systems include spacecraft navigation and attitude control, propulsion, automated docking, vehicle health management, and life support. ISAACC can integrate local critical subsystem health management with subsystems performing long-term health monitoring. The ISAACC system and its relationship to ISHM will be presented.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact through replacement of multiple different products with single adaptable ones. Because of the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
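As a rough illustration of the two-level idea, an outer search over configuration candidates and an inner search over parameter values scored by a sensitivity-penalized objective, the following Python sketch uses invented performance functions and a crude grid search; it is not the paper's model.

```python
import itertools, random

def robustness(perf, x, noise=0.05, trials=50):
    """Penalize sensitivity: mean performance minus spread under random
    perturbations of the parameters (illustrative robustness metric)."""
    samples = [perf([xi * (1 + random.uniform(-noise, noise)) for xi in x])
               for _ in range(trials)]
    return sum(samples) / trials - (max(samples) - min(samples))

def inner_optimize(perf, bounds, steps=10):
    # Inner level: crude grid search over parameter values for one configuration.
    grids = [[lo + i * (hi - lo) / (steps - 1) for i in range(steps)] for lo, hi in bounds]
    return max(itertools.product(*grids), key=lambda x: robustness(perf, list(x)))

# Outer level: enumerate configuration candidates, each with its own
# performance model and parameter bounds (both invented here).
configs = {
    "config-A": (lambda x: 10 - (x[0] - 2) ** 2, [(0.0, 4.0)]),
    "config-B": (lambda x: 8 - abs(x[0] - 1) - 0.5 * x[1], [(0.0, 2.0), (0.0, 1.0)]),
}
best = max(configs.items(),
           key=lambda kv: robustness(kv[1][0], list(inner_optimize(*kv[1]))))
print("most robust configuration:", best[0])
```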
Birken, Sarah A; Lee, Shoou-Yih Daniel; Weiner, Bryan J; Chin, Marshall H; Schaefer, Cynthia T
2013-02-01
The rate of successful health care innovation implementation is dismal. Middle managers have a potentially important yet poorly understood role in health care innovation implementation. This study used self-administered surveys and interviews of middle managers in health centers that implemented an innovation to reduce health disparities to address the questions: Does middle managers' commitment to health care innovation implementation influence implementation effectiveness? If so, in what ways does their commitment influence implementation effectiveness? Although quantitative survey data analysis results suggest a weak relationship, qualitative interview data analysis results indicate that middle managers' commitment influences implementation effectiveness when middle managers are proactive. Scholars should account for middle managers' influence in implementation research, and health care executives may promote implementation effectiveness by hiring proactive middle managers and creating climates in which proactivity is rewarded, supported, and expected.
A systems engineering perspective on the human-centered design of health information systems.
Samaras, George M; Horst, Richard L
2005-02-01
The discipline of systems engineering, over the past five decades, has used a structured systematic approach to managing the "cradle to grave" development of products and processes. While elements of this approach are typically used to guide the development of information systems that instantiate a significant user interface, it appears to be rare for the entire process to be implemented. In fact, a number of authors have put forth development lifecycle models that are subsets of the classical systems engineering method, but fail to include steps such as incremental hazard analysis and post-deployment corrective and preventative actions. In that most health information systems have safety implications, we argue that the design and development of such systems would benefit by implementing this systems engineering approach in full. Particularly with regard to bringing a human-centered perspective to the formulation of system requirements and the configuration of effective user interfaces, this classical systems engineering method provides an excellent framework for incorporating human factors (ergonomics) knowledge and integrating ergonomists in the interdisciplinary development of health information systems.
PV based converter with integrated charger for DC micro-grid applications
NASA Astrophysics Data System (ADS)
Salve, Rima
This thesis presents a converter topology for photovoltaic panels. This topology minimizes the number of switching devices used, thereby reducing power losses that arise from high-frequency switching operations. The control strategy is implemented using a simple microcontroller that implements proportional plus integral control. All the control loops are closed feedback loops, hence minimizing error instantaneously and adjusting efficiently to system variations. The energy management among three components, namely the photovoltaic panel, a battery, and a DC link for a microgrid, is shown distributed over three modes. These modes are dependent on the irradiance from the sunlight. All three modes are simulated. The maximum power point tracking of the system plays a crucial role in this configuration, as it is one of the main challenges tackled by the control system. Various methods of MPPT are discussed; the Perturb and Observe method is employed and is described in detail. Experimental results are shown for the maximum power point tracking of this system with a scaled-down version of the panel's actual capability.
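A minimal sketch of the Perturb and Observe loop mentioned above is given below: each update perturbs the voltage reference and keeps the perturbation direction while the measured panel power increases, reversing it otherwise. The step size, initial reference, and signal names are illustrative assumptions, not the thesis implementation.

```python
class PerturbAndObserve:
    """Minimal Perturb & Observe MPPT sketch: perturb the operating-voltage
    reference and keep the direction while output power rises."""

    def __init__(self, v_ref=30.0, step=0.5):
        self.v_ref = v_ref
        self.step = step
        self.prev_p = 0.0
        self.prev_v = 0.0

    def update(self, v_meas, i_meas):
        p = v_meas * i_meas
        dp, dv = p - self.prev_p, v_meas - self.prev_v
        if dp == 0:
            pass                          # at (or oscillating around) the MPP
        elif (dp > 0) == (dv > 0):
            self.v_ref += self.step       # last perturbation helped: keep going
        else:
            self.v_ref -= self.step       # power fell: reverse the perturbation
        self.prev_p, self.prev_v = p, v_meas
        return self.v_ref                 # new reference for the converter's PI loop

mppt = PerturbAndObserve()
print(mppt.update(30.0, 5.2), mppt.update(30.5, 5.3))
```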
NASA Astrophysics Data System (ADS)
Martin, E. H.; Caughman, J. B. O.; Shannon, S. C.; Klepper, C. C.; Isler, R. C.
2013-10-01
A major challenge facing magnetic fusion devices and the success of ITER is the design and implementation of reliable ICRH systems. The primary issue facing ICRH is the parasitic near-field, which leads to increased heat flux, sputtering, and arcing of the antenna/Faraday screen. In order to aid the theoretical development of near-field physics and thus propel the design process, experimental measurements are highly desired. In this work we have developed a diagnostic based on passive emission spectroscopy capable of measuring time-periodic electric fields, utilizing a generalized dynamic Stark effect model and a novel spectral line profile fitting package. The diagnostic was implemented on a small-scale laboratory experiment designed to simulate the edge environment associated with an ICRF antenna/Faraday screen. The spatially and temporally resolved electric field associated with magnetized RF sheaths will be presented for two field configurations, magnetic field parallel to the electric field and magnetic field perpendicular to the electric field; both hydrogen and helium discharges were investigated. ORNL is managed by UT-Battelle, LLC, for the US DOE under Contract No. DE-AC05-00OR22725.
Information System through ANIS at CeSAM
NASA Astrophysics Data System (ADS)
Moreau, C.; Agneray, F.; Gimenez, S.
2015-09-01
ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated Information Systems. The architecture of ANIS is composed of a database server which contains the project data, a web user interface template which provides high-level services (search, extract, and display imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module, or a cone search interface), a framework composed of several packages, and a metadata database managed by a web administration entity. The process to implement a new ANIS instance at CeSAM is easy and fast: the scientific project has to submit data or secure data access, the CeSAM team installs the new instance (web interface template and the metadata database), and the project administrator can configure the instance with the web ANIS-administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant Information Systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT, ...).
Implementation and design of a teleoperation system based on a VMEBUS/68020 pipelined architecture
NASA Technical Reports Server (NTRS)
Lee, Thomas S.
1989-01-01
A pipelined control design and architecture for a force-feedback teleoperation system that is being implemented at the Jet Propulsion Laboratory, and which will be integrated with the autonomous portion of the testbed to achieve shared control, is described. At the local site, the operator sees real-time force/torque displays and moves two 6-degree-of-freedom (dof) force-reflecting hand-controllers as his hands feel the contact forces/torques generated at the remote site where the robots interact with the environment. He also uses a graphical user menu to monitor robot states and specify system options. The teleoperation software is written in the C language and runs on MC68020-based processor boards in the VME chassis, which utilizes a real-time operating system; the hardware is configured to realize a four-stage pipeline configuration. The environment is very flexible, such that the system can easily be configured as a stand-alone facility for performing independent research in human factors, force control, and time-delayed systems.
NASA Astrophysics Data System (ADS)
Danobeitia, J.; Oscar, G.; Bartolomé, R.; Sorribas, J.; Del Rio, J.; Cadena, J.; Toma, D. M.; Bghiel, I.; Martinez, E.; Bardaji, R.; Piera, J.; Favali, P.; Beranzoli, L.; Rolin, J. F.; Moreau, B.; Andriani, P.; Lykousis, V.; Hernandez Brito, J.; Ruhl, H.; Gillooly, M.; Terrinha, P.; Radulescu, V.; O'Neill, N.; Best, M.; Marinaro, G.
2016-12-01
European Multidisciplinary Seafloor and water column Observatory DEVelopment (EMSODEV) is a Horizon 2020 EU project whose overall objective is the operationalization of eleven marine observatories and four test sites distributed throughout Europe, from the Arctic to the Atlantic and from the Mediterranean to the Black Sea. The whole infrastructure is managed by the European consortium EMSO ERIC (European Research Infrastructure Consortium) with the participation of 8 European countries and other partner countries. We are now implementing the EMSO Generic Instrument Module (EGIM) within the EMSO ERIC distributed marine research infrastructure. Our involvement is mainly in developing standard-compliant generic software for Sensor Web Enablement (SWE) on the EGIM device. The main goal of this development is to support sensor data acquisition on a new interoperable EGIM system. The EGIM software structure is made up of one acquisition layer located between the data recorded at the EGIM module and the data management services. Therefore, two main interfaces are implemented: the first assures the EGIM hardware acquisition, and the second allows pushing and pulling data from the data management layer (Sensor Web Enablement standard compliant). All software components used are open-source licensed and have been configured to manage different roles in the whole system (52°North SOS server, Zabbix monitoring system). The data acquisition module has been implemented with the aim of joining all components for EGIM data acquisition and serving them through an SOS standards-compliant interface. The system is complete and awaiting the first laboratory bench test and a shallow-water test connection to the OBSEA node, offshore Vilanova i la Geltrú (Barcelona, Spain). The EGIM module will record a wide range of ocean parameters in a long-term, consistent, accurate, and comparable manner across disciplines such as biology, geology, chemistry, physics, engineering, and computer science, from polar to subtropical environments, through the water column down to the deep sea. The measurements recorded along the EMSO nodes are critical to respond accurately to social and scientific challenges such as climate change, changes in marine ecosystems, and marine hazards.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
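To make the interval representation concrete, the sketch below models uncertain supplier costs as interval numbers, sums them for a candidate configuration-plus-supplier selection, and compares alternatives by interval midpoint and width, the kind of criteria an NSGA-II style search could use for dominance sorting. All numbers, module names, and the comparison rule are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    @property
    def mid(self):
        return 0.5 * (self.lo + self.hi)
    @property
    def width(self):
        return self.hi - self.lo

# Candidate: a product configuration (chosen modules) plus one supplier per
# module. Costs are interval numbers capturing uncertainty (values invented).
suppliers = {
    "motor": {"S1": Interval(95, 110), "S2": Interval(90, 125)},
    "frame": {"S3": Interval(40, 45), "S4": Interval(35, 55)},
}

def total_cost(selection):
    cost = Interval(0.0, 0.0)
    for module, supplier in selection.items():
        cost = cost + suppliers[module][supplier]
    return cost

# Comparing interval objectives by midpoint (expected cost) and width (risk).
for sel in ({"motor": "S1", "frame": "S3"}, {"motor": "S2", "frame": "S4"}):
    c = total_cost(sel)
    print(sel, "cost:", (c.lo, c.hi), "mid:", c.mid, "width:", c.width)
```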
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
Programming in a proposed 9X distributed Ada
NASA Technical Reports Server (NTRS)
Waldrop, Raymond S.; Volz, Richard A.; Goldsack, Stephen J.; Holzbach-Valero, A. A.
1991-01-01
The studies of the proposed Ada 9X constructs for distribution, now referred to as AdaPT, are reported. The goals for this time period were to revise the chosen example scenario and to begin studying how the proposed constructs might be implemented. The example scenario chosen is the Submarine Combat Information Center (CIC) developed by IBM for the Navy. The specification provided by IBM was preliminary and had several deficiencies. To address these problems, some changes to the scenario specification were made. Some of the more important changes include: (1) addition of a system database management function; (2) addition of a fourth processing unit to the standard resources; (3) addition of an operator console interface function; and (4) removal of the time synchronization function. To implement the CIC scenario in AdaPT, the decided strategy was to use publics, partitions, and nodes. The principal purpose for implementing the CIC scenario was to demonstrate how the AdaPT constructs interact with the program structure. While considering ways that the AdaPT constructs might be translated to Ada 83, it was observed that the partition construct could reasonably be modeled as an abstract data type. Although this gives a useful method of modeling partitions, it does not at all address the configuration aspects of the node construct.
Clinical genomics in the world of the electronic health record.
Marsolo, Keith; Spooner, S Andrew
2013-10-01
The widespread adoption of electronic health records presents a number of benefits to the field of clinical genomics. They include the ability to return results to the practitioner, to use genetic findings in clinical decision support, and to have data collected in the electronic health record that serve as a source of phenotypic information for analysis purposes. Not all electronic health records are created equal, however. They differ in their features, capabilities, and ease of use. Therefore, to understand the potential of the electronic health record, it is first necessary to understand its capabilities and the impact that implementation strategy has on usability. Specifically, we focus on the following areas: (i) how the electronic health record is used to capture data in clinical practice settings; (ii) how the implementation and configuration of the electronic health record affect the quality and availability of data; (iii) the management of clinical genetic test results and the feasibility of electronic health record integration; and (iv) the challenges of implementing an electronic health record in a research-intensive environment. This is followed by a discussion of the minimum functional requirements that an electronic health record must meet to enable the satisfactory integration of genomic results as well as the open issues that remain.
ERIC Educational Resources Information Center
Donovan, Loretta; Green, Tim; Hartley, Kendall
2010-01-01
This study explores configurations of laptop use in a one-to-one environment. Guided by methodologies of the Concerns-Based Adoption Model of change, an Innovation Configuration Map (description of the multiple ways an innovation is implemented) of a 1:1 laptop program at a middle school was developed and analyzed. Three distinct configurations…
Mukumbang, Ferdinand C; Van Belle, Sara; Marchal, Bruno; Van Wyk, Brian
2016-04-04
Suboptimal retention in care and poor treatment adherence are key challenges to antiretroviral therapy (ART) in sub-Saharan Africa. Community-based approaches to HIV service delivery are recommended to improve patient retention in care and ART adherence. The adherence clubs in the Western Cape province of South Africa were implemented with variable success in terms of implementation and outcomes, and the need for operational guidelines for their implementation has been identified. Therefore, understanding the contexts and mechanisms for successful implementation of the adherence clubs is crucial to inform the roll-out to the rest of South Africa. The protocol outlines an evaluation of the adherence club intervention in selected primary healthcare facilities in the metropolitan area of the Western Cape Province, using the realist approach. In the first phase, an exploratory study design will be used. Document review and key informant interviews will be used to elicit the programme theory. In phase two, a multiple case study design will be used to describe the adherence clubs in five contrastive sites. Semistructured interviews will be conducted with purposively selected programme implementers and members of the clubs to assess the context and mechanisms of the adherence clubs. For the programme's primary outcomes, a longitudinal retrospective cohort analysis will be conducted using routine patient data. Data analysis will involve classifying emerging themes using the context-mechanism-outcome (CMO) configuration, and refining the primary CMO configurations to conjectured CMO configurations. Finally, we will compare the conjectured CMO configurations from the cases with the initial programme theory. The final CMOs obtained will be translated into middle-range theories. The study will be conducted according to the principles of the Declaration of Helsinki (1964). Ethics clearance was obtained from the University of the Western Cape. Dissemination will be done through publications and curation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.
Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang
2014-01-01
In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules to ensure the reliability and real-time performance of the arena system. Easily configurable interfaces on a desktop computer, implemented by Python scripts, are provided to transmit the visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool to investigate the quantitative relationship between visual inputs and induced flight behaviors and will also be helpful to visual-motor research in other related fields.
Design of a flight director/configuration management system for piloted STOL approaches
NASA Technical Reports Server (NTRS)
Hoh, R. H.; Klein, R. H.; Johnson, W. A.
1973-01-01
The design and characteristics of a flight director for V/STOL aircraft are discussed. A configuration management system for piloted STOL approaches is described. The individual components of the overall system designed to reduce pilot workload to an acceptable level during curved, decelerating, and descending STOL approaches are defined. The application of the system to augmentor wing aircraft is analyzed. System performance checks and piloted evaluations were conducted on a flight simulator and the results are summarized.
NASA Technical Reports Server (NTRS)
Mckay, Charles
1991-01-01
This is the Configuration Management Plan for the AdaNet Repository Based Software Engineering (RBSE) contract. This document establishes the requirements and activities needed to ensure that the products developed for the AdaNet RBSE contract are accurately identified, that proposed changes to the product are systematically evaluated and controlled, that the status of all change activity is known at all times, and that the product achieves its functional performance requirements and is accurately documented.
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.
1984-01-01
The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system have been completed to prototype form. The construction methods are described.
NASA Astrophysics Data System (ADS)
Hincks, A. D.; Shaw, J. R.; Chime Collaboration
2015-09-01
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is an ambitious new radio telescope project for measuring cosmic expansion and investigating dark energy. Keeping good records of both the physical configuration of its 1280 antennas and their analogue signal chains, as well as the ~100 TB of data produced daily by its correlator, will be essential to the success of CHIME. In these proceedings we describe the database-driven software we have developed to manage this complexity.
LIBS data analysis using a predictor-corrector based digital signal processor algorithm
NASA Astrophysics Data System (ADS)
Sanders, Alex; Griffin, Steven T.; Robinson, Aaron
2012-06-01
There are many accepted sensor technologies for generating spectra for material classification. Once the spectra are generated, communication bandwidth limitations favor local material classification, with its attendant reduction in data transfer rates and power consumption. Transferring sensor technologies such as Cavity Ring-Down Spectroscopy (CRDS) and Laser Induced Breakdown Spectroscopy (LIBS) requires effective material classifiers. A result of recent efforts has been an emphasis on Partial Least Squares - Discriminant Analysis (PLS-DA) and Principal Component Analysis (PCA). Implementation of these via general-purpose computers is difficult in small portable sensor configurations. This paper addresses the creation of a low-mass, low-power, robust hardware spectra classifier for a limited set of predetermined materials in an atmospheric matrix. Crucial to this is the incorporation of PCA or PLS-DA classifiers into a predictor-corrector style implementation. The system configuration guarantees rapid convergence. Software running on multi-core Digital Signal Processors (DSPs) simulates a stream-lined plasma physics model estimator, reducing Analog-to-Digital Converter (ADC) power requirements. This paper presents the results of a predictor-corrector model implemented on a low-power multi-core DSP to perform substance classification. This configuration emphasizes the hardware system and software design via a predictor-corrector model that simultaneously decreases the sample rate while performing the classification.
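The predictor-corrector classification idea can be sketched as follows: project an incoming spectrum onto a few principal components (the predictor) and then iteratively relax the projected scores toward the nearest stored class centroid (the corrector), which converges in a handful of steps. The training data, component count, and relaxation rule are invented for illustration and do not reproduce the paper's DSP implementation.

```python
import numpy as np

def fit_pca(spectra, n_components=5):
    # Mean-center the training spectra and keep the leading right singular
    # vectors as the principal-component basis.
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:n_components]

def classify(spectrum, mean, basis, centroids, iters=10, alpha=0.5):
    score = basis @ (spectrum - mean)        # predictor: PCA projection
    estimate = score.copy()
    for _ in range(iters):                   # corrector: relax toward the
        nearest = min(centroids,             # nearest class centroid
                      key=lambda c: np.linalg.norm(estimate - centroids[c]))
        estimate = (1 - alpha) * estimate + alpha * centroids[nearest]
    return nearest

# Toy usage with random "spectra" for two substances.
rng = np.random.default_rng(0)
train = rng.normal(size=(40, 128)); train[20:] += 1.0
mean, basis = fit_pca(train)
centroids = {"substance-A": basis @ (train[:20].mean(axis=0) - mean),
             "substance-B": basis @ (train[20:].mean(axis=0) - mean)}
print(classify(train[3], mean, basis, centroids))
```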
Varsi, Cecilie; Ekstedt, Mirjam; Gammon, Deede; Børøsund, Elin; Ruland, Cornelia M
2015-06-01
The role of nurse and physician managers is considered crucial for implementing eHealth interventions in clinical practice, but few studies have explored this. The aim of the current study was to examine the perceptions of nurse and physician managers regarding facilitators, barriers, management role, responsibility, and action taken in the implementation of an eHealth intervention called Choice into clinical practice. Individual qualitative interviews were conducted with six nurses and three physicians in management positions at five hospital units. The findings revealed that nurse managers reported conscientiously supporting the implementation, but workloads prevented them from participating in the process as closely as they wanted. Physician managers reported less contribution. The implementation process was influenced by facilitating factors such as perceptions of benefits from Choice and use of implementation strategies, along with barriers such as physician resistance, contextual factors and difficulties for front-line providers in learning a new way of communicating with the patients. The findings suggest that role descriptions for both nurse and physician managers should include implementation knowledge and implementation skills. Managers could benefit from an implementation toolkit. Implementation management should be included in management education for healthcare managers to prepare them for the constant need for implementation and improvement in clinical practice.
Granovsky, Alexander A
2015-12-21
We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.
Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays
NASA Technical Reports Server (NTRS)
Reynolds, R. G.; Markley, F. Landis
2001-01-01
Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum which the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical implementation strategies for specific wheel configurations are also considered.
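The construction described above, projecting the hypercube of individual wheel torque limits through the 3xn matrix of wheel axes and taking the boundary of the resulting set (a zonotope), can be sketched numerically as below. The four-wheel pyramid geometry and per-wheel torque limit are illustrative assumptions, and the brute-force vertex enumeration stands in for the paper's algorithms.

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

# 3x4 matrix of wheel spin axes for an assumed four-wheel pyramid layout.
beta = np.deg2rad(54.7)
W = np.array([[ np.cos(beta),  0.0, -np.cos(beta),  0.0],
              [ 0.0,  np.cos(beta),  0.0, -np.cos(beta)],
              [ np.sin(beta),  np.sin(beta),  np.sin(beta),  np.sin(beta)]])
tau_max = 0.2  # per-wheel torque limit, N*m (assumed)

# Vertices of the n-cube of individual wheel torques, projected into body axes;
# the convex hull of the projections is the maximal torque envelope.
vertices = np.array(list(itertools.product([-tau_max, tau_max], repeat=W.shape[1])))
projected = vertices @ W.T
envelope = ConvexHull(projected)

print("envelope volume:", envelope.volume)
print("number of facets:", len(envelope.simplices))
```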
System and method for merging clusters of wireless nodes in a wireless network
Budampati, Ramakrishna S [Maple Grove, MN; Gonia, Patrick S [Maplewood, MN; Kolavennu, Soumitri N [Blaine, MN; Mahasenan, Arun V [Kerala, IN
2012-05-29
A system includes a first cluster having multiple first wireless nodes. One first node is configured to act as a first cluster master, and other first nodes are configured to receive time synchronization information provided by the first cluster master. The system also includes a second cluster having one or more second wireless nodes. One second node is configured to act as a second cluster master, and any other second nodes configured to receive time synchronization information provided by the second cluster master. The system further includes a manager configured to merge the clusters into a combined cluster. One of the nodes is configured to act as a single cluster master for the combined cluster, and the other nodes are configured to receive time synchronization information provided by the single cluster master.
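A minimal sketch of the merge operation described in the abstract is given below: each cluster has one master that distributes time-synchronization information, and merging produces a combined cluster with a single master while the remaining nodes become receivers. The election rule (lowest node identifier wins) and the data layout are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    """One cluster: a master node plus member nodes that receive its
    time-synchronization information (illustrative layout)."""
    master: str
    members: set = field(default_factory=set)

    def broadcast_time_sync(self, reference_time: float) -> dict:
        # Master distributes the reference time to every other node.
        return {node: reference_time for node in self.members if node != self.master}

def merge(a: Cluster, b: Cluster) -> Cluster:
    combined = a.members | b.members
    new_master = min(combined)               # assumed election rule
    return Cluster(master=new_master, members=combined)

first = Cluster("n3", {"n3", "n7", "n9"})
second = Cluster("n5", {"n5", "n6"})
merged = merge(first, second)
print(merged.master, merged.broadcast_time_sync(1234.5))
```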
NASA Astrophysics Data System (ADS)
Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn
2016-12-01
To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals relative to Hartree-Fock (HF), such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.
Prediction of Protein Configurational Entropy (Popcoen).
Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel
2018-03-13
A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast compared to previous approaches because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect predictions. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.
Fracture Mechanics Analysis of Stitched Stiffener-Skin Debonding
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Raju, I. S.; Poe, C. C., Jr.
1998-01-01
An analysis based on plate finite elements and the virtual crack closure technique has been implemented to study the effect of stitching on mode I and mode II strain energy release rates for debond configurations. The stitches were modeled as discrete nonlinear fastener elements with a compliance determined by experiment. The axial and shear behavior of the stitches was considered; however, the two compliances and failure loads were assumed to be independent. Both a double cantilever beam (mode I) and a mixed-mode skin-stiffener debond configuration were studied. In the double cantilever beam configurations, G(sub I) began to decrease once the debond had grown beyond the first row of stitches and was reduced to zero for long debonds. In the mixed-mode skin-stiffener configurations, G(sub I) showed a similar behavior as in the double cantilever beam configurations; however, G(sub II) continued to increase with increasing debond length.
NASA Astrophysics Data System (ADS)
Fales, B. Scott; Shu, Yinan; Levine, Benjamin G.; Hohenstein, Edward G.
2017-09-01
A new complete active space configuration interaction (CASCI) method was recently introduced that uses state-averaged natural orbitals from the configuration interaction singles method (configuration interaction singles natural orbital CASCI, CISNO-CASCI). This method has been shown to perform as well or better than state-averaged complete active space self-consistent field for a variety of systems. However, further development and testing of this method have been limited by the lack of available analytic first derivatives of the CISNO-CASCI energy as well as the derivative coupling between electronic states. In the present work, we present a Lagrangian-based formulation of these derivatives as well as a highly efficient implementation of the resulting equations accelerated with graphical processing units. We demonstrate that the CISNO-CASCI method is practical for dynamical simulations of photochemical processes in molecular systems containing hundreds of atoms.
The economic, institutional, and political determinants of public health delivery system structures.
Ingram, Richard C; Scutchfield, F Douglas; Mays, Glen P; Bhandari, Michelyn W
2012-01-01
A typology of local public health systems was recently introduced, and a large degree of structural transformation over time was discovered in the systems analyzed. We present a qualitative exploration of the factors that determine variation and change in the seven structural configurations that comprise the local public health delivery system typology. We applied a 10-item semistructured telephone interview protocol to representatives from the local health agency in two randomly selected systems from each configuration: one that had maintained its configuration over time and one that had changed configuration over time. We assessed the interviews for patterns of variation between the configurations. Four key determinants of structural change emerged: availability of financial resources, interorganizational relationships, public health agency organization, and political relationships. Systems that had changed were more likely to have experienced strengthened partnerships between public health agencies and other community organizations and to enjoy support from policy makers, while stable systems were more likely to be characterized by strong partnerships between public health agencies and other governmental bodies and less supportive relationships with policy makers. This research provides information regarding the determinants of system change, and may help public health leaders to better prepare for the impacts of change in the areas discussed. It may also help those who are seeking to implement change to determine the contextual factors that need to be in place before change can happen, or how best to implement change in the face of contextual factors that are beyond their control.
ERIC Educational Resources Information Center
Buchholz, James L.
This document summarizes the selection, configuration, implementation, and evaluation of BiblioFile, a CD-ROM based bibliographic retrieval system used to catalog and process library materials for 103 school centers in the Palm Beach County Schools (Florida). Technical processing included the production of spine labels, check-out cards and…
Configuration Control of a Mobile Dextrous Robot: Real-Time Implementation and Experimentation
NASA Technical Reports Server (NTRS)
Lim, David; Seraji, Homayoun
1996-01-01
This paper describes the design and implementation of a real-time control system with multiple modes of operation for a mobile dexterous manipulator. The manipulator under study is a kinematically redundant seven degree-of-freedom arm from Robotics Research Corporation, mounted on a one degree-of-freedom motorized platform.
Quantum logic gates based on ballistic transport in graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragoman, Daniela; Academy of Romanian Scientists, Splaiul Independentei 54, 050094 Bucharest; Dragoman, Mircea, E-mail: mircea.dragoman@imt.ro
2016-03-07
The paper presents various configurations for the implementation of graphene-based Hadamard, C-phase, controlled-NOT, and Toffoli gates working at room temperature. These logic gates, essential for any quantum computing algorithm, involve ballistic graphene devices for qubit generation and processing and can be fabricated using existing nanolithographical techniques. All quantum gate configurations are based on the very large mean-free-paths of carriers in graphene at room temperature.
2005-12-01
The goal of the Configurable Fault Tolerant Processor (CFTP) Project is to explore, develop, and demonstrate the applicability of using commercial-off-the-shelf (COTS) Field Programmable Gate Arrays (FPGAs) in the design of ...
RTEMS SMP and MTAPI for Efficient Multi-Core Space Applications on LEON3/LEON4 Processors
NASA Astrophysics Data System (ADS)
Cederman, Daniel; Hellstrom, Daniel; Sherrill, Joel; Bloom, Gedare; Patte, Mathieu; Zulianello, Marco
2015-09-01
This paper presents the final result of a European Space Agency (ESA) activity aimed at improving the software support for LEON processors used in SMP configurations. One of the benefits of using a multicore system in an SMP configuration is that in many instances it is possible to better utilize the available processing resources by load balancing between cores. This, however, comes with the cost of having to synchronize operations between cores, leading to increased complexity. While in an AMP system one can use multiple instances of operating systems that are only uni-processor capable, an SMP system requires the operating system to be written to support multicore systems. In this activity we have improved and extended the SMP support of the RTEMS real-time operating system and ensured that it fully supports the multicore-capable LEON processors. The targeted hardware in the activity has been the GR712RC, a dual-core LEON3FT processor, and the functional prototype of ESA's Next Generation Multiprocessor (NGMP), a quad-core LEON4 processor. The final version of the NGMP is now available as a product under the name GR740. An implementation of the Multicore Task Management API (MTAPI) has been developed as part of this activity to aid in the parallelization of applications for RTEMS SMP. It allows for simplified development of parallel applications using the task-based programming model. An existing space application, the Gaia Video Processing Unit, has been ported to RTEMS SMP using the MTAPI implementation to demonstrate the feasibility and usefulness of multicore processors for space payload software. The activity is funded by ESA under contract 4000108560/13/NL/JK. Gedare Bloom is supported in part by NSF CNS-0934725.
A Wireless Magnetoresistive Sensing System for an Intraoral Tongue-Computer Interface
Park, Hangue; Kiani, Mehdi; Lee, Hyung-Min; Kim, Jeonghee; Block, Jacob; Gosselin, Benoit; Ghovanloo, Maysam
2015-01-01
Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, and wireless assistive technology (AT) that infers users’ intentions by detecting their voluntary tongue motion and translating them into user-defined commands. Here we present the new intraoral version of the TDS (iTDS), which has been implemented in the form of a dental retainer. The iTDS system-on-a-chip (SoC) features a configurable analog front-end (AFE) that reads the magnetic field variations inside the mouth from four 3-axial magnetoresistive sensors located at four corners of the iTDS printed circuit board (PCB). A dual-band transmitter (Tx) on the same chip operates at 27 and 432 MHz in the Industrial/Scientific/Medical (ISM) band to allow users to switch in the presence of external interference. The Tx streams the digitized samples to a custom-designed TDS universal interface, built from commercial off-the-shelf (COTS) components, which delivers the iTDS data to other devices such as smartphones, personal computers (PC), and powered wheelchairs (PWC). Another key block on the iTDS SoC is the power management integrated circuit (PMIC), which provides individually regulated and duty-cycled 1.8 V supplies for sensors, AFE, Tx, and digital control blocks. The PMIC also charges a 50 mAh Li-ion battery with constant current up to 4.2 V, and recovers data and clock to update its configuration register through a 13.56 MHz inductive link. The iTDS SoC has been implemented in a 0.5-μm standard CMOS process and consumes 3.7 mW on average. PMID:23853258
Reconfigurable Processing Module
NASA Technical Reports Server (NTRS)
Somervill, Kevin; Hodson, Robert; Jones, Robert; Williams, John
2005-01-01
To accommodate a wide spectrum of applications and technologies, NASA's Exploration Systems Mission Directorate has called for reconfigurable and modular technologies to support future missions to the Moon and Mars. In response, Langley Research Center is leading a program entitled Reconfigurable Scaleable Computing (RSC) that is centered on the development of FPGA-based computing resources in a stackable form factor. This paper details the architecture and implementation of the Reconfigurable Processing Module (RPM), which is the key element of the RSC system. The RPM is an FPGA-based, space-qualified printed circuit assembly leveraging terrestrial/commercial design standards into the space applications domain. The form factor is similar to, and backwards compatible with, the PCI-104 standard, utilizing only the PCI interface. The size is expanded to accommodate the required functionality while still more than 30% smaller than a 3U CompactPCI (trademark) card and without the overhead of the backplane. The architecture is built around two FPGA devices, one hosting PCI and memory interfaces and another hosting mission application resources, both of which are connected with a high-speed data bus. The PCI interface FPGA provides access via the PCI bus to onboard SDRAM, flash PROM, and the application resources, for both configuration management and runtime interaction. The reconfigurable FPGA, referred to as the Application FPGA, or simply "the application", is a radiation-tolerant Xilinx Virtex-4 FX60 hosting custom application-specific logic or soft microprocessor IP. The RPM implements various SEE mitigation techniques, including TMR, EDAC, and configuration scrubbing of the reconfigurable FPGA. Prototype hardware and formal modeling techniques are used to explore the performability trade space. These models provide a novel way to calculate quality-of-service performance measures while simultaneously considering fault-related behavior due to SEE soft errors.
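The TMR element of the SEE mitigation mentioned above reduces to bit-wise majority voting among three copies of a value, as the short sketch below shows. The RPM realizes this in FPGA fabric rather than software; the snippet only illustrates the voting principle.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    # Majority of each bit: a result bit is 1 when at least two copies agree on 1.
    return (a & b) | (a & c) | (b & c)

copies = [0b1011_0010, 0b1011_0010, 0b1001_0010]   # one copy hit by an upset
print(bin(tmr_vote(*copies)))                      # -> 0b10110010
```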
Aerodynamic Shape Optimization of Complex Aircraft Configurations via an Adjoint Formulation
NASA Technical Reports Server (NTRS)
Reuther, James; Jameson, Antony; Farmer, James; Martinelli, Luigi; Saunders, David
1996-01-01
This work describes the implementation of optimization techniques based on control theory for complex aircraft configurations. Here control theory is employed to derive the adjoint differential equations, the solution of which allows for a drastic reduction in computational costs over previous design methods (13, 12, 43, 38). In our earlier studies (19, 20, 22, 23, 39, 25, 40, 41, 42) it was shown that this method could be used to devise effective optimization procedures for airfoils, wings and wing-bodies subject to either analytic or arbitrary meshes. Design formulations for both potential flows and flows governed by the Euler equations have been demonstrated, showing that such methods can be devised for various governing equations (39, 25). In our most recent works (40, 42) the method was extended to treat wing-body configurations with a large number of mesh points, verifying that significant computational savings can be gained for practical design problems. In this paper the method is extended for the Euler equations to treat complete aircraft configurations via a new multiblock implementation. New elements include a multiblock-multigrid flow solver, a multiblock-multigrid adjoint solver, and a multiblock mesh perturbation scheme. Two design examples are presented in which the new method is used for the wing redesign of a transonic business jet.
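To see why the adjoint formulation makes the gradient cost nearly independent of the number of design variables, consider a generic discrete problem: minimize I = c^T w subject to a state equation A(x) w = b. One adjoint solve A^T psi = c yields every component of the gradient as dI/dx_k = -psi^T (dA/dx_k) w. The toy linear problem below (not the Euler equations or the multiblock solver of the paper) illustrates this with a finite-difference check; all matrices and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3                                        # state size, number of design variables
A0 = rng.normal(size=(n, n)) + n * np.eye(n)       # well-conditioned baseline operator
dA = [rng.normal(size=(n, n)) for _ in range(m)]   # dA/dx_k (constant here)
b = rng.normal(size=n)
c = rng.normal(size=n)

def A(x):
    return A0 + sum(xk * dAk for xk, dAk in zip(x, dA))

x = np.zeros(m)
w = np.linalg.solve(A(x), b)              # "flow" solve
psi = np.linalg.solve(A(x).T, c)          # single adjoint solve, independent of m
grad = np.array([-psi @ (dA[k] @ w) for k in range(m)])

# Finite-difference check of the adjoint gradient.
eps = 1e-6
fd = np.array([(c @ np.linalg.solve(A(x + eps * np.eye(m)[k]), b) - c @ w) / eps
               for k in range(m)])
print(np.allclose(grad, fd, atol=1e-4))
```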