Configuration and Data Management Process and the System Safety Professional
NASA Technical Reports Server (NTRS)
Shivers, Charles Herbert; Parker, Nelson C. (Technical Monitor)
2001-01-01
This article presents a discussion of the configuration management (CM) and data management (DM) functions and provides a perspective on the importance of configuration and data management processes to the success of system safety activities. The article addresses the basic requirements of configuration and data management, based generally on NASA configuration and data management policies and practices, although the concepts are likely to represent the processes of any public or private organization's well-designed configuration and data management program.
Configuration Management Plan for the Tank Farm Contractor
DOE Office of Scientific and Technical Information (OSTI.GOV)
WEIR, W.R.
The Configuration Management Plan for the Tank Farm Contractor describes the configuration management the contractor uses to manage and integrate its technical baseline with the programmatic and functional operations to perform work. The Configuration Management Plan for the Tank Farm Contractor supports the management of the project baseline by providing the mechanisms to identify, document, and control the technical characteristics of the products, processes, and structures, systems, and components (SSC). This plan is one of the tools used to identify and provide controls for the technical baseline of the Tank Farm Contractor (TFC). The configuration management plan is listed in the management process documents for TFC as depicted in Attachment 1, TFC Document Structure. The configuration management plan is an integrated approach for control of technical, schedule, cost, and administrative processes necessary to manage the mission of the TFC. Configuration management encompasses the five functional elements of: (1) configuration management administration, (2) configuration identification, (3) configuration status accounting, (4) change control, and (5) configuration management assessments.
A PBOM configuration and management method based on templates
NASA Astrophysics Data System (ADS)
Guo, Kai; Qiao, Lihong; Qie, Yifan
2018-03-01
The design of the Process Bill of Materials (PBOM) plays a pivotal role in product development. This paper analyses the requirements of PBOM configuration design and management for complex products, which include reuse of configuration procedures and the pressing need to manage large volumes of product-family PBOM data. Based on this analysis, a function framework for PBOM configuration and management is established. Configuration templates and modules are defined in the framework to support the customization and reuse of the configuration process. The configuration of a detection-sensor PBOM is presented as an illustrative case. Rapid and agile PBOM configuration and management can be achieved with the template-based method, which is of vital significance for improving development efficiency for complex products.
Configuration Management Policy
This Policy establishes an Agency-wide Configuration Management Program and provides responsibilities, compliance requirements, and overall principles for Configuration and Change Management processes to support information technology management.
Attaining and maintaining data integrity with configuration management
NASA Astrophysics Data System (ADS)
Huffman, Dorothy J.; Jeane, Shirley A.
1993-08-01
Managers and scientists are concerned about data integrity because they draw conclusions from data that can have far-reaching effects. Project managers use Configuration Management to ensure that hardware, software, and project information are controlled. They have not, as yet, applied it rigorously to data. However, there is ample opportunity in the data collection and production process to jeopardize data integrity. Environmental changes, tampering, and production problems can all affect data integrity. The Configuration Management process includes four functions: configuration identification, control, auditing, and status accounting. These functions give management the means to attain data integrity and the visibility into engineering processes needed to maintain it. When project managers apply Configuration Management processes to data, the data user can trace back through history to validate data integrity. The user knows that the project allowed only orderly changes to the data, is assured that project personnel followed procedures to maintain data quality, and has access to status information about the data. The user receives data products with a known integrity level and a means to assess the impact of past events on the conclusions derived from the data. To obtain these benefits, project managers should apply the Configuration Management discipline to data.
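The four CM functions named above translate naturally into lightweight mechanics for data products. The following Python sketch is illustrative only (not from the paper): it applies configuration identification and status accounting to a data product by recording a checksum and a timestamped change history, and uses an audit step to confirm that the stored data still matches the last recorded baseline.

```python
# Hedged sketch: applying configuration identification and status accounting to a
# data product via a checksum and an auditable change history. Names are invented.
import hashlib
from datetime import datetime, timezone

class ControlledDataProduct:
    def __init__(self, name: str, payload: bytes):
        self.name = name
        self.payload = payload
        self.history = []          # status accounting: what changed, when, and why
        self._record("initial baseline")

    def _checksum(self) -> str:
        return hashlib.sha256(self.payload).hexdigest()

    def _record(self, reason: str) -> None:
        self.history.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "checksum": self._checksum(),
            "reason": reason,
        })

    def change(self, new_payload: bytes, reason: str) -> None:
        """Only orderly, documented changes are allowed to the controlled data."""
        self.payload = new_payload
        self._record(reason)

    def audit(self) -> bool:
        """Configuration audit: stored payload must match the last recorded checksum."""
        return self._checksum() == self.history[-1]["checksum"]

if __name__ == "__main__":
    product = ControlledDataProduct("ocean_temps_v1", b"1.2,3.4,5.6")
    product.change(b"1.2,3.4,5.7", reason="calibration correction, CR-042")
    print("audit passed:", product.audit(), "| revisions:", len(product.history))
```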
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This standard presents program criteria and implementation guidance for an operational configuration management program for DOE nuclear and non-nuclear facilities in the operational phase. Portions of this standard are also useful for other DOE processes, activities, and programs. This Part 1 contains foreword, glossary, acronyms, bibliography, and Chapter 1 on operational configuration management program principles. Appendices are included on configuration management program interfaces, and background material and concepts for operational configuration management.
Configuration management issues and objectives for a real-time research flight test support facility
NASA Technical Reports Server (NTRS)
Yergensen, Stephen; Rhea, Donald C.
1988-01-01
An account is given of configuration management activities for the Western Aeronautical Test Range (WATR) at NASA-Ames, whose primary function is the conduct of aeronautical research flight testing through real-time processing and display, tracking, and communications systems. The processing of WATR configuration change requests for specific research flight test projects must be conducted in such a way as to refrain from compromising the reliability of WATR support to all project users. Configuration management's scope ranges from mission planning to operations monitoring and performance trend analysis.
Configuration Management Plan for K Basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, W.R.; Laney, T.
This plan describes a configuration management program for K Basins that establishes the systems, processes, and responsibilities necessary for implementation. The K Basins configuration management plan provides the methodology to establish, upgrade, reconstitute, and maintain the technical consistency among the requirements, physical configuration, and documentation. The technical consistency afforded by this plan ensures accurate technical information necessary to achieve the mission objectives that provide for the safe, economic, and environmentally sound management of K Basins and the stored material. The configuration management program architecture presented in this plan is based on the functional model established in the DOE Standard, DOE-STD-1073-93, "Guide for Operational Configuration Management Program".
Operational concepts and implementation strategies for the design configuration management process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trauth, Sharon Lee
2007-05-01
This report describes operational concepts and implementation strategies for the Design Configuration Management Process (DCMP). It presents a process-based systems engineering model for the successful configuration management of the products generated during the operation of the design organization as a business entity. The DCMP model focuses on Pro/E and associated activities and information. It can serve as the framework for interconnecting all essential aspects of the product design business. A design operation scenario offers a sense of how to do business at a time when DCMP is second nature within the design organization.
Virtual Network Configuration Management System for Data Center Operations and Management
NASA Astrophysics Data System (ADS)
Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken
Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system that automates the integration of virtual-network configurations is provided. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates them according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing by about 40 percent the time needed to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database. They also show that the proposed system has excellent scalability: it takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.
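As a rough illustration of the integration step described above, the sketch below merges configuration fragments collected from switches and hypervisors into a single per-VLAN view. The record fields and host names are hypothetical; the paper's actual XML-based information model is not reproduced here.

```python
# Hypothetical sketch: merging virtual-network configuration fragments collected
# from server virtualization platforms and VLAN-capable switches into one model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VirtualNetwork:
    vlan_id: int
    switch_ports: List[str] = field(default_factory=list)      # from switch configs
    virtual_machines: List[str] = field(default_factory=list)  # from hypervisor configs

def integrate(switch_configs: List[dict], hypervisor_configs: List[dict]) -> Dict[int, VirtualNetwork]:
    """Group collected fragments by VLAN ID to form a unified virtual-network view."""
    networks: Dict[int, VirtualNetwork] = {}
    for cfg in switch_configs:
        net = networks.setdefault(cfg["vlan_id"], VirtualNetwork(cfg["vlan_id"]))
        net.switch_ports.append(cfg["port"])
    for cfg in hypervisor_configs:
        net = networks.setdefault(cfg["vlan_id"], VirtualNetwork(cfg["vlan_id"]))
        net.virtual_machines.append(cfg["vm_name"])
    return networks

if __name__ == "__main__":
    switches = [{"vlan_id": 10, "port": "sw1/eth3"}, {"vlan_id": 10, "port": "sw2/eth7"}]
    hypervisors = [{"vlan_id": 10, "vm_name": "web-01"}, {"vlan_id": 10, "vm_name": "db-01"}]
    for vlan, net in integrate(switches, hypervisors).items():
        print(vlan, net.switch_ports, net.virtual_machines)
```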
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Williams, Daniel M.
2008-01-01
Significant air traffic increases are anticipated for the future of the National Airspace System (NAS). To cope with future traffic increases, fundamental changes are required in many aspects of the air traffic management process including the planning and use of NAS resources. Two critical elements of this process are the selection of airport runway configurations, and the effective management of active runways. Two specific research areas in NASA's Airspace Systems Program (ASP) have been identified to address efficient runway management: Runway Configuration Management (RCM) and Arrival/Departure Runway Balancing (ADRB). This report documents efforts in assessing past as well as current work in these two areas.
Software control and system configuration management - A process that works
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1983-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to ensure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Software Configuration Management Guidebook
NASA Technical Reports Server (NTRS)
1995-01-01
The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.
Configuration management issues and objectives for a real-time research flight test support facility
NASA Technical Reports Server (NTRS)
Yergensen, Stephen; Rhea, Donald C.
1988-01-01
Presented are some of the critical issues and objectives pertaining to configuration management for the NASA Western Aeronautical Test Range (WATR) of Ames Research Center. The primary mission of the WATR is to provide a capability for the conduct of aeronautical research flight test through real-time processing and display, tracking, and communications systems. In providing this capability, the WATR must maintain and enforce a configuration management plan which is independent of, but complementary to, various research flight test project configuration management systems. A primary WATR objective is the continued development of generic research flight test project support capability, wherein the reliability of WATR support provided to all project users is a constant priority. Therefore, the processing of configuration change requests for specific research flight test project requirements must be evaluated within a perspective that maintains this primary objective.
An Approach for Implementation of Project Management Information Systems
NASA Astrophysics Data System (ADS)
Běrziša, Solvita; Grabis, Jānis
Project management is governed by project management methodologies, standards, and other regulatory requirements. This chapter proposes an approach for implementing and configuring project management information systems according to requirements defined by these methodologies. The approach uses a project management specification framework to describe project management methodologies in a standardized manner. This specification is used to automatically configure the project management information system by applying appropriate transformation mechanisms. Development of the standardized framework is based on analysis of typical project management concepts and processes and existing XML-based representations of project management. A demonstration example of a project management information system's configuration is provided.
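A minimal sketch of the transformation idea, assuming a toy XML specification: each phase and required artifact in the methodology description is mapped to a configuration record for the project management information system. The element names are invented for illustration and are not those of the cited specification framework.

```python
# Hypothetical sketch: reading a standardized project-management specification (XML)
# and transforming it into configuration entries for a PM information system.
import xml.etree.ElementTree as ET

SPEC = """
<methodology name="ExampleMethod">
  <phase name="Initiation"><artifact>Charter</artifact></phase>
  <phase name="Planning"><artifact>Schedule</artifact><artifact>Budget</artifact></phase>
</methodology>
"""

def transform(spec_xml: str) -> list:
    """Map each phase/artifact pair in the specification to a PMIS configuration record."""
    root = ET.fromstring(spec_xml)
    records = []
    for phase in root.findall("phase"):
        for artifact in phase.findall("artifact"):
            records.append({
                "methodology": root.get("name"),
                "phase": phase.get("name"),
                "required_artifact": artifact.text,
            })
    return records

if __name__ == "__main__":
    for rec in transform(SPEC):
        print(rec)
```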
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, A.G.
This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline.
Software control and system configuration management: A systems-wide approach
NASA Technical Reports Server (NTRS)
Petersen, K. L.; Flores, C., Jr.
1984-01-01
A comprehensive software control and system configuration management process for flight-crucial digital control systems of advanced aircraft has been developed and refined to ensure efficient flight system development and safe flight operations. Because of the highly complex interactions among the hardware, software, and system elements of state-of-the-art digital flight control system designs, a systems-wide approach to configuration control and management has been used. Specific procedures are implemented to govern discrepancy reporting and reconciliation, software and hardware change control, systems verification and validation testing, and formal documentation requirements. An active and knowledgeable configuration control board reviews and approves all flight system configuration modifications and revalidation tests. This flexible process has proved effective during the development and flight testing of several research aircraft and remotely piloted research vehicles with digital flight control systems that ranged from relatively simple to highly complex, integrated mechanizations.
Information management advanced development. Volume 1: Summary
NASA Technical Reports Server (NTRS)
Gerber, C. R.
1972-01-01
The information management systems designed for the modular space station are discussed. Subjects presented are: (1) communications terminal breadboard configuration, (2) digital data bus breadboard configuration, (3) data processing assembly definition, and (4) computer program (software) assembly definition.
NCCDS configuration management process improvement
NASA Technical Reports Server (NTRS)
Shay, Kathy
1993-01-01
By concentrating on defining and improving specific Configuration Management (CM) functions, processes, procedures, personnel selection/development, and tools, the section provided improved CM services to internal and external customers. Job performance within the section increased in both satisfaction and output. Participation in achieving major improvements has led to the delivery of consistent-quality CM products as well as significant decreases in every measured CM metric category.
Spent Nuclear Fuel Project Configuration Management Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reilly, M.A.
This document is a rewrite of draft "C", which was agreed to "in principle" by SNF Project level 2 managers on EDT 609835, dated March 1995 (not released). The implementation process philosophy was changed in keeping with the ongoing reengineering of the WHC Controlled Manuals to achieve configuration management within the SNF Project.
NASA Technical Reports Server (NTRS)
1974-01-01
The Earth Observatory Satellite (EOS) data management system (DMS) is discussed. The DMS is composed of several subsystems or system elements which have basic purposes and are connected together so that the DMS can support the EOS program by providing the following: (1) payload data acquisition and recording, (2) data processing and product generation, (3) spacecraft and processing management and control, and (4) data user services. The configuration and purposes of the primary or high-data rate system and the secondary or local user system are explained. Diagrams of the systems are provided to support the systems analysis.
Knowledge information management toolkit and method
Hempstead, Antoinette R.; Brown, Kenneth L.
2006-08-15
A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.
Managing computer-controlled operations
NASA Technical Reports Server (NTRS)
Plowden, J. B.
1985-01-01
A detailed discussion of Launch Processing System Ground Software Production is presented to establish the interrelationships of firing room resource utilization, configuration control, system build operations, and Shuttle data bank management. The production of a test configuration identifier is traced from requirement generation to program development. The challenge of the operational era is to implement fully automated utilities to interface with a resident system build requirements document to eliminate all manual intervention in the system build operations. Automatic update/processing of Shuttle data tapes will enhance operations during multi-flow processing.
7 Processes that Enable NASA Software Engineering Technologies: Value-Added Process Engineering
NASA Technical Reports Server (NTRS)
Housch, Helen; Godfrey, Sally
2011-01-01
The presentation reviews Agency process requirements and the purpose, benefits, and experiences of seven software engineering processes. The processes include: product integration, configuration management, verification, software assurance, measurement and analysis, requirements management, and planning and monitoring.
NASA Astrophysics Data System (ADS)
Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.
2017-12-01
In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in every aspect of support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellie, C.L.
This plan establishes the integrated management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford Site Technical Baseline.
Space Geodesy Project Information and Configuration Management Procedure
NASA Technical Reports Server (NTRS)
Merkowitz, Stephen M.
2016-01-01
This plan defines the Space Geodesy Project (SGP) policies, procedures, and requirements for Information and Configuration Management (CM). This procedure describes a process that is intended to ensure that all proposed and approved technical and programmatic baselines and changes to the SGP hardware, software, support systems, and equipment are documented.
Integrated Advanced Sounding Unit-A (AMSU-A). Configuration Management Plan
NASA Technical Reports Server (NTRS)
Cavanaugh, J.
1996-01-01
The purpose of this plan is to identify the baseline to be established during the development life cycle of the integrated AMSU-A, and to define the methods and procedures which Aerojet will follow in the implementation of configuration control for each established baseline. This plan also establishes the Configuration Management process to be used for the deliverable hardware, software, and firmware of the Integrated AMSU-A during development, design, fabrication, test, and delivery.
Aeropropulsion facilities configuration control: Procedures manual
NASA Technical Reports Server (NTRS)
Lavelle, James J.
1990-01-01
Lewis Research Center senior management directed that the aeropropulsion facilities be put under configuration control. A Configuration Management (CM) program was established by the Facilities Management Branch of the Aeropropulsion Facilities and Experiments Division. Under the CM program, a support service contractor was engaged to staff and implement the program. The Aeronautics Directorate has over 30 facilities at Lewis of various sizes and complexities. Under the program, a Facility Baseline List (FBL) was established for each facility, listing which systems and their documents were to be placed under configuration control. A Change Control System (CCS) was established requiring that any proposed changes to FBL systems or their documents were to be processed as per the CCS. Limited access control of the FBL master drawings was implemented and an audit system established to ensure all facility changes are properly processed. This procedures manual sets forth the policy and responsibilities to ensure all key documents constituting a facilities configuration are kept current, modified as needed, and verified to reflect any proposed change. This is the essence of the CM program.
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The performance of personnel in the augmentation and improvement of the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.
Configuration Management (CM) Support for KM Processes at NASA/Johnson Space Center (JSC)
NASA Technical Reports Server (NTRS)
Cioletti, Louis
2010-01-01
Collection and processing of information are critical aspects of every business activity, from raw data to information to an executable decision. Configuration Management (CM) supports KM practices through its automated business practices and its integrated operations within the organization. This presentation delivers an overview of JSC/Space Life Sciences Directorate (SLSD) and its methods to encourage innovation through collaboration and participation. Specifically, this presentation will illustrate how SLSD CM creates an embedded KM activity with an established IT platform to control and update baselines, requirements, documents, schedules, and budgets while tracking changes, essentially managing critical knowledge elements.
Configuration of management accounting information system for multi-stage manufacturing
NASA Astrophysics Data System (ADS)
Mkrtychev, S. V.; Ochepovsky, A. V.; Enik, O. A.
2018-05-01
The article presents an approach to configuration of a management accounting information system (MAIS) that provides automated calculations and the registration of normative production losses in multi-stage manufacturing. The use of MAIS with the proposed configuration at enterprises of the textile and woodworking industries made it possible to increase the accuracy of calculations for normative production losses and to organize accounting thereof with reference to individual stages of the technological process. Thus, high efficiency of multi-stage manufacturing control is achieved.
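A simple illustration of registering normative production losses stage by stage (not the cited MAIS design; the stages and loss rates are invented):

```python
# Illustrative calculation of per-stage normative losses in multi-stage manufacturing.
STAGES = [
    {"name": "sawing",    "normative_loss": 0.04},   # hypothetical loss rates
    {"name": "planing",   "normative_loss": 0.03},
    {"name": "finishing", "normative_loss": 0.02},
]

def register_losses(input_quantity: float, stages: list) -> list:
    """Return per-stage records of input, normative loss, and output quantity."""
    records, quantity = [], input_quantity
    for stage in stages:
        loss = quantity * stage["normative_loss"]
        records.append({"stage": stage["name"], "input": quantity,
                        "loss": loss, "output": quantity - loss})
        quantity -= loss
    return records

if __name__ == "__main__":
    for rec in register_losses(1000.0, STAGES):
        print(f"{rec['stage']:>10}: in={rec['input']:.1f} loss={rec['loss']:.1f} out={rec['output']:.1f}")
```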
Configuration Management Process Assessment Strategy
NASA Technical Reports Server (NTRS)
Henry, Thad
2014-01-01
Purpose: To propose a strategy for assessing the development and effectiveness of configuration management systems within Programs, Projects, and Design Activities performed by technical organizations and their supporting development contractors. Scope: Various entities' CM systems will be assessed depending on project scope (DDT&E), support services, and acquisition agreements. Approach: A model-based approach structured against the assessed organization's CM requirements, including best-practice maturity criteria. The model is tailored to the entity being assessed, depending on its CM system. The assessment approach provides objective feedback to Engineering and Project Management on the observed CM system maturity state versus the ideal state of the configuration management processes and outcomes (system). It identifies strengths and risks rather than audit "gotchas" (findings/observations). It is used recursively and iteratively throughout the program lifecycle at select points of need (typical assessment timing is post-PDR/post-CDR). Ideal-state criteria and maturity targets are reviewed with the assessed entity prior to an assessment (tailoring) and depend on the assessed phase of the CM system. The approach supports exit success criteria for Preliminary and Critical Design Reviews, and gives a comprehensive CM system assessment that ultimately supports configuration verification activities.
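As a toy illustration of the assessment idea, the sketch below compares observed maturity scores against tailored ideal-state targets and labels each CM element a strength or a risk. The elements, scale, and targets are hypothetical.

```python
# Hypothetical sketch of comparing an observed CM system against tailored
# ideal-state maturity targets and reporting strengths and risks per element.
IDEAL_TARGETS = {   # illustrative maturity targets (scale 1-5) tailored to program phase
    "configuration identification": 4,
    "change control": 4,
    "status accounting": 3,
    "audits": 3,
}

def assess(observed: dict, targets: dict = IDEAL_TARGETS) -> dict:
    """Label each CM element a strength (meets target) or a risk (below target)."""
    report = {}
    for element, target in targets.items():
        score = observed.get(element, 0)
        report[element] = {"observed": score, "target": target,
                           "finding": "strength" if score >= target else "risk"}
    return report

if __name__ == "__main__":
    observed = {"configuration identification": 4, "change control": 3,
                "status accounting": 3, "audits": 2}
    for element, result in assess(observed).items():
        print(f"{element}: {result}")
```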
System Oriented Runway Management: A Research Update
NASA Technical Reports Server (NTRS)
Lohr, Gary W.; Brown, Sherilyn A.; Stough, Harry P., III; Eisenhawer, Steve; Atkins, Stephen; Long, Dou
2011-01-01
The runway configuration used by an airport has significant implications with respect to its capacity and ability to effectively manage surface and airborne traffic. Aircraft operators rely on runway configuration information because it can significantly affect an airline's operations and planning of its resources. Current practices in runway management are limited by a relatively short time horizon for reliable weather information and little assistance from automation. Wind velocity is the primary consideration when selecting a runway configuration; however, when winds are below a defined threshold, discretion may be used to determine the configuration. Other considerations relevant to runway configuration selection include airport operator constraints, weather conditions (other than winds), traffic demand, user preferences, surface congestion, and navigational system outages. The future offers an increasingly complex landscape for the runway management process. Concepts and technologies that hold the potential for capacity and efficiency increases, for both operations on the airport surface and in terminal and en route airspace, are currently under investigation. Complementary advances in runway management are required if capacity and efficiency increases in those areas are to be realized. The System Oriented Runway Management (SORM) concept has been developed to address this critical part of the traffic flow process. The SORM concept was developed to address all aspects of runway management for airports of varying sizes and to accommodate a myriad of traffic mixes. SORM, to date, addresses the single-airport environment; however, the longer-term vision is to incorporate capabilities for multiple-airport (Metroplex) operations as well as to accommodate advances in capabilities resulting from ongoing research. This paper provides an update of research supporting the SORM concept, including the following: a concept overview, results of a TRCM simulation, single-airport and Metroplex modeling efforts, and a benefits assessment.
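A highly simplified sketch of the wind-driven part of runway configuration selection, assuming a headwind-maximizing rule and an invented discretion threshold; real selection also weighs demand, congestion, and the other factors listed above.

```python
# Simplified, hypothetical illustration of wind-based runway configuration selection.
import math

DISCRETION_THRESHOLD_KT = 5.0   # illustrative value, not an operational standard

def headwind(runway_heading_deg: float, wind_dir_deg: float, wind_speed_kt: float) -> float:
    """Component of the wind blowing down the runway toward the aircraft."""
    return wind_speed_kt * math.cos(math.radians(wind_dir_deg - runway_heading_deg))

def select_configuration(configs: dict, wind_dir_deg: float, wind_speed_kt: float):
    """Pick the configuration with the largest headwind; defer to discretion in light winds."""
    if wind_speed_kt < DISCRETION_THRESHOLD_KT:
        return None   # below threshold: other factors (demand, congestion) decide
    return max(configs, key=lambda name: headwind(configs[name], wind_dir_deg, wind_speed_kt))

if __name__ == "__main__":
    configs = {"north_flow": 360.0, "south_flow": 180.0}   # runway headings in degrees
    print(select_configuration(configs, wind_dir_deg=350.0, wind_speed_kt=12.0))  # north_flow
```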
Software Engineering Guidebook
NASA Technical Reports Server (NTRS)
Connell, John; Wenneson, Greg
1993-01-01
The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.
NASA Technical Reports Server (NTRS)
Keltner, D. J.
1975-01-01
The stowage list and hardware tracking system, a computer based information management system, used in support of the space shuttle orbiter stowage configuration and the Johnson Space Center hardware tracking is described. The input, processing, and output requirements that serve as a baseline for system development are defined.
The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony
2015-12-01
Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.
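A minimal sketch of such an acceptance-testing gate, assuming the test VMs are reachable over SSH and that a no-op Puppet agent run ("puppet agent --test --noop") exercises the proposed change; host names and the merge decision are illustrative, not the production Jenkins setup described in the paper.

```python
# Minimal sketch of an automated acceptance-testing gate for Puppet changes.
# Assumes SSH access to a hypothetical pool of test VMs; with --test (which enables
# detailed exit codes), codes 0 and 2 indicate a successful run.
import subprocess

TEST_VMS = ["rhev-test-01.example.org", "rhev-test-02.example.org"]  # hypothetical pool

def noop_run_ok(host: str) -> bool:
    """Run a no-op Puppet agent on one test VM and report whether it completed cleanly."""
    result = subprocess.run(
        ["ssh", host, "sudo", "puppet", "agent", "--test", "--noop"],
        capture_output=True, text=True,
    )
    return result.returncode in (0, 2)

def validate_change() -> bool:
    """Gate: every test VM must complete a clean no-op run before the change can be merged."""
    failures = [host for host in TEST_VMS if not noop_run_ok(host)]
    for host in failures:
        print(f"validation failed on {host}")
    return not failures

if __name__ == "__main__":
    print("change approved for merge" if validate_change() else "change rejected")
```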
Design and Data Management System
NASA Technical Reports Server (NTRS)
Messer, Elizabeth; Messer, Brad; Carter, Judy; Singletary, Todd; Albasini, Colby; Smith, Tammy
2007-01-01
The Design and Data Management System (DDMS) was developed to automate the NASA Engineering Order (EO) and Engineering Change Request (ECR) processes at the Propulsion Test Facilities at Stennis Space Center for efficient and effective Configuration Management (CM). Prior to the development of DDMS, the CM system was a manual, paper-based system that required an EO or ECR submitter to walk the changes through the acceptance process to obtain necessary approval signatures. This approval process could take up to two weeks, and was subject to a variety of human errors. The process also required that the CM office make copies and distribute them to the Configuration Control Board members for review prior to meetings. At any point, there was a potential for an error or loss of the change records, meaning the configuration of record was not accurate. The new Web-based DDMS eliminates unnecessary copies, reduces the time needed to distribute the paperwork, reduces the time needed to gain the necessary signatures, and prevents the variety of errors inherent in the previous manual system. After implementation of the DDMS, all EOs and ECRs can be automatically checked prior to submittal to ensure that the documentation is complete and accurate. Much of the configuration information can be documented in the DDMS through pull-down forms to ensure consistent entries by the engineers and technicians in the field. The software also can electronically route the documents through the signature process to obtain the necessary approvals needed for work authorization. The workflow of the system allows for backups and timestamps that determine the correct routing and completion of all required authorizations in a more timely manner, as well as assuring the quality and accuracy of the configuration documents.
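The electronic routing described above can be pictured as a simple ordered approval chain with timestamped signatures. The sketch below is a hypothetical illustration, not the DDMS implementation; the roles and change-request number are invented.

```python
# Hypothetical sketch of electronic signature routing for an EO/ECR, with
# timestamps recorded at each approval step.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ApprovalStep:
    role: str                      # e.g., originator, CM office, board chair
    approved_at: Optional[datetime] = None

@dataclass
class ChangeRequest:
    number: str
    route: List[ApprovalStep] = field(default_factory=list)

    def next_pending(self) -> Optional[ApprovalStep]:
        return next((s for s in self.route if s.approved_at is None), None)

    def approve(self, role: str) -> None:
        step = self.next_pending()
        if step is None or step.role != role:
            raise ValueError(f"{role} is not the next approver")
        step.approved_at = datetime.now(timezone.utc)   # timestamped signature

    @property
    def complete(self) -> bool:
        return self.next_pending() is None

if __name__ == "__main__":
    ecr = ChangeRequest("ECR-0001", [ApprovalStep("originator"),
                                     ApprovalStep("cm_office"),
                                     ApprovalStep("board_chair")])
    for role in ("originator", "cm_office", "board_chair"):
        ecr.approve(role)
    print(ecr.number, "fully approved:", ecr.complete)
```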
Intelligent Sensors for Integrated Systems Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John L.
2008-01-01
IEEE 1451 Smart Sensors contribute to a number of ISHM goals, including cost reduction achieved through: a) improved configuration management (TEDS); and b) plug-and-play re-configuration. Intelligent Sensors are an adaptation of Smart Sensors to include ISHM algorithms; this offers further benefits: a) sensor validation, b) confidence assessment of measurements, and c) distributed ISHM processing. Space-qualified intelligent sensors are possible, with considerations including: a) size, mass, and power constraints, and b) bus structure/protocol.
GSC configuration management plan
NASA Technical Reports Server (NTRS)
Withers, B. Edward
1990-01-01
The tools and methods used for the configuration management of the artifacts (including software and documentation) associated with the Guidance and Control Software (GCS) project are described. The GCS project is part of a software error studies research program. Three implementations of GCS are being produced in order to study the fundamental characteristics of the software failure process. The Code Management System (CMS) is used to track and retrieve versions of the documentation and software. Application of the CMS for this project is described and the numbering scheme is delineated for the versions of the project artifacts.
NASA Astrophysics Data System (ADS)
Menguy, Theotime
Because of its critical nature, the avionics industry is bound by numerous constraints, such as security standards and certifications, while also having to satisfy clients' desires for personalization. In this context, variability management is a very important issue for re-engineering projects of avionics software. In this thesis, we propose a new approach, based on formal concept analysis and the semantic web, to support variability management. The first goal of this research is to identify characteristic behaviors and interactions of configuration variables in a dynamically configured system. To identify such elements, we applied formal concept analysis at different levels of abstraction in the system and defined new metrics. We then built a classification of the configuration variables and their relations in order to enable quick identification of a variable's behavior in the system. This classification could help in finding a systematic approach to processing variables during a re-engineering operation, depending on their category. To gain a better understanding of the system, we also studied the code controls shared between configuration variables. A second objective of this research is to build a knowledge platform that gathers the results of all the analyses performed and stores any additional element relevant to the variability management context, for instance new results that help define a re-engineering process for each of the categories. To address this goal, we built a solution based on the semantic web, defining a new, very extensive ontology that enables building inferences related to the evolution processes. The approach presented here is, to the best of our knowledge, the first classification of configuration variables of a dynamically configured software system and an original use of documentation and variability management techniques using the semantic web in the aeronautic field. The analyses performed and the final results show that formal concept analysis is a way to identify specific properties and behaviors, and that the semantic web is a good solution for storing and exploring the results. However, the use of formal concept analysis with new Boolean relations, such as the link between configuration variables and files, and the definition of new inferences may be a way to draw better conclusions. Applying the same methodology to other systems would make it possible to validate the approach in other contexts.
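As a toy illustration of the formal-concept-analysis step, the sketch below derives formal concepts from an invented binary context relating configuration variables to the code elements they influence; the variable and file names are hypothetical, and the brute-force enumeration is only suitable for small contexts.

```python
# Hedged sketch of the core FCA step: from a binary context of configuration
# variables vs. the code elements they control, derive formal concepts
# (closed variable/element pairs). The context below is invented for illustration.
from itertools import combinations

CONTEXT = {   # variable -> set of code elements it influences (hypothetical)
    "VAR_NAV_MODE":  {"nav.c", "display.c"},
    "VAR_DISPLAY":   {"display.c"},
    "VAR_AUTOPILOT": {"nav.c", "autopilot.c"},
}

def formal_concepts(context: dict) -> list:
    """Enumerate (variables, common elements) pairs that are closed in both directions."""
    concepts = []
    variables = list(context)
    for r in range(1, len(variables) + 1):
        for group in combinations(variables, r):
            common = set.intersection(*(context[v] for v in group))
            if not common:
                continue
            # closure: keep only groups containing every variable sharing these elements
            closed = {v for v in variables if common <= context[v]}
            if closed == set(group):
                concepts.append((sorted(closed), sorted(common)))
    return concepts

if __name__ == "__main__":
    for extent, intent in formal_concepts(CONTEXT):
        print(extent, "->", intent)
```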
NASA Technical Reports Server (NTRS)
Cavanaugh, J.
1994-01-01
This plan describes methods and procedures Aerojet will follow in the implementation of configuration control for each established baseline. The plan is written in response to the GSFC EOS CM Plan 420-02-02, dated January 1990, and also meets the requirements specified in DOD-STD-480, DOD-D 1000B, MIL-STD-483A, and MIL-STD-490B. The plan establishes the configuration management process to be used for the deliverable hardware, software, and firmware of the EOS/AMSU-A during development, design, fabrication, test, and delivery. This revision includes minor updates to reflect Aerojet's CM policies.
Requirements management for Gemini Observatory: a small organization with big development projects
NASA Astrophysics Data System (ADS)
Close, Madeline; Serio, Andrew; Cordova, Martin; Hardie, Kayla
2016-08-01
Gemini Observatory is an astronomical observatory operating two premier 8m-class telescopes, one in each hemisphere. As an operational facility, a majority of Gemini's resources are spent on operations; however, the observatory undertakes major development projects as well. Current projects include new facility science instruments, an operational paradigm shift to full remote operations, and new operations tools for planning, configuration, and change control. Three years ago, Gemini determined that a specialized requirements management tool was needed. Over the next year, the Gemini Systems Engineering Group investigated several tools, selected one for a trial period, and configured it for use. Configuration activities included definition of systems engineering processes, development of a requirements framework, and assignment of project roles to tool roles. Test projects were implemented in the tool. At the conclusion of the trial, the group determined that Gemini could meet its requirements management needs without use of a specialized requirements management tool, and the group identified a number of lessons learned, which are described in the last major section of this paper. These lessons learned include how to conduct an organizational needs analysis prior to pursuing a tool; caveats concerning tool criteria and the selection process; the prerequisites and sequence of activities necessary to achieve an optimum configuration of the tool; the need for adequate staff resources and staff training; and a special note regarding organizations in transition and archiving of requirements.
Tank waste remediation system configuration management plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The configuration management program for the Tank Waste Remediation System (TWRS) Project Mission supports management of the project baseline by providing the mechanisms to identify, document, and control the functional and physical characteristics of the products. This document is one of the tools used to develop and control the mission and work. It is an integrated approach for control of technical, cost, schedule, and administrative information necessary to manage the configurations for the TWRS Project Mission. Configuration management focuses on five principal activities: configuration management system management, configuration identification, configuration status accounting, change control, and configuration management assessments. TWRS Project personnel must execute work in a controlled fashion. Work must be performed by verbatim use of authorized and released technical information and documentation. Application of configuration management will be consistently applied across all TWRS Project activities and assessed accordingly. The Project Hanford Management Contract (PHMC) configuration management requirements are prescribed in HNF-MP-013, Configuration Management Plan (FDH 1997a). This TWRS Configuration Management Plan (CMP) implements those requirements and supersedes the Tank Waste Remediation System Configuration Management Program Plan described in Vann, 1996. HNF-SD-WM-CM-014, Tank Waste Remediation System Configuration Management Implementation Plan (Vann, 1997) will be revised to implement the requirements of this plan. This plan provides the responsibilities, actions and tools necessary to implement the requirements as defined in the above referenced documents.
NASA Technical Reports Server (NTRS)
1976-01-01
System specifications to be used by the mission control center (MCC) for the shuttle orbital flight test (OFT) time frame were described. The three support systems discussed are the communication interface system (CIS), the data computation complex (DCC), and the display and control system (DCS), all of which may interfere with, and share processing facilities with, other applications processing supporting current MCC programs. The MCC shall provide centralized control of the space shuttle OFT from launch through orbital flight, entry, and landing until the Orbiter comes to a stop on the runway. This control shall include the functions of vehicle management in the area of hardware configuration (verification), flight planning, communication and instrumentation configuration management, trajectory, software and consumables, payloads management, flight safety, and verification of test conditions/environment.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important approach to product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine product configuration and supplier selection, and to express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
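A small sketch of the interval-number idea only (not the paper's NSGA-II formulation): uncertain costs are represented as intervals, added per component, and candidates compared by interval midpoint with width as a tie-breaker. All values are invented.

```python
# Hedged sketch of interval numbers for uncertain supplier cost.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    @property
    def mid(self) -> float:
        return 0.5 * (self.lo + self.hi)

    @property
    def width(self) -> float:
        return self.hi - self.lo

def prefer(a: Interval, b: Interval) -> Interval:
    """Prefer lower expected cost; break ties by smaller uncertainty (narrower interval)."""
    return min(a, b, key=lambda iv: (iv.mid, iv.width))

if __name__ == "__main__":
    supplier_a = Interval(95.0, 120.0) + Interval(8.0, 12.0)    # part cost + shipping
    supplier_b = Interval(100.0, 115.0) + Interval(9.0, 10.0)
    print(prefer(supplier_a, supplier_b))
```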
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROOT, R.W.
1999-05-18
This guide provides the Tank Waste Remediation System Privatization Infrastructure Program management with processes and requirements to appropriately control information and documents in accordance with the Tank Waste Remediation System Configuration Management Plan (Vann 1998b). This includes documents and information created by the program, as well as non-program generated materials submitted to the project. It provides appropriate approval/control, distribution and filing systems.
ERIC Educational Resources Information Center
Selwyn, Neil
2011-01-01
Schools have long made use of digital technologies to support the co-ordination of management and administrative processes--not least "management information systems", "virtual learning environments" and other "institutional technologies". The last five years have seen the convergence of these technologies into…
Formian 2 and a Formian Function for Processing Polyhedric Configurations
NASA Technical Reports Server (NTRS)
Nooshin, H.; Disney, P. L.; Champion, O. C.
1996-01-01
The work began in October 1994 with the following objectives: (1) to produce an improved version of the programming language Formian; and (2) to create a means for computer-aided handling of polyhedric configurations, including geodesic forms of all kinds. A new version of Formian, referred to as Formian 2, is being implemented to operate in the Windows 95 environment. It is an ideal tool for configuration management in a convenient and user-friendly manner. The second objective was achieved by creating a standard Formian function that allows convenient handling of all types of polyhedric configurations. In particular, the focus of attention is on polyhedric configurations that are of importance in the architectural and structural engineering fields. The natural medium for processing of polyhedric configurations is a programming language that incorporates the concepts of 'formex algebra'. Formian is such a programming language, in which the processing of polyhedric configurations can be carried out using the standard elements of the language. A description of this function is included in a chapter for a book entitled 'Beyond the Cube: The Architecture of Space Frames and Polyhedra'. A copy of this chapter is appended.
CM Process Improvement and the International Space Station Program (ISSP)
NASA Technical Reports Server (NTRS)
Stephenson, Ginny
2007-01-01
This viewgraph presentation reviews the Configuration Management (CM) process improvements planned and undertaken for the International Space Station Program (ISSP). It reviews the 2004 findings and recommendations and the progress towards their implementation.
OSD CALS Architecture Master Plan Study. Concept Paper. Configuration Management. Volume 28
DOT National Transportation Integrated Search
1989-10-01
The mission of CALS is to enhance operational readiness of DoD weapon systems through application of information technology to the management of technical information. CALS will automate the current paper-intensive processes involved in weapon system...
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Li Da; Fei, Xiang; Jiang, Lihong; Cai, Hongming; Wang, Shuai
2017-08-01
Facing rapidly changing business environments, implementation of flexible business processes is crucial, but difficult, especially in data-intensive application areas. This study aims to provide scalable and easily accessible information resources to leverage business process management. In this article, with a resource-oriented approach, enterprise data resources are represented as data-centric Web services, grouped on demand of business requirements, and configured dynamically to adapt to changing business processes. First, a configurable architecture, CIRPA, involving an information resource pool is proposed to act as a scalable and dynamic platform to virtualise enterprise information resources as data-centric Web services. By exposing data-centric resources as REST services in larger granularities, tenant-isolated information resources can be accessed during business process execution. Second, a dynamic information resource pool is designed to fulfil configurable and on-demand data access in business process execution. CIRPA also isolates transaction data from business processes while supporting diverse business process composition. Finally, a case study of using our method in a logistics application shows that CIRPA provides enhanced performance both in static service encapsulation and in dynamic service execution in a cloud computing environment.
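A minimal sketch of exposing a data-centric resource as a coarse-grained REST service, in the spirit of the approach above; it uses Flask for brevity, and the resource name and fields are invented rather than taken from CIRPA.

```python
# Minimal sketch of a data-centric REST service over an illustrative resource pool.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a tenant-isolated information resource pool.
SHIPMENTS = {
    "S-1001": {"status": "in transit", "carrier": "ACME", "items": 12},
    "S-1002": {"status": "delivered", "carrier": "ACME", "items": 3},
}

@app.route("/shipments")
def list_shipments():
    """Coarse-grained read of the whole resource group."""
    return jsonify(SHIPMENTS)

@app.route("/shipments/<shipment_id>")
def get_shipment(shipment_id):
    """Individual resource access for use within a business-process step."""
    shipment = SHIPMENTS.get(shipment_id)
    return (jsonify(shipment), 200) if shipment else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(port=8080)
```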
Integrated Modeling Environment
NASA Technical Reports Server (NTRS)
Mosier, Gary; Stone, Paul; Holtery, Christopher
2006-01-01
The Integrated Modeling Environment (IME) is a software system that establishes a centralized Web-based interface for integrating people (who may be geographically dispersed), processes, and data involved in a common engineering project. The IME includes software tools for life-cycle management, configuration management, visualization, and collaboration.
Space Shuttle aerothermodynamic data report, phase C
NASA Technical Reports Server (NTRS)
1985-01-01
Space shuttle aerothermodynamic data, collected from a continuing series of wind tunnel tests, are permanently stored with the Data Management Services (DMS) system. Information pertaining to current baseline configuration definition is also stored. Documentation of DMS processed data, arranged sequentially and by space shuttle configuration, is included. An up-to-date record of all applicable aerothermodynamic data collected, processed, or summarized during the space shuttle program is provided. Tables are designed to provide survey information to the various space shuttle managerial and technical levels.
Aerothermodynamic data base. Data file contents report, phase C
NASA Technical Reports Server (NTRS)
Lutz, G. R.
1983-01-01
Space shuttle aerothermodynamic data, collected from a continuing series of wind tunnel tests, are permanently stored with the Data Management Services (DMS) system. Information pertaining to current baseline configuration definition is also stored. Documentation of DMS processed data arranged sequentially and by space shuttle configuration is listed to provide an up-to-date record of all applicable aerothermodynamic data collected, processed, or summarized during the space shuttle program. Tables provide survey information to the various space shuttle managerial and technical levels.
NASA Technical Reports Server (NTRS)
1984-01-01
Space shuttle aerothermodynamic data, collected from a continuing series of wind tunnel tests, are permanently stored with the Data Management Services (DMS) system. Information pertaining to current baseline configuration definition is also stored. A list of documentation of DMS processed data arranged sequentially and by space shuttle configuration is presented. The listing provides an up to date record of all applicable aerothermodynamic data collected, processed, or summarized during the space shuttle program. Tables are designed to provide survey information to the various space shuttle managerial and technical levels.
Tank waste remediation system configuration management implementation plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
1998-03-31
The Tank Waste Remediation System (TWRS) Configuration Management Implementation Plan describes the actions that will be taken by Project Hanford Management Contract Team to implement the TWRS Configuration Management program defined in HNF 1900, TWRS Configuration Management Plan. Over the next 25 years, the TWRS Project will transition from a safe storage mission to an aggressive retrieval, storage, and disposal mission in which substantial Engineering, Construction, and Operations activities must be performed. This mission, as defined, will require a consolidated configuration management approach to engineering, design, construction, as-building, and operating in accordance with the technical baselines that emerge from the life cycles. This Configuration Management Implementation Plan addresses the actions that will be taken to strengthen the TWRS Configuration Management program.
Configuration Management, Capacity Planning Decision Support, Modeling and Simulation
1988-12-01
flow includes both top-down and bottom-up requirements. The flow also includes hardware, software and transfer acquisition, installation, operation ... management and upgrade as required. Satisfaction of a user's needs and requirements is a difficult and detailed process. The key assumptions at this
Russom, Diana; Ahmed, Amira; Gonzalez, Nancy; Alvarnas, Joseph; DiGiusto, David
2012-01-01
Regulatory requirements for the manufacturing of cell products for clinical investigation require a significant level of record-keeping, starting early in process development and continuing through to the execution and requisite follow-up of patients on clinical trials. Central to record-keeping is the management of documentation related to patients, raw materials, processes, assays and facilities. To support these requirements, we evaluated several laboratory information management systems (LIMS), including their cost, flexibility, regulatory compliance, ongoing programming requirements and ability to integrate with laboratory equipment. After selecting a system, we performed a pilot study to develop a user-configurable LIMS for our laboratory in support of our pre-clinical and clinical cell-production activities. We report here on the design and utilization of this system to manage accrual with a healthy blood-donor protocol, as well as manufacturing operations for the production of a master cell bank and several patient-specific stem cell products. The system was used successfully to manage blood donor eligibility, recruiting, appointments, billing and serology, and to provide annual accrual reports. Quality management reporting features of the system were used to capture, report and investigate process and equipment deviations that occurred during the production of a master cell bank and patient products. Overall the system has served to support the compliance requirements of process development and phase I/II clinical trial activities for our laboratory and can be easily modified to meet the needs of similar laboratories.
Integrating policy-based management and SLA performance monitoring
NASA Astrophysics Data System (ADS)
Liu, Tzong-Jye; Lin, Chin-Yi; Chang, Shu-Hsin; Yen, Meng-Tzu
2001-10-01
A policy-based management system provides the configuration capability for system administrators to focus on the requirements of customers. The service level agreement performance monitoring mechanism helps system administrators verify the correctness of policies. However, it is difficult for a device to process policies directly because policies are management-level concepts. This paper proposes a mechanism to decompose a policy into rules that can be efficiently processed by a device. Thus, the device may process the rules and collect the performance statistics information efficiently, and the policy-based management system may collect this performance statistics information and report service-level agreement performance monitoring information to the system administrator. The proposed policy-based management system achieves both the policy configuration and service-level agreement performance monitoring requirements. A policy consists of a condition part and an action part. The condition part is a Boolean expression of a source host IP group, a destination host IP group, etc. The action part contains the parameters of services. We say that an address group is compact if it consists only of a range of IP addresses that can be denoted by a pair of an IP address and a corresponding IP mask. If the condition part of a policy consists only of compact address groups, we say that the policy is a rule. Since a device can efficiently process a compact address and a system administrator prefers to define a range of IP addresses, the policy-based management system has to translate policies into rules and bridge the gaps between policies and rules. The proposed policy-based management system builds the relationships between VPNs and policies, and between policies and rules. Since the system administrator wants to monitor the system performance information of VPNs and policies, the proposed policy-based management system downloads the relationships among VPNs, policies, and rules to the SNMP agents. The SNMP agents build the management information base (MIB) of all VPNs, policies, and rules according to the relationships obtained from the management server. Thus, the proposed policy-based management system may get all performance monitoring information of VPNs and policies from the agents. The proposed policy-based manager achieves two goals: a) provide a management environment for system administrators to configure their network considering only the policy requirement issues, and b) let the device only have to process packets and then collect the required performance information. These two things make the proposed management system satisfy both the user and device requirements.
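The decomposition of a policy's address range into compact address groups can be done with standard CIDR summarization. The sketch below is illustrative (the policy fields are invented): an arbitrary first-to-last IP range is split into address/mask blocks, yielding one device-processable rule per block.

```python
# Sketch of decomposing a policy's source-address range into "compact" address
# groups (address/mask pairs) that a device can process as individual rules.
import ipaddress

def policy_to_rules(policy: dict) -> list:
    """Split an arbitrary IP range into address/mask blocks, one rule per block."""
    first = ipaddress.ip_address(policy["src_first"])
    last = ipaddress.ip_address(policy["src_last"])
    rules = []
    for network in ipaddress.summarize_address_range(first, last):
        rules.append({"src": str(network), "action": policy["action"]})
    return rules

if __name__ == "__main__":
    policy = {"src_first": "192.0.2.10", "src_last": "192.0.2.25", "action": "premium-queue"}
    for rule in policy_to_rules(policy):
        print(rule)   # several compact blocks cover the non-aligned range
```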
Understanding managerial behaviour during initial steps of a clinical information system adoption
2011-01-01
Background While the study of the information technology (IT) implementation process and its outcomes has received considerable attention, the examination of pre-adoption and pre-implementation stages of configurable IT uptake appears largely under-investigated. This paper explores managerial behaviour during the periods prior to the effective implementation of a clinical information system (CIS) by two Canadian university multi-hospital centers. Methods Adopting a structurationist theoretical stance and a case study research design, the processes by which CIS managers' patterns of discourse contribute to the configuration of the new technology in their respective organizational contexts were longitudinally examined over 33 months. Results Although managers seemed to be aware of the risks and organizational impact of the adoption of a new clinical information system, their decisions and actions over the periods examined appeared rather to be driven by financial constraints and power struggles between different groups involved in the process. Furthermore, they largely emphasized technological aspects of the implementation, with organizational dimensions being put aside. In view of these results, the notion of 'rhetorical ambivalence' is proposed. Results are further discussed in relation to the significance of initial decisions and actions for the subsequent implementation phases of the technology being configured. Conclusions Theoretically and empirically grounded, the paper contributes to the underdeveloped body of literature on information system pre-implementation processes by revealing the crucial role played by managers during the initial phases of a CIS adoption. PMID:21682885
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
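As an illustration of the data-driven idea described above (not the actual Orion GN&C software or database schema; activity names, events, and parameters are invented), a sequencer can read its activity list, transition events, and parameter reconfigurations from data so that sequences change without recompiling the software:

```python
# Hypothetical sequence data, as it might be exported by a desktop database tool:
# each activity names the automated transition condition and the parameter
# reconfiguration to apply, so sequences change without recompiling the software.
SEQUENCE = [
    {"activity": "coast",     "next_when": "tig_minus_60s", "params": {"rcs_mode": "auto"}},
    {"activity": "burn_prep", "next_when": "tig_minus_10s", "params": {"main_engine": "armed"}},
    {"activity": "burn",      "next_when": "delta_v_met",   "params": {"main_engine": "firing"}},
]

def run_sequence(sequence, events, apply_params):
    """Step through activities, applying parameter reconfigurations and
    advancing only when the configured transition event is observed."""
    for step in sequence:
        apply_params(step["params"])
        print(f"activity={step['activity']} waiting for {step['next_when']}")
        events.wait_for(step["next_when"])   # blocks until the event fires

class FakeEvents:
    """Stand-in event source so the sketch runs on its own."""
    def wait_for(self, name):
        print(f"  event {name} observed")

run_sequence(SEQUENCE, FakeEvents(), apply_params=lambda p: print(f"  reconfigure: {p}"))
```

Editing the SEQUENCE data changes the automated behavior while the sequencing logic itself stays fixed, which is the essence of the data-driven approach described in the abstract.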
TWRS Configuration management program plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The TWRS Configuration Management Program Plan (CMPP) integrates technical and administrative controls to establish and maintain consistency among requirements, product configuration, and product information for TWRS products during all life cycle phases. This CMPP will be used by TWRS management and configuration management personnel to establish and manage the technical and integrated baselines and to control and status changes to those baselines.
Plan for the Characterization of HIRF Effects on a Fault-Tolerant Computer Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.
2008-01-01
This report presents the plan for the characterization of the effects of high intensity radiated fields on a prototype implementation of a fault-tolerant data communication system. Various configurations of the communication system will be tested. The prototype system is implemented using off-the-shelf devices. The system will be tested in a closed-loop configuration with extensive real-time monitoring. This test is intended to generate data suitable for the design of avionics health management systems, as well as redundancy management mechanisms and policies for robust distributed processing architectures.
Requirements management and control
NASA Technical Reports Server (NTRS)
Robbins, Red
1993-01-01
The systems engineering process for thermal nuclear propulsion requirements and configuration definition is described in outline and graphic form. Functional analysis and mission attributes for a Mars exploration mission are also addressed.
Configuration Management File Manager Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
1997-01-01
One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.
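The abstract does not give the API itself, so the following is only a hedged sketch of what a generic configuration management interface with check-in, check-out, and history operations might look like; the method names and in-memory store are assumptions for illustration and could be re-implemented against any company-specific repository.

```python
import hashlib
import time

class ConfigurationManagementAPI:
    """Minimal sketch of a generic configuration management API that a file
    manager could adapt to different repository back ends (names hypothetical)."""

    def __init__(self):
        self._store = {}   # name -> list of version records

    def check_in(self, name, content, note=""):
        checksum = hashlib.sha256(content.encode()).hexdigest()
        versions = self._store.setdefault(name, [])
        versions.append({"version": len(versions) + 1, "checksum": checksum,
                         "note": note, "time": time.time(), "content": content})
        return versions[-1]["version"]

    def check_out(self, name, version=None):
        versions = self._store[name]
        entry = versions[-1] if version is None else versions[version - 1]
        return entry["content"]

    def history(self, name):
        return [(v["version"], v["note"]) for v in self._store.get(name, [])]

cm = ConfigurationManagementAPI()
cm.check_in("engine.sim", "fan_stages=2", note="baseline simulation input")
cm.check_in("engine.sim", "fan_stages=3", note="trade study")
print(cm.history("engine.sim"), cm.check_out("engine.sim", version=1))
```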
Seven Processes that Enable NASA Software Engineering Technologies
NASA Technical Reports Server (NTRS)
Housch, Helen; Godfrey, Sally
2011-01-01
This slide presentation reviews seven processes that NASA uses to ensure that software is developed, acquired and maintained as specified in the NPR 7150.2A requirement. The requirement is to ensure that all software be appraised for the Capability Maturity Model Integration (CMMI). The enumerated processes are: (7) Product Integration, (6) Configuration Management, (5) Verification, (4) Software Assurance, (3) Measurement and Analysis, (2) Requirements Management and (1) Planning & Monitoring. Each of these processes is described, along with the group(s) responsible for it.
Hybrid indirect/direct contactor for thermal management of counter-current processes
Hornbostel, Marc D.; Krishnan, Gopala N.; Sanjurjo, Angel
2018-03-20
The invention relates to contactors suitable for use, for example, in manufacturing and chemical refinement processes. In an aspect is a hybrid indirect/direct contactor for thermal management of counter-current processes, the contactor comprising a vertical reactor column, an array of interconnected heat transfer tubes within the reactor column, and a plurality of stream path diverters, wherein the tubes and diverters are configured to block all straight-line paths from the top to bottom ends of the reactor column.
NASA Technical Reports Server (NTRS)
1976-01-01
This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers into a low-cost, maintainable configuration.
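A common way to manage redundancy with skewed single-axis sensors is to form a least-squares rate estimate and test the measurement residual, then isolate the sensor whose removal restores consistency. The sketch below illustrates that generic idea with an invented six-gyro geometry and threshold; it is not the mechanization or geometry selected in the study.

```python
import numpy as np

# Six single-axis gyros on skewed axes; rows of H are the sensing directions
# (geometry invented for illustration, then normalized to unit vectors).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
H = H / np.linalg.norm(H, axis=1, keepdims=True)

def detect_failed_axis(measurements, threshold=0.05):
    """Least-squares rate estimate plus a parity-style residual test:
    drop each sensor in turn and keep the subset that is self-consistent."""
    m = np.asarray(measurements, dtype=float)
    omega, *_ = np.linalg.lstsq(H, m, rcond=None)
    if np.linalg.norm(m - H @ omega) < threshold:
        return None, omega                      # all sensors agree
    for i in range(len(m)):                     # isolate the inconsistent sensor
        keep = [j for j in range(len(m)) if j != i]
        sub_omega, *_ = np.linalg.lstsq(H[keep], m[keep], rcond=None)
        if np.linalg.norm(m[keep] - H[keep] @ sub_omega) < threshold:
            return i, sub_omega
    return "undetermined", omega

true_rate = np.array([0.1, -0.2, 0.05])
meas = H @ true_rate
meas[1] += 0.5                                   # inject a failure on gyro 1
print(detect_failed_axis(meas))
```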
NASA Technical Reports Server (NTRS)
Elden, N. C.; Winkler, H. E.; Price, D. F.; Reysa, R. P.
1983-01-01
Water recovery subsystems are being tested at the NASA Lyndon B. Johnson Space Center for Space Station use to process waste water generated from urine and wash water collection facilities. These subsystems are being integrated into a water management system that will incorporate wash water and urine processing through the use of hyperfiltration and vapor compression distillation subsystems. Other hardware in the water management system includes a whole body shower, a clothes washing facility, a urine collection and pretreatment unit, a recovered water post-treatment system, and a water quality monitor. This paper describes the integrated test configuration, pertinent performance data, and feasibility and design compatibility conclusions of the integrated water management system.
Microprocessors: Laboratory Simulation of Industrial Control Applications.
ERIC Educational Resources Information Center
Gedeon, David V.
1981-01-01
Describes a course to make technical managers more aware of computer technology and how data loggers, programmable controllers, and larger computer systems interact in a hierarchical configuration of manufacturing process control. (SK)
Comparison of DOE and NIRMA approaches to configuration management programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, E.Y.; Kulzick, K.C.
One of the major management programs used for commercial, laboratory, and defense nuclear facilities is configuration management. The safe and efficient operation of a nuclear facility requires constant vigilance in maintaining the facility's design basis with its as-built condition. Numerous events have occurred that can be attributed (either directly or indirectly) to the extent to which configuration management principles have been applied. The nuclear industry, as a whole, has been addressing this management philosophy with efforts taken on by its constituent professional organizations. The purpose of this paper is to compare and contrast the implementation plans for enhancing a configuration management program as outlined in the U.S. Department of Energy's (DOE's) DOE-STD-1073-93, "Guide for Operational Configuration Management Program," with the following guidelines developed by the Nuclear Information and Records Management Association (NIRMA): (1) PP02-1994, "Position Paper on Configuration Management"; (2) PP03-1992, "Position Paper for Implementing a Configuration Management Enhancement Program for a Nuclear Facility"; and (3) PP04-1994, "Position Paper for Configuration Management Information Systems."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, T.
The configuration management architecture presented in this Configuration Management Plan is based on the functional model established by DOE-STD-1073-93, "Guide for Operational Configuration Management Program." The DOE Standard defines the configuration management program by the five basic program elements of "program management," "design requirements," "document control," "change control," and "assessments," and the two adjunct recovery programs of "design reconstitution" and "material condition and aging management." The CM model of five elements and two adjunct programs strengthens the necessary technical and administrative control to establish and maintain a consistent technical relationship among the requirements, physical configuration, and documentation. Although the DOE Standard was originally developed for the operational phase of nuclear facilities, this plan has the flexibility to be adapted and applied to all life-cycle phases of both nuclear and non-nuclear facilities. The configuration management criteria presented in this plan endorse the DOE Standard and have been tailored specifically to address the technical relationship of requirements, physical configuration, and documentation during the full life cycle of the Waste Tank Farms and 242-A Evaporator of the Tank Waste Remediation System.
NASA Technical Reports Server (NTRS)
Gerber, C. R.
1972-01-01
The computation and logical functions which are performed by the data processing assembly of the modular space station are defined. The subjects discussed are: (1) requirements analysis, (2) baseline data processing assembly configuration, (3) information flow study, (4) throughput simulation, (5) redundancy study, (6) memory studies, and (7) design requirements specification.
Actuator digital interface unit (AIU). [control units for space shuttle data system]
NASA Technical Reports Server (NTRS)
1973-01-01
Alternate versions of the actuator interface unit are presented. One alternate is a dual-failure immune configuration which feeds a look-and-switch dual-failure immune hydraulic system. The other alternate is a single-failure immune configuration which feeds a majority voting hydraulic system. Both systems communicate with the data bus through data terminals dedicated to each user subsystem. Both operational control data and configuration control information are processed into and out of the subsystem via the data terminal, which makes the actuator interface subsystem self-managing within its failure immunity capability.
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
Optimization process in helicopter design
NASA Technical Reports Server (NTRS)
Logan, A. H.; Banerjee, D.
1984-01-01
In optimizing a helicopter configuration, Hughes Helicopters uses a program called Computer Aided Sizing of Helicopters (CASH), written and updated over the past ten years, and used as an important part of the preliminary design process of the AH-64. First, measures of effectiveness must be supplied to define the mission characteristics of the helicopter to be designed. Then CASH allows the designer to rapidly and automatically develop the basic size of the helicopter (or other rotorcraft) for the given mission. This enables the designer and management to assess the various tradeoffs and to quickly determine the optimum configuration.
Electromagnetic spectrum management system
Seastrand, Douglas R.
2017-01-31
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals such that the wireless countermeasure signals are responsive to the received wireless signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
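A toy numerical illustration of the subtraction step described in the abstract: the known-source signal data is removed from the processed received data, and the residual (unknown-source) spectrum indicates where a countermeasure might be centered. Frequencies, amplitudes, and the peak-picking step are invented for the example and are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000                      # sample rate in Hz (illustrative)
t = np.arange(4096) / fs

# Received signal = known friendly emitters + an unknown (e.g. hostile) signal + noise.
known = np.sin(2 * np.pi * 150_000 * t) + 0.5 * np.sin(2 * np.pi * 320_000 * t)
unknown = 0.3 * np.sin(2 * np.pi * 210_000 * t)
received = known + unknown + 0.01 * rng.standard_normal(t.size)

# Spectrum-control step: subtract the known-source signal data from the
# processed received data, leaving an estimate of the unknown source.
unknown_estimate = received - known

# The residual spectrum drives the countermeasure: take the strongest leftover bin.
spectrum = np.abs(np.fft.rfft(unknown_estimate))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
target_hz = freqs[np.argmax(spectrum)]
print(f"countermeasure centered near {target_hz:.0f} Hz")
```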
Intelligent Resource Management for Local Area Networks: Approach and Evolution
NASA Technical Reports Server (NTRS)
Meike, Roger
1988-01-01
The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real time process scheduling for realistic loads, and utilizing the IEEE 802.4 token passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaus, P.S.
This Configuration Management Implementation Plan (CMIP) was developed to assist in managing systems, structures, and components (SSCs), to facilitate the effective control and statusing of changes to SSCs, and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Privatization Infrastructure will take in implementing a configuration management program, to identify the Program's products that need configuration management control, to determine the rigor of control, and to identify the mechanisms for that control.
Caruso, Ronald D
2004-01-01
Proper configuration of software security settings and proper file management are necessary and important elements of safe computer use. Unfortunately, the configuration of software security options is often not user friendly. Safe file management requires the use of several utilities, most of which are already installed on the computer or available as freeware. Among these file operations are setting passwords, defragmentation, deletion, wiping, removal of personal information, and encryption. For example, Digital Imaging and Communications in Medicine medical images need to be anonymized, or "scrubbed," to remove patient identifying information in the header section prior to their use in a public educational or research environment. The choices made with respect to computer security may affect the convenience of the computing process. Ultimately, the degree of inconvenience accepted will depend on the sensitivity of the files and communications to be protected and the tolerance of the user. Copyright RSNA, 2004
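As one concrete example of the header "scrubbing" mentioned above, the sketch below blanks a few identifying DICOM elements before an image is shared. It assumes the third-party pydicom package and example file names, neither of which is named in the article, and it is far from a complete anonymization profile.

```python
# A minimal sketch of header "scrubbing" before sharing a DICOM image for
# teaching or research.  Assumes the third-party pydicom package (not named in
# the article) and covers only a few identifying elements for illustration.
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName"]

def scrub(in_path, out_path):
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if hasattr(ds, tag):
            setattr(ds, tag, "")          # blank the identifying header element
    ds.remove_private_tags()              # drop vendor-specific private elements
    ds.save_as(out_path)

# Hypothetical file names for the example.
scrub("ct_slice.dcm", "ct_slice_anon.dcm")
```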
[Requirements for the successful installation of a data management system].
Benson, M; Junger, A; Quinzio, L; Hempelmann, G
2002-08-01
Due to increasing requirements on medical documentation, especially with reference to the German social law provisions on quality management and the introduction of a new billing system (DRGs), an increasing number of departments are considering the implementation of a patient data management system (PDMS). The installation should be professionally planned as a project in order to ensure a successful installation. The following aspects are essential: composition of the project group, definition of goals, finance, networking, space considerations, hardware, software, configuration, education and support. Project and finance planning must be prepared before beginning the project, and the project process must be constantly evaluated. In selecting the software, certain characteristics should be considered: use of standards, configurability, intercommunicability and modularity. Our experience has taught us that vaguely defined goals, insufficient project planning and the existing management culture are responsible for the failure of PDMS installations. The software used tends to play a less important role.
Waste receiving and processing plant control system; system design description
DOE Office of Scientific and Technical Information (OSTI.GOV)
LANE, M.P.
1999-02-24
The Plant Control System (PCS) is a heterogeneous computer system composed of numerous sub-systems. The PCS represents every major computer system that is used to support operation of the Waste Receiving and Processing (WRAP) facility. This document, the System Design Description (PCS SDD), includes several chapters and appendices. Each chapter is devoted to a separate PCS sub-system. Typically, each chapter includes an overview description of the system, a list of associated documents related to operation of that system, and a detailed description of relevant system features. Each appendix provides configuration information for selected PCS sub-systems. The appendices are designed as separate sections to assist in maintaining this document due to frequent changes in system configurations. This document is intended to serve as the primary reference for configuration of PCS computer systems. The use of this document is further described in the WRAP System Configuration Management Plan, WMH-350, Section 4.1.
2017-02-15
Charles Spern, at right, project manager on the Engineering Services Contract (ESC), and Glenn Washington, ESC quality assurance specialist, perform final inspections of the Veggie Series 1 plant experiment inside a laboratory in the Space Station Processing Facility at NASA's Kennedy Space Center in Florida. At far left is Dena Richmond, ESC configuration management. The Series 1 experiment is being readied for flight aboard Orbital ATK's Cygnus module on its seventh (OA-7) Commercial Resupply Services mission to the International Space Station. The Veggie system is on the space station.
[Lean thinking and brain-dead patient assistance in the organ donation process].
Pestana, Aline Lima; dos Santos, José Luís Guedes; Erdmann, Rolf Hermann; da Silva, Elza Lima; Erdmann, Alacoque Lorenzini
2013-02-01
Organ donation is a complex process that challenges health system professionals and managers. This study aimed to introduce a theoretical model to organize brain-dead patient assistance and the organ donation process guided by the main lean thinking ideas, which enable production improvement through planning cycles and the development of a proper environment for successful implementation. Lean thinking may make the process of organ donation more effective and efficient and may contribute to improvements in information systematization and professional qualifications for excellence of assistance. The model is configured as a reference that is available for validation and implementation by health and nursing professionals and managers in the management of potential organ donors after brain death assistance and subsequent transplantation demands.
Managing crises through organisational development: a conceptual framework.
Lalonde, Carole
2011-04-01
This paper presents a synthesis of the guiding principles in crisis management in accordance with the four configurational imperatives (strategy, structure, leadership and environment) defined by Miller (1987) and outlines interventions in organisational development (OD) that may contribute to their achievement. The aim is to build a conceptual framework at the intersection of these two fields that could help to strengthen the resilient capabilities of individuals, organisations and communities to face crises. This incursion into the field of OD--to generate more efficient configurations of practices in crisis management--seems particularly fruitful considering the system-wide application of OD, based on open-systems theory (Burke, 2008). Various interventions proposed by OD in terms of human processes, structural designs and human resource management, as well as strategy, may help leaders, members of organisations and civil society apply effectively, and in a more sustainable way, the crisis management guiding principles defined by researchers. © 2011 The Author(s). Disasters © Overseas Development Institute, 2011.
Team table: a framework and tool for continuous factory planning
NASA Astrophysics Data System (ADS)
Sihn, Wilfried; Bischoff, Juergen; von Briel, Ralf; Josten, Marcus
2000-10-01
Growing market turbulences and shorter product life cycles require a continuous adaptation of factory structures, resulting in a continuous factory planning process. Therefore, a new framework is developed which focuses on configuration and data management process integration. This enables an online system performance evaluation based on the continuous availability of current data. The use of this framework is especially helpful and will guarantee high cost and time savings when used in the early stages of planning, called the concept or rough planning phase. The new framework is supported by a planning round table as a tool for team-based configuration processes integrating the knowledge of all persons involved in planning processes. A case study conducted at a German company shows the advantages which can be achieved by implementing the new framework and methods.
Electromagnetic spectrum management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seastrand, Douglas R.
A system for transmitting a wireless countermeasure signal to disrupt third-party communications is disclosed that includes an antenna configured to receive wireless signals and transmit wireless countermeasure signals such that the wireless countermeasure signals are responsive to the received wireless signals. A receiver processes the received wireless signals to create processed received signal data, while a spectrum control module subtracts known source signal data from the processed received signal data to generate unknown source signal data. The unknown source signal data is based on unknown wireless signals, such as enemy signals. A transmitter is configured to process the unknown source signal data to create countermeasure signals and transmit a wireless countermeasure signal over the first antenna or a second antenna to thereby interfere with the unknown wireless signals.
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.
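The platform's operating system is not described here in implementable detail, but the flavor of per-process resource constraints can be sketched with ordinary POSIX limits: a non-privileged application is launched only after CPU and address-space ceilings are applied. The limit values, the command, and the use of Python's resource and subprocess modules are illustrative assumptions, not the platform's actual mechanism.

```python
# Illustrative only: a generic POSIX way to launch a non-privileged application
# process under CPU and memory ceilings, in the spirit of the resource
# constraints the platform's operating system enforces (limits are made up).
import resource
import subprocess

def launch_constrained(cmd, cpu_seconds=5, mem_bytes=64 * 1024 * 1024):
    def apply_limits():
        # Applied in the child process just before exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.Popen(cmd, preexec_fn=apply_limits)

proc = launch_constrained(["python3", "-c", "print('payload application running')"])
proc.wait()
```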
Iteration and Prototyping in Creating Technical Specifications.
ERIC Educational Resources Information Center
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
Benefits Assessment for Single-Airport Tactical Runway Configuration Management Tool (TRCM)
NASA Technical Reports Server (NTRS)
Oseguera-Lohr, Rosa; Phojanamonogkolkij, Nipa; Lohr, Gary W.
2015-01-01
The System-Oriented Runway Management (SORM) concept was developed as part of the Airspace Systems Program (ASP) Concepts and Technology Development (CTD) Project, and is composed of two basic capabilities: Runway Configuration Management (RCM) and Combined Arrival/Departure Runway Scheduling (CADRS). RCM is the process of designating active runways, monitoring the active runway configuration for suitability given existing factors, and predicting future configuration changes; CADRS is the process of distributing arrivals and departures across active runways based on local airport and National Airspace System (NAS) goals. The central component in the SORM concept is a tool for taking into account all the various factors and producing a recommendation for what would be the optimal runway configuration, runway use strategy, and aircraft sequence, considering as many of the relevant factors required in making this type of decision, and user preferences, if feasible. Three separate tools were initially envisioned for this research area, corresponding to the time scale in which they would operate: Strategic RCM (SRCM), with a planning horizon on the order of several hours; Tactical RCM (TRCM), with a planning horizon on the order of 90 minutes; and CADRS, with a planning horizon on the order of 15-30 minutes [1]. Algorithm development was initiated in all three of these areas, but the most fully developed to date is the TRCM algorithm. Earlier studies took a high-level approach to benefits, estimating aggregate benefits across most of the major airports in the NAS for both RCM and CADRS [2]. Other studies estimated the benefit of RCM and CADRS using various methods of re-sequencing arrivals to reduce delays [3,4] or better balancing of arrival fixes [5,6]. Additional studies looked at different methods for performing the optimization involved in selecting the best Runway Configuration Plan (RCP) to use [7-10]. Most of these previous studies were high-level or generic in nature (not focusing on specific airports), and benefits were aggregated for the entire NAS, with relatively low-fidelity simulation of SORM functions and aircraft trajectories. For SORM research, a more detailed benefits assessment of RCM and CADRS for specific airports or metroplexes is needed.
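To make the kind of trade a runway-configuration tool weighs more tangible, here is a deliberately simplified scoring sketch: candidate configurations are screened by a crosswind limit and ranked by how much predicted demand they leave unmet. The configurations, capacities, wind data, and weighting are all invented, and this is not the TRCM algorithm itself.

```python
import math

# Invented candidate runway configurations and capacities (aircraft per hour).
CONFIGS = {
    "north_flow": {"runway_heading": 360, "arrival_capacity": 60, "departure_capacity": 54},
    "south_flow": {"runway_heading": 180, "arrival_capacity": 56, "departure_capacity": 60},
}

def crosswind(wind_dir, wind_speed, runway_heading):
    """Crosswind component of the reported wind on the given runway heading."""
    return abs(wind_speed * math.sin(math.radians(wind_dir - runway_heading)))

def best_configuration(wind_dir, wind_speed, arrivals, departures, max_crosswind=20):
    scored = []
    for name, cfg in CONFIGS.items():
        if crosswind(wind_dir, wind_speed, cfg["runway_heading"]) > max_crosswind:
            continue                              # configuration not usable in this wind
        unmet = max(0, arrivals - cfg["arrival_capacity"]) + \
                max(0, departures - cfg["departure_capacity"])
        scored.append((unmet, name))              # prefer the least unmet demand
    return min(scored)[1] if scored else None

print(best_configuration(wind_dir=190, wind_speed=15, arrivals=58, departures=57))
```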
Conceptual design study: Forest Fire Advanced System Technology (FFAST)
NASA Technical Reports Server (NTRS)
Nichols, J. D.; Warren, J. R.
1986-01-01
An integrated forest fire detection and mapping system that will be based upon technology available in the 1990s was defined. Uncertainties in emerging and advanced technologies related to the conceptual design were identified and recommended for inclusion as preferred system components. System component technologies identified for an end-to-end system include thermal infrared, linear array detectors, automatic georeferencing and signal processing, geosynchronous satellite communication links, and advanced data integration and display. Potential system configuration options were developed and examined for possible inclusion in the preferred system configuration. The preferred system configuration will provide increased performance and be cost effective compared with the system currently in use. Forest fire management user requirements and the system component emerging technologies were the basis for the system configuration design. A preferred system configuration was defined that warrants continued refinement and development; economic aspects of the current and preferred systems were examined, and preliminary cost estimates were provided for follow-on system prototype development.
Space shuttle configuration accounting functional design specification
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis is presented of the requirements for an on-line automated system which must be capable of tracking the status of requirements and engineering changes and of providing accurate and timely records. The functional design specification provides the definition, description, and character length of the required data elements and the interrelationship of data elements to adequately track, display, and report the status of active configuration changes. As changes to the space shuttle program levels II and III configuration are proposed, evaluated, and dispositioned, it is the function of the configuration management office to maintain records regarding changes to the baseline and to track and report the status of those changes. The configuration accounting system will consist of a combination of computers, computer terminals, software, and procedures, all of which are designed to store, retrieve, display, and process information required to track proposed and approved engineering changes to maintain baseline documentation of the space shuttle program levels II and III.
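A minimal sketch of the kind of data elements such a configuration accounting system would track for each engineering change is shown below; the field names, status values, and example change are illustrative assumptions rather than the specification's actual data dictionary.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class EngineeringChange:
    """Illustrative change record with the fields a status-accounting
    system might store, retrieve, display, and report on."""
    change_id: str                    # e.g. "ECP-1234" (hypothetical)
    baseline_document: str            # document affected at Level II/III
    description: str
    status: str = "proposed"          # proposed -> evaluated -> dispositioned
    history: List[str] = field(default_factory=list)

    def disposition(self, decision: str, when: date):
        self.status = decision
        self.history.append(f"{when.isoformat()}: dispositioned as {decision}")

ec = EngineeringChange("ECP-1234", "ICD-2-0001", "Revise umbilical interface tolerance")
ec.disposition("approved", date(1974, 6, 1))
print(ec.status, ec.history)
```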
Space Station Freedom pressurized element interior design process
NASA Technical Reports Server (NTRS)
Hopson, George D.; Aaron, John; Grant, Richard L.
1990-01-01
The process used to develop the on-orbit working and living environment of the Space Station Freedom has some unique constraints and conditions to satisfy. The goal is to provide maximum efficiency and utilization of the available space in on-orbit, zero G conditions, establishing a comfortable, productive, and safe working environment for the crew. The Space Station Freedom on-orbit living and working space can be divided into support for three major functions: (1) operations, maintenance, and management of the station; (2) conduct of experiments, both directly in the laboratories and remotely for experiments outside the pressurized environment; and (3) crew related functions for food preparation, housekeeping, storage, personal hygiene, health maintenance, zero G environment conditioning, individual privacy, and rest. The process used to implement these functions, the major requirements driving the design, unique considerations and constraints that influence the design, and summaries of the analysis performed to establish the current configurations are described. Sketches and pictures showing the layout and internal arrangement of the Nodes, U.S. Laboratory and Habitation modules identify the current design relationships of the common and unique station housekeeping subsystems. The crew facilities, work stations, food preparation and eating areas (galley and wardroom), exercise/health maintenance configurations, and waste management and personal hygiene area configurations are shown. U.S. Laboratory experiment facilities and maintenance work areas planned to support the wide variety and mixtures of life science and materials processing payloads are described.
Process antecedents of challenging, under-cover and readily-adopted innovations.
Adams, Richard; Tranfield, David; Denyer, David
2013-01-01
The purpose of the study is to test the utility of a taxonomy of innovation based on perceived characteristics in the context of healthcare by exploring the extent to which discrete innovation types could be distinguished from each other in terms of process antecedents. A qualitative approach was adopted to explore the process antecedents of nine exemplar cases of "challenging", "under-cover" and "readily-adopted" healthcare innovations. Data were collected by semi-structured interview and from secondary sources, and content analysed according to a theoretically informed framework of innovation process. Cluster analysis was applied to determine whether innovation types could be distinguished on the basis of process characteristics. The findings provide moderate support for the proposition that innovations differentiated on the basis of the way they are perceived by potential users exhibit different process characteristics. Innovations exhibiting characteristics previously believed negatively to impact adoption may be successfully adopted but by a different configuration of processes than by innovations exhibiting a different set of characteristics. The findings must be treated with caution because the sample consists of self-selected cases of successful innovation and is limited by sample size. Nevertheless, the study sheds new light on important process differences in healthcare innovation. The paper offers a heuristic device to aid clinicians and managers to better understand the relatively novel task of promoting and managing innovation in healthcare. The paper advances the argument that there is under-exploited opportunity for cross-disciplinary organisational learning for innovation management in the NHS. If efficiency and quality improvement targets are to be met through a strategy of encouraging innovation, it may be advantageous for clinicians and managers to reflect on what this study found mostly to be absent from the processes of the innovations studied, notably management commitment in the form of norms, resource allocation and top management support. This paper is based on original empirical work. It extends previous adoption related studies by applying a configurational approach to innovation attributes to offer new insights on healthcare innovation and highlight the importance of attention to process.
TWRS authorization basis configuration control summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza, D.P.
This document was developed to define the Authorization Basis management functional requirements for configuration control, to evaluate the management control systems currently in place, and identify any additional controls that may be required until the TWRS [Tank Waste Remediation System] Configuration Management system is fully in place.
DOT National Transportation Integrated Search
1997-01-01
Prepared ca. 1997. The Configuration Management Plan (CMP) provides configuration management instructions and guidance for the Vessel Traffic Service (VTS) system of the Ports and Waterways Safety System (PAWSS) project. The CMP describes in detail t...
Process management using component thermal-hydraulic function classes
Morman, James A.; Wei, Thomas Y. C.; Reifman, Jaques
1999-01-01
A process management expert system that, following the malfunctioning of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component with on-line speeds to maintain operation of the process at full or partial capacity or to provide safe shut down of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function, and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum and energy transfer and are used to determine possible realignment of component configurations in response to thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces the accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced.
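The following sketch illustrates the classification-plus-search idea in miniature: each component in a toy flow network carries a thermal-hydraulic function class, and after a component is isolated, a qualitative forward (downstream) search finds same-class components whose flow path still reaches the target. The network, class labels, and search are invented for illustration and are not the patented expert system.

```python
from collections import deque

# Toy flow network; each component carries a thermal-hydraulic function class.
COMPONENTS = {
    "pump_A":  {"class": "momentum_transfer", "downstream": ["valve_1"]},
    "pump_B":  {"class": "momentum_transfer", "downstream": ["valve_2"]},
    "valve_1": {"class": "mass_transfer",     "downstream": ["heat_exchanger"]},
    "valve_2": {"class": "mass_transfer",     "downstream": ["heat_exchanger"]},
    "heat_exchanger": {"class": "energy_transfer", "downstream": []},
}

def reaches(source, target, isolated):
    """Qualitative forward search downstream from `source`, skipping the
    isolated (malfunctioning) component."""
    queue, seen = deque([source]), set()
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        if node in seen or node == isolated:
            continue
        seen.add(node)
        queue.extend(COMPONENTS[node]["downstream"])
    return False

def realignments(failed, target="heat_exchanger"):
    """Propose healthy components of the same function class whose flow path
    still reaches the target once the failed component is isolated."""
    cls = COMPONENTS[failed]["class"]
    return [name for name, c in COMPONENTS.items()
            if name != failed and c["class"] == cls and reaches(name, target, failed)]

print(realignments("pump_A"))   # -> ['pump_B']
```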
Process management using component thermal-hydraulic function classes
Morman, J.A.; Wei, T.Y.C.; Reifman, J.
1999-07-27
A process management expert system that, following the malfunctioning of a component such as a pump, determines system realignment procedures, such as by-passing the malfunctioning component with on-line speeds to maintain operation of the process at full or partial capacity or to provide safe shut down of the system while isolating the malfunctioning component. The expert system uses thermal-hydraulic function classes at the component level for analyzing unanticipated as well as anticipated component malfunctions to provide recommended sequences of operator actions. Each component is classified according to its thermal-hydraulic function, and the generic and component-specific characteristics for that function. Using the diagnosis of the malfunctioning component and its thermal-hydraulic class, the expert system analysis is carried out using generic thermal-hydraulic first principles. One aspect of the invention employs a qualitative physics-based forward search directed primarily downstream from the malfunctioning component in combination with a subsequent backward search directed primarily upstream from the serviced component. Generic classes of components are defined in the knowledge base according to the three thermal-hydraulic functions of mass, momentum and energy transfer and are used to determine possible realignment of component configurations in response to thermal-hydraulic function imbalance caused by the malfunctioning component. Each realignment to a new configuration produces the accompanying sequence of recommended operator actions. All possible new configurations are examined and a prioritized list of acceptable solutions is produced. 5 figs.
2013-06-01
quantity, the lead time, the process quality and the number of deliveries (Yang & Pan, 2004). Inventory management systems are classified as either... managed by the Defense Logistics Agency (DLA), Edgewood Chemical Biological Center (ECBC) must be able to complete reviews of all procurement
Automated lattice data generation
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Hackett, Daniel C.; Jay, William I.; Neil, Ethan T.
2018-03-01
The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done "by hand". In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
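For readers unfamiliar with the pattern, the sketch below shows the general shape of such an automated chain, in which each configuration-generation step depends on the previous one and measurement jobs fan out per configuration. It is deliberately generic: the function names are placeholders and this is not the Taxi API.

```python
# A schematic task-graph runner in the spirit of a lattice workflow manager.
# NOT the Taxi API; task names and structure are invented to show how
# configuration-generation and measurement jobs can be chained automatically.
def generate_configuration(seed):
    return f"config_{seed}"                 # stand-in for a gauge-field update job

def measure_observable(config):
    return f"plaquette({config})"           # stand-in for a measurement job

def run_workflow(n_configs):
    results = []
    config = "config_0"                     # thermalized starting configuration
    for step in range(1, n_configs + 1):
        config = generate_configuration(step)       # each update depends on the last
        results.append(measure_observable(config))  # measurements fan out per config
    return results

print(run_workflow(3))
```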
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2015-02-24
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2015-08-25
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton; Stecher, Thomas; Seeley, Charles; Kuenzler, Glenn; Wolfe, Jr., Charles; Utturkar, Yogen; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2013-05-07
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
Lighting system with thermal management system
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Seeley, Charles Erklin; Kuenzler, Glenn Howard; Wolfe, Jr, Charles Franklin; Utturkar, Yogen Vishwas; Sharma, Rajdeep; Prabhakaran, Satish; Icoz, Tunc
2016-10-11
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system is configured to provide an air flow, such as a unidirectional air flow, through the housing structure in order to cool the light source. The driver electronics are configured to provide power to each of the light source and the thermal management system.
To Meet or Not To Meet Physical vs. Virtual Configuration Control Board
NASA Technical Reports Server (NTRS)
Rice, Shelley
2017-01-01
This presentation will define the CCB and discuss its functions and members. We will look into traditional processes of managing change control via the CCB meeting and advanced practices utilizing enhanced product tools and technologies. We'll step through a summary of the feedback from the community of CM professionals at NASA Goddard Space Flight Center on best practices, as well as pros and cons, for facilitating both a physical CCB and managing stakeholder approvals in a virtual environment. Attendees will come away with current industry strategies to determine if processes for managing change control and approvals can be streamlined within their local work environments.
User's Manual for the Naval Interactive Data Analysis System-Climatologies (NIDAS-C), Version 2.0
NASA Technical Reports Server (NTRS)
Abbott, Clifton
1996-01-01
This technical note provides the user's manual for the NIDAS-C system developed for the naval oceanographic office. NIDAS-C operates using numerous oceanographic data categories stored in an installed version of the Naval Environmental Operational Nowcast System (NEONS), a relational database management system (rdbms) which employs the ORACLE proprietary rdbms engine. Data management, configuration, and control functions for the supporting rdbms are performed externally. NIDAS-C stores and retrieves data to/from the rdbms but exercises no direct internal control over the rdbms or its configuration. Data is also ingested into the rdbms, for use by NIDAS-C, by external data acquisition processes. The data categories employed by NIDAS-C are as follows: Bathymetry - ocean depth at
An Evaluation Method of Equipment Reliability Configuration Management
NASA Astrophysics Data System (ADS)
Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan
2018-01-01
At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is achieved not only through design but also through management. This paper evaluates reliability management capability using a reliability configuration capability maturity model (RCM-CMM) evaluation method.
The UNIX/XENIX Advantage: Applications in Libraries.
ERIC Educational Resources Information Center
Gordon, Kelly L.
1988-01-01
Discusses the application of the UNIX/XENIX operating system to support administrative office automation functions--word processing, spreadsheets, database management systems, electronic mail, and communications--at the Central Michigan University Libraries. Advantages and disadvantages of the XENIX operating system and system configuration are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgard, K.G.
This Configuration Management Implementation Plan was developed to assist in the management of systems, structures, and components, to facilitate the effective control and statusing of changes to systems, structures, and components; and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Project W-464 will take in implementing a configuration management control, to determine the rigor of control, and to identify the mechanisms for imposing that control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This standard presents program criteria and implementation guidance for an operational configuration management program for DOE nuclear and non-nuclear facilities. This Part 2 includes chapters on implementation guidance for operational configuration management, implementation guidance for design reconstitution, and implementation guidance for material condition and aging management. Appendices are included on design control, examples of design information, conduct of walkdowns, and content of design information summaries.
Sens, Brigitte
2010-01-01
The concept of general process orientation as an instrument of organisation development is the core principle of quality management philosophy, i.e. the learning organisation. Accordingly, prestigious quality awards and certification systems focus on process configuration and continual improvement. In German health care organisations, particularly in hospitals, this general process orientation has not been widely implemented yet - despite enormous change dynamics and the requirements of both quality and economic efficiency of health care processes. But based on a consistent process architecture that considers key processes as well as management and support processes, the strategy of excellent health service provision including quality, safety and transparency can be realised in daily operative work. The core elements of quality (e.g., evidence-based medicine), patient safety and risk management, environmental management, health and safety at work can be embedded in daily health care processes as an integrated management system (the "all in one system" principle). Sustainable advantages and benefits for patients, staff, and the organisation will result: stable, high-quality, efficient, and indicator-based health care processes. Hospitals with their broad variety of complex health care procedures should now exploit the full potential of total process orientation. Copyright © 2010. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Lim, Theodore C.; Welty, Claire
2017-09-01
Green infrastructure (GI) is an approach to stormwater management that promotes natural processes of infiltration and evapotranspiration, reducing surface runoff to conventional stormwater drainage infrastructure. As more urban areas incorporate GI into their stormwater management plans, greater understanding is needed on the effects of spatial configuration of GI networks on hydrological performance, especially in the context of potential subsurface and lateral interactions between distributed facilities. In this research, we apply a three-dimensional, coupled surface-subsurface, land-atmosphere model, ParFlow.CLM, to a residential urban sewershed in Washington DC that was retrofitted with a network of GI installations between 2009 and 2015. The model was used to test nine additional GI and imperviousness spatial network configurations for the site and was compared with monitored pipe-flow data. Results from the simulations show that GI located in higher flow-accumulation areas of the site intercepted more surface runoff, even during wetter and multiday events. However, a comparison of the differences between scenarios and levels of variation and noise in monitored data suggests that the differences would only be detectable between the most and least optimal GI/imperviousness configurations.
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
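The idea of a program manager that learns from run history can be sketched very simply: record the elapsed time observed for each (program, configuration) pair and prefer the configuration with the best average, falling back to exploration when nothing has been tried. The structure and numbers below are invented and are not the MOPPS implementation.

```python
from collections import defaultdict

class ProgramManager:
    """Toy knowledge base of execution performance per configuration."""

    def __init__(self):
        self.knowledge = defaultdict(list)   # (program, n_nodes) -> observed run times

    def record_run(self, program, n_nodes, elapsed_seconds):
        self.knowledge[(program, n_nodes)].append(elapsed_seconds)

    def best_configuration(self, program, candidates):
        """Pick the node count with the lowest average observed run time;
        fall back to the largest untried configuration."""
        timed = [(sum(t) / len(t), n) for n in candidates
                 if (t := self.knowledge.get((program, n)))]
        if timed:
            return min(timed)[1]
        return max(candidates)

mgr = ProgramManager()
mgr.record_run("fluid_solver", 4, 120.0)
mgr.record_run("fluid_solver", 8, 75.0)
print(mgr.best_configuration("fluid_solver", candidates=[4, 8, 16]))   # -> 8
```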
Science and Technology in Development Environments - Industry and Department of Defense Case Studies
2003-11-01
an acronym for Product and Cycle Time Excellence, is designed to manage product development, business development, and business alliance processes...experiments. 3. Test Practicality—Pilot development with limited production. 4. Prove Profitability—Pilot production. 5. Manage Life Cycle—Manufacturing and...compressor, particularly in a turbofan configuration, was developed primarily under a CIP for these two engines. The TF30 and F100 experiences provide
Pattern Driven Selection and Configuration of S&D Mechanisms at Runtime
NASA Astrophysics Data System (ADS)
Crespo, Beatriz Gallego-Nicasio; Piñuela, Ana; Soria-Rodriguez, Pedro; Serrano, Daniel; Maña, Antonio
In order to satisfy the requests of SERENITY-aware applications, the SERENITY Runtime Framework’s main task is to perform pattern selection, providing the application with the most suitable S&D Solution that satisfies the request. The result of this selection process depends on two main factors: the content of the S&D Library and the information stored and managed by the Context Manager. Three processes are involved: searching the S&D Library to obtain the initial set of candidates; filtering and ordering the collection based on the SRF configuration; and looping over the remaining S&D Artifacts to check S&D Pattern preconditions, in order to select the most suitable S&D Pattern first and then the appropriate S&D Implementation for the environment conditions. Once the S&D Implementation is selected, the SERENITY Runtime Framework instantiates an Executable Component (EC) and provides the application with the necessary information and mechanism to make use of the EC.
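The three-step selection flow described above (search, filter/order by the SRF configuration, then a precondition-checking loop) can be illustrated with a minimal sketch. This is not the SERENITY Runtime Framework's actual API; the class, field, and pattern names below are hypothetical and the precondition check is simplified to a callable predicate on a context dictionary.

```python
# Hypothetical sketch of the S&D pattern selection flow: search the library, order
# candidates by an SRF-style preference, then loop until a pattern whose preconditions
# hold in the current context is found and pick one of its implementations.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class SDPattern:
    name: str
    provides: str                        # the S&D property the pattern offers
    priority: int                        # ordering hint taken from the SRF configuration
    precondition: Callable[[Dict], bool]
    implementations: List[str] = field(default_factory=list)

def select_pattern(library: List[SDPattern], request: str,
                   context: Dict) -> Optional[Tuple[SDPattern, str]]:
    # 1. search: keep only patterns that satisfy the requested property
    candidates = [p for p in library if p.provides == request]
    # 2. filter/order: sort according to the (assumed) SRF priority
    candidates.sort(key=lambda p: p.priority, reverse=True)
    # 3. precondition loop: first pattern whose preconditions hold wins,
    #    then pick an implementation suited to the environment
    for pattern in candidates:
        if pattern.precondition(context) and pattern.implementations:
            return pattern, pattern.implementations[0]
    return None

library = [SDPattern("tls-channel", "confidentiality", 10,
                     lambda ctx: ctx.get("network") == "untrusted", ["openssl-ec"])]
print(select_pattern(library, "confidentiality", {"network": "untrusted"}))
```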
An Analysis of Naval Aviation Configuration Status Accounting.
1983-12-01
Audit Service Report T30211, Multilocation Audit of Configuration Management of Aeronautical Equipment, 17 August 1982. 18. United States General... Audit and Review... III. CONFIGURATION MANAGEMENT STATUS ACCOUNTING WITHIN THE DEPARTMENT OF DEFENSE... A. DOD...included published articles written by both military and private industry managers, technical papers delivered at symposia and conferences, Naval Audit
SPS Energy Conversion Power Management Workshop
NASA Technical Reports Server (NTRS)
1980-01-01
Energy technology concerning photovoltaic conversion, solar thermal conversion systems, and electrical power distribution processing is discussed. The manufacturing processes involving solar cells and solar array production are summarized. Resource issues concerning gallium arsenides and silicon alternatives are reported. Collector structures for solar construction are described and estimates in their service life, failure rates, and capabilities are presented. Theories of advanced thermal power cycles are summarized. Power distribution system configurations and processing components are presented.
NASA Astrophysics Data System (ADS)
Lisio, Giovanni; Candia, Sante; Campolo, Giovanni; Pascucci, Dario
2011-08-01
Thales Alenia Space Italy has carried out the definition of a PUS ECSS-E_70-41A (see [3]) Centralised Services Layer that is configurable on a per-mission basis, characterised by: a mission-independent set of 'classes' implementing the services logic, and a mission-dependent set of configuration data and selection flags. The software components belonging to this layer implement the PUS standard services ECSS-E_70-41A and a set of mission-specific services. The design of this layer has been performed by separating the services mechanisms (mission-independent execution logic) from the services configuration information (mission-dependent data). Once instantiated for a specific mission, the PUS Centralised Services Layer offers a large set of capabilities to the CSCI's Applications Layer. This paper describes the building-block PUS architectural solution developed by Thales Alenia Space Italy, emphasizing the mechanisms that allow easy configuration of the Scalable PUS library to fulfil the requirements of different missions. It also presents the Thales Alenia Space solution to automatically generate the mission-specific "PUS Services" flight software from mission-specific requirements. Building the PUS services mechanisms so that they are configurable on a mission basis is part of the PRIMA (Multipurpose Spacecraft Bus) 'missionisation' process improvement. PRIMA Platform Avionics Software (ASW) is continuously evolving to improve modularity and standardization of interfaces and of SW components (see references in [1]).
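The key idea in the abstract above, separating mission-independent service mechanisms from mission-dependent configuration data, can be sketched with a short example. The service class, fields, and limits below are invented for illustration; they are not the Thales Alenia Space flight software design.

```python
# Hypothetical sketch of the separation described above: the service "mechanism"
# (mission-independent execution logic) is a class, while mission-dependent data
# (enabled sub-services, rate limits, flags) is plain configuration supplied at
# instantiation time for each mission.
class HousekeepingService:
    """Simplified stand-in for a PUS-style telemetry reporting service."""
    def __init__(self, mission_config):
        self.enabled_reports = set(mission_config["enabled_reports"])
        self.max_rate_hz = mission_config["max_rate_hz"]

    def request_report(self, report_id, rate_hz):
        # Mission-independent logic: validate the request against mission limits.
        if report_id not in self.enabled_reports:
            raise ValueError(f"report {report_id} not enabled for this mission")
        return min(rate_hz, self.max_rate_hz)

# Two missions reuse the same class with different configuration data.
mission_a = HousekeepingService({"enabled_reports": {1, 2}, "max_rate_hz": 8})
mission_b = HousekeepingService({"enabled_reports": {1}, "max_rate_hz": 1})
print(mission_a.request_report(2, 16), mission_b.request_report(1, 16))
```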
NASA Technical Reports Server (NTRS)
1981-01-01
The Kennedy Space Center (KSC) Management System for the Inertial Upper Stage (IUS) - spacecraft processing from KSC arrival through launch is described. The roles and responsibilities of the agencies and test team organizations involved in IUS-S/C processing at KSC for non-Department of Defense missions are described. Working relationships are defined with respect to documentation preparation, coordination and approval, schedule development and maintenance, test conduct and control, configuration management, quality control and safety. The policy regarding the use of spacecraft contractor test procedures, IUS contractor detailed operating procedures and KSC operations and maintenance instructions is defined. Review and approval requirements for each documentation system are described.
Identification and Description of Alternative Means of Accomplishing IMS Operational Features.
ERIC Educational Resources Information Center
Dave, Ashok
The operational features of feasible alternative configurations for a computer-based instructional management system are identified. Potential alternative means and components of accomplishing these features are briefly described. Included are aspects of data collection, data input, data transmission, data reception, scanning and processing,…
STS-121 Space Shuttle Processing Update
2006-04-27
NASA Administrator Michael Griffin, left, and Associate Administrator for Space Operations William Gerstenmaier, right, look on as Space Shuttle Program Manager Wayne Hale from NASA's Marshall Space Flight Center, holds a test configuration of an ice frost ramp during a media briefing about the space shuttle program and processing for the STS-121 mission, Friday, April 28, 2006, at NASA Headquarters in Washington. Photo Credit (NASA/Bill Ingalls)
2005-01-01
developed a partnership with the Defense Acquisition University to integrate DISA’s systems engineering processes, software, and network...in place, with processes being implemented: deployment management; systems engineering; software engineering; configuration management; test and...CSS systems engineering is a transition partner with Carnegie Mellon University’s Software Engineering Institute and its work on the capability
Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management
NASA Astrophysics Data System (ADS)
Hendrix, Val; Benjamin, Doug; Yao, Yushu
2012-12-01
Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate the deployment scripts that puppet uses to configure each machine to act as its designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment in a cloud environment. Our future cloud efforts will further build on this work.
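The generation step in the CDRP described above (cluster roles plus manager-supplied hostnames turned into Puppet deployment input) can be illustrated with a small sketch. The roles, module names, and hostnames are illustrative assumptions, not the authors' actual modules or site layout.

```python
# Hypothetical sketch of turning a cluster definition (roles mapped to services) and
# manager-supplied parameters (hostnames) into a minimal Puppet site manifest that
# assigns each host the classes of its role.
cluster_definition = {
    "head":   ["nfs_server", "batch_scheduler"],
    "worker": ["batch_client", "analysis_software"],
}
cluster_parameters = {
    "head":   ["dac-head01.example.org"],
    "worker": ["dac-wn01.example.org", "dac-wn02.example.org"],
}

def generate_site_manifest(definition, parameters):
    """Emit node definitions including the Puppet classes for each host's role."""
    blocks = []
    for role, hosts in parameters.items():
        classes = "\n".join(f"  include {module}" for module in definition[role])
        for host in hosts:
            blocks.append(f"node '{host}' {{\n{classes}\n}}")
    return "\n\n".join(blocks)

print(generate_site_manifest(cluster_definition, cluster_parameters))
```

In the event of a node failure, re-running the same generation step for the replacement hostname reproduces the original role assignment, which is the recovery behaviour the abstract describes.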
RECERTIFICATION OF THE MODEL 9977 RADIOACTIVE MATERIAL PACKAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramczyk, G.; Bellamy, S.; Loftin, B.
2013-06-05
The Model 9977 Packaging was initially issued a Certificate of Compliance (CoC) by the Department of Energy’s Office of Environmental Management (DOE-EM) for the transportation of radioactive material (RAM) in the Fall of 2007. This first CoC was for a single radioactive material and two packing configurations. In the five years since that time, seven Addendums have been written to the Safety Analysis Report for Packaging (SARP) and five Letter Amendments have been written that have authorized either new RAM contents or packing configurations, or both. This paper will discuss the process of updating the 9977 SARP to include all the contents and configurations, including the addition of a new content, and its submittal for recertification.
NASA Technical Reports Server (NTRS)
Gavert, Raymond B.
1990-01-01
Some experiences of NASA configuration management in providing concurrent engineering support to the Space Station Freedom program for the achievement of life cycle benefits and total quality are discussed. Three change decision experiences involving tracing requirements and automated information systems of the electrical power system are described. The potential benefits of concurrent engineering and total quality management include improved operational effectiveness, reduced logistics and support requirements, prevention of schedule slippages, and life cycle cost savings. It is shown how configuration management can influence the benefits attained through disciplined approaches and innovations that compel consideration of all the technical elements of engineering and quality factors that apply to the program development, transition to operations and in operations. Configuration management experiences involving the Space Station program's tiered management structure, the work package contractors, international partners, and the participating NASA centers are discussed.
Model-Driven Configuration of SELinux Policies
NASA Astrophysics Data System (ADS)
Agreiter, Berthold; Breu, Ruth
The need for access control in computer systems is inherent. However, the complexity of configuring such systems is constantly increasing, which negatively affects the overall security of a system. We think it is important to define security requirements on a non-technical level while taking the application domain into account, in order to have a clear and separated view on security configuration (i.e. one unblurred by technical details). On the other hand, security functionality has to be tightly integrated with the system and its development process in order to provide comprehensive means of enforcement. In this paper, we propose a systematic approach based on model-driven security configuration to leverage existing operating system security mechanisms (SELinux) for realising access control. We use UML models and develop a UML profile to satisfy these needs. Our goal is to exploit a comprehensive protection mechanism while rendering its security policy manageable by a domain specialist.
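The general model-to-policy idea in the abstract above can be sketched in a few lines: a small, non-technical domain model is transformed into SELinux-style allow rules. This is not the authors' UML profile or transformation; the model, type names, and permission mapping below are invented for illustration.

```python
# Hypothetical sketch of a model-driven transformation: a domain model expressed as
# (subject role, action, resource) triples is mapped onto SELinux-style allow rules.
domain_model = [
    ("billing_app", "read",  "patient_records"),
    ("billing_app", "write", "invoices"),
]

# Assumed mapping from abstract actions to file permission sets.
ACTION_TO_PERMS = {"read": "{ open read getattr }", "write": "{ open write append }"}

def to_selinux_rules(model):
    rules = []
    for subject, action, resource in model:
        # Map model elements onto (assumed) SELinux type names derived by suffixing "_t".
        rules.append(f"allow {subject}_t {resource}_t:file {ACTION_TO_PERMS[action]};")
    return rules

for rule in to_selinux_rules(domain_model):
    print(rule)
```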
System and method for motor speed estimation of an electric motor
Lu, Bin [Kenosha, WI; Yan, Ting [Brookfield, WI; Luebke, Charles John [Sussex, WI; Sharma, Santosh Kumar [Viman Nagar, IN
2012-06-19
A system and method for a motor management system includes a computer readable storage medium and a processing unit. The processing unit is configured to determine a voltage value of a voltage input to an alternating current (AC) motor, determine a frequency value of at least one of a voltage input and a current input to the AC motor, determine a load value from the AC motor, and access a set of motor nameplate data, where the set of motor nameplate data includes a rated power, a rated speed, a rated frequency, and a rated voltage of the AC motor. The processing unit is also configured to estimate a motor speed based on the voltage value, the frequency value, the load value, and the set of nameplate data, and to store the motor speed on the computer readable storage medium.
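As a rough illustration of estimating speed from the quantities the abstract lists (voltage, frequency, load, and nameplate data), the sketch below uses a common textbook slip approximation. It is explicitly not the patented algorithm: the assumption that slip scales linearly with load and inversely with the square of voltage, and the pole-count inference, are illustrative simplifications.

```python
# A minimal, hedged sketch of slip-based induction-motor speed estimation.
# Assumption: slip grows linearly with load and with the inverse square of voltage;
# this is a plausible approximation, not the method claimed in the patent.
def estimate_speed_rpm(voltage, frequency, load, nameplate):
    # Infer the pole count from the nameplate speed and frequency, then scale the
    # synchronous speed to the measured supply frequency.
    poles = 2 * round(60.0 * nameplate["rated_frequency"] / nameplate["rated_speed"])
    sync_rated = 120.0 * nameplate["rated_frequency"] / poles
    sync_now = sync_rated * frequency / nameplate["rated_frequency"]
    rated_slip = sync_rated - nameplate["rated_speed"]
    slip = rated_slip * (load / nameplate["rated_power"]) \
                      * (nameplate["rated_voltage"] / voltage) ** 2
    return sync_now - slip

# Example nameplate for a hypothetical 7.5 kW, 4-pole, 460 V, 60 Hz motor.
nameplate = {"rated_power": 7500.0, "rated_speed": 1760.0,
             "rated_frequency": 60.0, "rated_voltage": 460.0}
print(round(estimate_speed_rpm(455.0, 60.0, 5600.0, nameplate), 1))
```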
Design distributed simulation platform for vehicle management system
NASA Astrophysics Data System (ADS)
Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua
2006-11-01
Next-generation military aircraft require high performance from the airborne management system. General modules, data integration, a high-speed data bus and related techniques are needed to share and manage information from the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system and others. The previously separate or mixed architecture is replaced by an integrated architecture, meaning the whole airborne system is managed as one system: the physical devices are distributed, but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared. This also lays a foundation for shared power. A distributed vehicle management system built on a 1553B bus and distributed processors can provide a validation platform for research on integrated management of airborne systems. This paper establishes such a Vehicle Management System (VMS) simulation platform and discusses the software and hardware configuration and the communication and fault-tolerance methods.
NASA Technical Reports Server (NTRS)
1973-01-01
Contractor and NASA technical management for the development and manufacture of the Skylab modules is reviewed with emphasis on the following management controls: configuration and interface management; vendor control; and quality control of workmanship. A review of the modified two-stage Saturn V launch vehicle which focused on modifications to accommodate the Skylab payload; resolution of prior flight anomalies; and changes in personnel and management systems is presented along with an evaluation of the possible age-life and storage problems for the Saturn 1-B launch vehicle. The NASA program management's visibility and control of contractor operations, systems engineering and integration, the review process for the evaluation of design and flight hardware, and the planning process for mission operations are investigated. It is concluded that the technical management system for development and fabrication of the modules, spacecraft, and launch vehicles, the process of design and hardware acceptance reviews, and the risk assessment activities are satisfactory. It is indicated that checkout activity, integrated testing, and preparations for and execution of mission operation require management attention.
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Systems engineering implementation in the preliminary design phase of the Giant Magellan Telescope
NASA Astrophysics Data System (ADS)
Maiten, J.; Johns, M.; Trancho, G.; Sawyer, D.; Mady, P.
2012-09-01
Like many telescope projects today, the 24.5-meter Giant Magellan Telescope (GMT) is truly a complex system. The primary and secondary mirrors of the GMT are segmented and actuated to support two operating modes: natural seeing and adaptive optics. GMT is a general-purpose telescope supporting multiple science instruments operated in those modes. GMT is a large, diverse collaboration and development includes geographically distributed teams. The need to implement good systems engineering processes for managing the development of systems like GMT becomes imperative. The management of the requirements flow down from the science requirements to the component level requirements is an inherently difficult task in itself. The interfaces must also be negotiated so that the interactions between subsystems and assemblies are well defined and controlled. This paper will provide an overview of the systems engineering processes and tools implemented for the GMT project during the preliminary design phase. This will include requirements management, documentation and configuration control, interface development and technical risk management. Because of the complexity of the GMT system and the distributed team, using web-accessible tools for collaboration is vital. To accomplish this GMTO has selected three tools: Cognition Cockpit, Xerox Docushare, and Solidworks Enterprise Product Data Management (EPDM). Key to this is the use of Cockpit for managing and documenting the product tree, architecture, error budget, requirements, interfaces, and risks. Additionally, drawing management is accomplished using an EPDM vault. Docushare, a documentation and configuration management tool is used to manage workflow of documents and drawings for the GMT project. These tools electronically facilitate collaboration in real time, enabling the GMT team to track, trace and report on key project metrics and design parameters.
Criteria Underlying the Formation of Alternative IMS Configurations.
ERIC Educational Resources Information Center
Dave, Ashok
To assist the formation of IMS (Instructional Management System) configurations, three categories of characteristics are developed and explained. Categories 1 and 2 emphasize automation, and the necessity of forming workable configurations to carry out instructional management for Southwest Regional Laboratory developed instructional and/or…
NASA Technical Reports Server (NTRS)
Ryan, Harry; Junell, Justin; Albasini, Colby; O'Rourke, William; Le, Thang; Strain, Ted; Stiglets, Tim
2011-01-01
A package for the automation of the Engineering Analysis (EA) process at the Stennis Space Center has been customized. It provides the ability to assign and track analysis tasks electronically and to route a task electronically for approval. It now provides a mechanism to keep these analyses under configuration management. It also allows the analysis to be stored and linked to the engineering data needed to perform the analysis (drawings, etc.). PTC's (Parametric Technology Corporation) Windchill product was customized to allow the EA to be created, routed, and maintained under configuration management. Using Infoengine Tasks, JSP (JavaServer Pages), and JavaScript, a user interface was created within the Windchill product that allows users to create EAs. Not only does this interface allow users to create and track EAs, but it plugs directly into the out-of-the-box ability to associate these analyses with other relevant engineering data such as drawings. Also, using the Windchill workflow tool, the Design and Data Management System (DDMS) team created an electronic routing process based on the manual/informal approval process. The team also added the ability for users to notify individuals about the EA and to track those notifications, including tracking/logging e-mail notifications. Prior to this work, there was no electronic way of creating and tracking these analyses.
Integrated Autonomous Network Management (IANM) Multi-Topology Route Manager and Analyzer
2008-02-01
[Figure 6-2. Internal software organization: zebra, tmg, mtrcli, xinetd (tftp), mysql, configuration files (mtrrm.conf, mtrrmAggregator.properties), /tftpboot, NetFlow PDUs, configuration upload/download, snmp, telnet, OSPFv2, user interface.] Figure 6-2 illustrates the main
2013-09-01
processes used in space system acquisitions, simply implementing a data exchange specification would not fundamentally improve how information is...instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information...and manage the configuration of all critical program models, processes, and tools used throughout the DoD. Second, mandate a data exchange
Team Software Process (TSP) Body of Knowledge (BOK)
2010-07-01
styles that correspond to stereotypical extremes of group control and coordination, as shown in Figure 5: closed, random, open, and synchronous group ...and confirming the resolutions • managing the design change process and coordinating changes with the configuration control board • reporting...members. 4. Coaching – Obtain a lead coach and the coaches for each team. 5. Conceptual design – Form a working group of
Router Agent Technology for Policy-Based Network Management
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Sudhir, Gurusham; Chang, Hsin-Ping; James, Mark; Liu, Yih-Chiao J.; Chiang, Winston
2011-01-01
This innovation can be run as a standalone network application on any computer in a networked environment. This design can be configured to control one or more routers (one instance per router), and can also be configured to listen to a policy server over the network to receive new policies based on the policy-based network management technology. The Router Agent Technology transforms the received policies into suitable Access Control List syntax for the routers it is configured to control. It commits the newly generated access control lists to the routers and provides feedback regarding any errors encountered. The innovation also automatically generates a time-stamped log file recording all updates to the router it is configured to control. This technology, once installed on a local network computer and started, is autonomous because it has the capability to keep listening for new policies from the policy server, transforming those policies to router-compliant access lists, and committing those access lists to a specified interface on the specified router on the network, with error feedback regarding the commit process. The stand-alone application is named RouterAgent and is currently realized as a fully functional (version 1) implementation for the Windows operating system and for CISCO routers.
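The policy-to-ACL translation step described above can be sketched briefly. The policy schema and the extended-ACL template below are hypothetical illustrations, not the RouterAgent's actual policy format or output.

```python
# A hedged sketch of translating simple policy records into Cisco-style extended
# access-list lines. The policy fields and ACL number are assumptions for illustration.
policies = [
    {"action": "deny",   "protocol": "tcp", "source": "10.1.0.0 0.0.255.255",
     "destination": "any", "port": 23},      # e.g., block telnet from an internal range
    {"action": "permit", "protocol": "ip",  "source": "any", "destination": "any"},
]

def to_acl_lines(policies, acl_number=101):
    lines = []
    for p in policies:
        line = (f"access-list {acl_number} {p['action']} {p['protocol']} "
                f"{p['source']} {p['destination']}")
        if "port" in p:
            line += f" eq {p['port']}"
        lines.append(line)
    return lines

for line in to_acl_lines(policies):
    print(line)
```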
Croft, M G; Fraser, G C; Gaul, W N
2011-07-01
A Laboratory Information Management System (LIMS) was used to manage the laboratory data and support planning and field activities as part of the response to the equine influenza outbreak in Australia in 2007. The database structure of the LIMS and the system configurations that were made to best handle the laboratory implications of the disease response are discussed. The operational aspects of the LIMS and the related procedures used at the laboratory to process the increased sample throughput are reviewed, as is the interaction of the LIMS with other corporate systems used in the management of the response. Outcomes from this tailored configuration and operation of the LIMS resulted in effective provision and control of the laboratory and laboratory information aspects of the response. The extent and immediate availability of the information provided from the LIMS was critical to some of the activities of key operatives involved in controlling the response. © 2011 The Authors. Australian Veterinary Journal © 2011 Australian Veterinary Association.
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
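The abstract above mentions cost/benefit formulae for choosing between local serial execution and cloud execution. The sketch below shows the general shape such a comparison might take; the overhead term, node prices, and job counts are invented assumptions, not the authors' published model.

```python
# A hedged sketch of a local-versus-cloud comparison: serial wall time locally versus
# parallel wall time plus a fixed provisioning overhead on a set of cloud nodes,
# together with an (assumed) hourly node price.
import math

def local_wall_time_h(n_jobs, hours_per_job):
    return n_jobs * hours_per_job                        # serial execution

def cloud_wall_time_h(n_jobs, hours_per_job, n_nodes, overhead_h=0.5):
    return math.ceil(n_jobs / n_nodes) * hours_per_job + overhead_h

def cloud_cost_usd(wall_time_h, n_nodes, price_per_node_h=0.10):
    return wall_time_h * n_nodes * price_per_node_h

jobs, t_job, nodes = 200, 1.5, 25
t_cloud = cloud_wall_time_h(jobs, t_job, nodes)
print(f"local: {local_wall_time_h(jobs, t_job):.1f} h, "
      f"cloud: {t_cloud:.1f} h at ${cloud_cost_usd(t_cloud, nodes):.2f}")
```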
The Large Synoptic Survey Telescope project management control system
NASA Astrophysics Data System (ADS)
Kantor, Jeffrey P.
2012-09-01
The Large Synoptic Survey Telescope (LSST) program is jointly funded by the NSF, the DOE, and private institutions and donors. From an NSF funding standpoint, the LSST is a Major Research Equipment and Facilities (MREFC) project. The NSF funding process requires proposals and D&D reviews to include activity-based budgets and schedules; documented basis of estimates; risk-based contingency analysis; cost escalation and categorization. "Out-of-the box," the commercial tool Primavera P6 contains approximately 90% of the planning and estimating capability needed to satisfy R&D phase requirements, and it is customizable/configurable for remainder with relatively little effort. We describe the customization/configuration and use of Primavera for the LSST Project Management Control System (PMCS), assess our experience to date, and describe future directions. Examples in this paper are drawn from the LSST Data Management System (DMS), which is one of three main subsystems of the LSST and is funded by the NSF. By astronomy standards the LSST DMS is a large data management project, processing and archiving over 70 petabyes of image data, producing over 20 petabytes of catalogs annually, and generating 2 million transient alerts per night. Over the 6-year construction and commissioning phase, the DM project is estimated to require 600,000 hours of engineering effort. In total, the DMS cost is approximately 60% hardware/system software and 40% labor.
Employing a Modified Diffuser Momentum Model to Simulate Ventilation of the Orion CEV
NASA Technical Reports Server (NTRS)
Straus, John; Lewis, John F.
2011-01-01
The Ansys CFX CFD modeling tool was used to support the design efforts of the ventilation system for the Orion CEV. CFD modeling was used to establish the flow field within the cabin for several supply configurations. A mesh and turbulence model sensitivity study was performed before the design studies. Results were post-processed for comparison with performance requirements. Most configurations employed straight vaned diffusers to direct and throw the flow. To manage the size of the models, the diffuser vanes were not resolved. Instead, a momentum model was employed to account for the effect of the diffusers. The momentum model was tested against a separate, vane-resolved side study. Results are presented for a single diffuser configuration for a low supply flow case.
Policy and Workforce Reform in England
ERIC Educational Resources Information Center
Gunter, Helen M.
2008-01-01
Current workforce reform, known as Remodelling the School Workforce, is part of an enduring policy process where there have been tensions between public and private sector structures and cultures. I show that the New Right and New Labour governments who have built and configured site based performance management over the past quarter of a century…
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom
2014-01-01
This analysis is a survey of control center architectures of the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities across the phases of operations: (a) Launch vehicles in the international community vary greatly in configuration and process; (b) Each launch site has a unique processing flow based on the specific configurations; (c) Launch and flight operations are managed through a set of control centers associated with each launch site; however, flight operations may be managed from a different control center than the launch center; and (d) The engineering support centers are primarily located at the design center with a small engineering support team at the launch site.
ERIC Educational Resources Information Center
Hofman, W. H. Adriaan; Hofman, Roelande H.
2011-01-01
Purpose: In this study the authors focus on different (configurations of) leadership or management styles in schools for general and vocational education. Findings: Using multilevel (students and schools) analyses, strong differences in effective management styles between schools with different student populations were observed. Conclusions: The…
Forest fire advanced system technology (FFAST) conceptual design study
NASA Technical Reports Server (NTRS)
Nichols, J. David; Warren, John R.
1987-01-01
The National Aeronautics and Space Administration's Jet Propulsion Laboratory (JPL) and the U.S. Department of Agriculture (USDA) Forest Service completed a conceptual design study that defined an integrated forest fire detection and mapping system that will be based upon technology available in the 1990s. Potential system configuration options in emerging and advanced technologies related to the conceptual design were identified and recommended for inclusion as preferred system components. System component technologies identified for an end-to-end system include airborne mounted, thermal infrared (IR) linear array detectors, automatic onboard georeferencing and signal processing, geosynchronous satellite communications links, and advanced data integration and display. Potential system configuration options were developed and examined for possible inclusion in the preferred system configuration. The preferred system configuration will provide increased performance and be cost effective over the system currently in use. Forest fire management user requirements and the system component emerging technologies were the basis for the system configuration design. The conceptual design study defined the preferred system configuration that warrants continued refinement and development, examined economic aspects of the current and preferred system, and provided preliminary cost estimates for follow-on system prototype development.
Network configuration management : paving the way to network agility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maestas, Joseph H.
2007-08-01
Sandia networks consist of nearly nine hundred routers and switches and nearly one million lines of command code, and each line ideally contributes to the capabilities of the network to convey information from one location to another. Sandia's Cyber Infrastructure Development and Deployment organizations recognize that it is therefore essential to standardize network configurations and enforce conformance to industry best business practices and documented internal configuration standards to provide a network that is agile, adaptable, and highly available. This is especially important in times of constrained budgets as members of the workforce are called upon to improve efficiency, effectiveness, and customer focus. Best business practices recommend using the standardized configurations in the enforcement process so that when root cause analysis results in recommended configuration changes, subsequent configuration auditing will improve compliance to the standard. Ultimately, this minimizes mean time to repair, maintains the network security posture, improves network availability, and enables efficient transition to new technologies. Network standardization brings improved network agility, which in turn enables enterprise agility, because the network touches all facets of corporate business. Improved network agility improves the business enterprise as a whole.
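The configuration auditing described above, checking running configurations against a documented internal standard, can be illustrated with a minimal sketch. The standard entries and device configuration below are invented examples, not Sandia's actual standards or tooling.

```python
# A hedged illustration of auditing device configurations against a documented standard:
# each configuration is checked for required lines and for forbidden lines.
standard = {
    "required":  ["service password-encryption", "logging host 10.0.0.5"],
    "forbidden": ["ip http server"],
}

def audit(device, config_lines, standard):
    missing = [l for l in standard["required"] if l not in config_lines]
    present = [l for l in standard["forbidden"] if l in config_lines]
    return {"device": device, "missing": missing,
            "forbidden_present": present,
            "compliant": not missing and not present}

running_config = ["service password-encryption", "ip http server"]
print(audit("edge-rtr-01", running_config, standard))
```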
Managing a Real-Time Embedded Linux Platform with Buildroot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diamond, J.; Martin, K.
2015-01-01
Developers of real-time embedded software often need to build the operating system, kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that varies from 3 to 20 megabytes – ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargo, G.F. Jr.
1994-10-11
The DOE Standard defines the configuration management program by the five basic program elements of "program management," "design requirements," "document control," "change control," and "assessments," and the two adjunct recovery programs of "design reconstitution" and "material condition and aging management." The C-M model of five elements and two adjunct programs strengthens the necessary technical and administrative control to establish and maintain a consistent technical relationship among the requirements, physical configuration, and documentation. Although the DOE Standard was originally developed for the operational phase of nuclear facilities, this plan has the flexibility to be adapted and applied to all life-cycle phases of both nuclear and non-nuclear facilities. The configuration management criteria presented in this plan endorse the DOE Standard and have been tailored specifically to address the technical relationship of requirements, physical configuration, and documentation during the full life cycle of the 101-SY Hydrogen Mitigation Test Project Mini-Data Acquisition and Control System of the Tank Waste Remediation System.
Process-driven selection of information systems for healthcare
NASA Astrophysics Data System (ADS)
Mills, Stephen F.; Yeh, Raymond T.; Giroir, Brett P.; Tanik, Murat M.
1995-05-01
Integration of networking and data management technologies such as PACS, RIS and HIS into a healthcare enterprise in a clinically acceptable manner is a difficult problem. Data within such a facility are generally managed via a combination of manual hardcopy systems and proprietary, special-purpose data processing systems. Process modeling techniques have been successfully applied to engineering and manufacturing enterprises, but have not generally been applied to service-based enterprises such as healthcare facilities. The use of process modeling techniques can provide guidance for the placement, configuration and usage of PACS and other informatics technologies within the healthcare enterprise, and thus improve the quality of healthcare. Initial process modeling activities conducted within the Pediatric ICU at Children's Medical Center in Dallas, Texas are described. The ongoing development of a full enterprise- level model for the Pediatric ICU is also described.
Configuring a Context-Aware Middleware for Wireless Sensor Networks
Gámez, Nadia; Cubo, Javier; Fuentes, Lidia; Pimentel, Ernesto
2012-01-01
In the Future Internet, applications based on Wireless Sensor Networks will have to support reconfiguration with minimum human intervention, depending on dynamic context changes in their environment. These situations create a need to build these applications as adaptive software and to include techniques that allow context acquisition and decisions about adaptation. However, contexts are usually made up of complex information acquired from heterogeneous devices and user characteristics, making them difficult to manage. So, instead of building context-aware applications from scratch, we propose to use FamiWare, a family of middleware for Ambient Intelligence specifically designed to be aware of contexts in sensor and smartphone devices. It provides both several monitoring services to acquire contexts from devices and users, and a context-awareness service to analyze and detect context changes. However, the current version of FamiWare does not allow the automatic incorporation of new contexts into the FamiWare family. To overcome this shortcoming, in this work we first present how to model the context using a metamodel to define the contexts that must be taken into account in an instantiation of FamiWare for a certain Ambient Intelligence system. Then, to configure a new context-aware version of FamiWare and to generate code ready to install on heterogeneous devices, we define a mapping that automatically transforms metamodel elements defining contexts into elements of the FamiWare family, and we also use the FamiWare configuration process to customize the new context-aware variant. Finally, we evaluate the benefits of our process and analyze both whether the new version of the middleware works as expected and whether it manages the contexts in an efficient way. PMID:23012505
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
NASA Technical Reports Server (NTRS)
1987-01-01
Potential applications of robots for cost effective commercial microelectronic processes in space were studied and the associated robotic requirements were defined. Potential space application areas include advanced materials processing, bulk crystal growth, and epitaxial thin film growth and related processes. All possible automation of these processes was considered, along with energy and environmental requirements. Aspects of robot capabilities considered include system intelligence, ROM requirements, kinematic and dynamic specifications, sensor design and configuration, flexibility and maintainability. Support elements discussed included facilities, logistics, ground support, launch and recovery, and management systems.
Achieving performance breakthroughs in an HMO business process through quality planning.
Hanan, K B
1993-01-01
Kaiser Permanente's Georgia Region commissioned a quality planning team to design a new process to improve payments to its suppliers and vendors. The result of the team's effort was a 73 percent reduction in cycle time. This team's experiences point to the advantages of process redesign as a quality planning model, as well as some general guidelines for its most effective use in teams. If quality planning project teams are carefully configured, sufficiently expert in the existing process, and properly supported by management, organizations can achieve potentially dramatic improvements in process performance using this approach.
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independent of the underlying operating system. Its architecture is designed on the basis of a server client model using CORBA based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges of the software agents were to achieve the maximum degree of autonomy possible, to create processes aware of dynamic conditions in their environment and with the ability to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance tests results of the Process Manager system.
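The basic job-control behaviour described above, starting, stopping, and monitoring processes on behalf of clients, can be sketched with a much-simplified stand-in. The real ATLAS component uses CORBA-based C++ software agents; the class below is only an illustrative local analogue and assumes a Unix-like system where a `sleep` command exists.

```python
# A hedged, much-simplified sketch of a process-manager agent offering basic job
# control (start, stop, status) for arbitrary programs. Not the ATLAS implementation.
import signal
import subprocess

class ProcessAgent:
    def __init__(self):
        self.managed = {}                    # name -> Popen handle

    def start(self, name, argv):
        self.managed[name] = subprocess.Popen(argv)

    def status(self, name):
        proc = self.managed.get(name)
        if proc is None:
            return "unknown"
        return "running" if proc.poll() is None else f"exited({proc.returncode})"

    def stop(self, name):
        proc = self.managed.get(name)
        if proc is not None and proc.poll() is None:
            proc.send_signal(signal.SIGTERM)
            proc.wait(timeout=10)

agent = ProcessAgent()
agent.start("dummy-reader", ["sleep", "2"])  # 'sleep' stands in for a managed process
print(agent.status("dummy-reader"))
agent.stop("dummy-reader")
```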
Controlling changes - lessons learned from waste management facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B.M.; Koplow, A.S.; Stoll, F.E.
This paper discusses lessons learned about change control at the Waste Reduction Operations Complex (WROC) and Waste Experimental Reduction Facility (WERF) of the Idaho National Engineering Laboratory (INEL). WROC and WERF have developed and implemented change control and an as-built drawing process and have identified structures, systems, and components (SSCs) for configuration management. The operations have also formed an Independent Review Committee to minimize costs and resources associated with changing documents. WROC and WERF perform waste management activities at the INEL. WROC activities include storage, treatment, and disposal of hazardous and mixed waste. WERF provides volume reduction of solid low-level waste through compaction, incineration, and sizing operations. WROC and WERF's efforts aim to improve change control processes that have worked inefficiently in the past.
Survey of piloting factors in V/STOL aircraft with implications for flight control system design
NASA Technical Reports Server (NTRS)
Ringland, R. F.; Craig, S. J.
1977-01-01
Flight control system design factors involved for pilot workload relief are identified. Major contributors to pilot workload include configuration management and control and aircraft stability and response qualities. A digital fly by wire stability augmentation, configuration management, and configuration control system is suggested for reduction of pilot workload during takeoff, hovering, and approach.
How Configuration Management Helps Projects Innovate and Communicate
NASA Technical Reports Server (NTRS)
Cioletti, Louis A.; Guidry, Carla F.
2009-01-01
This slide presentation reviews the concept of Configuration Management (CM) and compares it to the standard view of Project Management (PM). It presents two PM models, the Kepner-Tregoe and Deming models, describes why projects fail, and presents methods for how CM helps projects innovate and communicate.
76 FR 12617 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... installing new operational software for the electrical load management system and configuration database... the electrical load management system operational software and configuration database software, in... Management, P.O. Box 3707, MC 2H-65, Seattle, Washington 98124-2207; telephone 206-544-5000, extension 1...
Configuration Management of an Optimization Application in a Research Environment
NASA Technical Reports Server (NTRS)
Townsend, James C.; Salas, Andrea O.; Schuler, M. Patricia
1999-01-01
Multidisciplinary design optimization (MDO) research aims to increase interdisciplinary communication and reduce design cycle time by combining system analyses (simulations) with design space search and decision making. The High Performance Computing and Communication Program's current High Speed Civil Transport application, HSCT4.0, at NASA Langley Research Center involves a highly complex analysis process with high-fidelity analyses that are more realistic than previous efforts at the Center. The multidisciplinary processes have been integrated to form a distributed application by using the Java language and Common Object Request Broker Architecture (CORBA) software techniques. HSCT4.0 is a research project in which both the application problem and the implementation strategy have evolved as the MDO and integration issues became better understood. Whereas earlier versions of the application and integrated system were developed with a simple, manual software configuration management (SCM) process, it was evident that this larger project required a more formal SCM procedure. This report briefly describes the HSCT4.0 analysis and its CORBA implementation and then discusses some SCM concepts and their application to this project. In anticipation that SCM will prove beneficial for other large research projects, the report concludes with some lessons learned in overcoming SCM implementation problems for HSCT4.0.
Configural face processing impacts race disparities in humanization and trust
Cassidy, Brittany S.; Krendl, Anne C.; Stanko, Kathleen A.; Rydell, Robert J.; Young, Steven G.; Hugenberg, Kurt
2018-01-01
The dehumanization of Black Americans is an ongoing societal problem. Reducing configural face processing, a well-studied aspect of typical face encoding, decreases the activation of human-related concepts to White faces, suggesting that the extent that faces are configurally processed contributes to dehumanization. Because Black individuals are more dehumanized relative to White individuals, the current work examined how configural processing might contribute to their greater dehumanization. Study 1 showed that inverting faces (which reduces configural processing) reduced the activation of human-related concepts toward Black more than White faces. Studies 2a and 2b showed that reducing configural processing affects dehumanization by decreasing trust and increasing homogeneity among Black versus White faces. Studies 3a–d showed that configural processing effects emerge in racial outgroups for whom untrustworthiness may be a more salient group stereotype (i.e., Black, but not Asian, faces). Study 4 provided evidence that these effects are specific to reduced configural processing versus more general perceptual disfluency. Reduced configural processing may thus contribute to the greater dehumanization of Black relative to White individuals. PMID:29910510
Furberg, Robert D; Ortiz, Alexa M; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A
2016-06-27
Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. The objective was to develop three protocols to support tablet configuration, tablet management, and tablet maintenance. The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care-related infections were reviewed to develop the infection control protocol to support tablet maintenance. This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings.
A cloud-based semantic wiki for user training in healthcare process management.
Papakonstantinou, D; Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G
2011-01-01
Successful healthcare process design requires active participation of users who are familiar with the cooperative and collaborative nature of healthcare delivery, expressed in terms of healthcare processes. Hence, reusable, flexible, agile and adaptable training material is needed to enable users to instill their knowledge and expertise in healthcare process management and (re)configuration activities. To this end, social software, such as a wiki, could be used as it supports cooperation and collaboration anytime, anywhere, and it can be combined with semantic web technology that enables structuring pieces of information for easy retrieval, reuse and exchange between different systems and tools. In this paper a semantic wiki is presented as a means for developing training material for healthcare providers regarding healthcare process management. The semantic wiki should act as a collective online memory containing training material that is accessible to authorized users, thus enhancing the training process with collaboration and cooperation capabilities. It is proposed that the wiki is stored in a secure virtual private cloud that is accessible from anywhere, albeit an essentially open environment, while meeting the requirements of redundancy, high performance and autoscaling.
Oweis, Salah; D'Ussel, Louis; Chagnon, Guy; Zuhowski, Michael; Sack, Tim; Laucournet, Gaullume; Jackson, Edward J.
2002-06-04
A stand-alone battery module including: (a) a mechanical configuration; (b) a thermal management configuration; (c) an electrical connection configuration; and (d) an electronics configuration. Such a module is fully interchangeable in a battery pack assembly, mechanically, from the thermal management point of view, and electrically. With the same hardware, the module can accommodate different cell sizes and, therefore, can easily have different capacities. The module structure is designed to accommodate the electronics monitoring, protection, and printed wiring assembly boards (PWAs), as well as to allow airflow through the module. A plurality of modules may easily be connected together to form a battery pack. The parts of the module are designed to facilitate their manufacture and assembly.
NASA Technical Reports Server (NTRS)
Nichols, J. D.; Britten, R. A.; Parks, G. S.; Voss, J. M.
1990-01-01
NASA's JPL has completed a feasibility study using infrared technologies for wildland fire suppression and management. The study surveyed user needs, examined available technologies, matched the user needs with technologies, and defined an integrated infrared wildland fire mapping concept system configuration. System component trade-offs were presented for evaluation in the concept system configuration. The economic benefits of using infrared technologies in fire suppression and management were examined. Follow-on concept system configuration development and implementation were proposed.
Human-Technology Centric In Cyber Security Maintenance For Digital Transformation Era
NASA Astrophysics Data System (ADS)
Ali, Firkhan Ali Bin Hamid; Zalisham Jali, Mohd, Dr
2018-05-01
The development of digital transformation in organizations has become more expansive in the present and coming years because of the active demand for ICT services among organizations, whether in government agencies or the private sector. While digital transformation has led manufacturers to incorporate sensors and software analytics into their offerings, the same innovation has also brought pressure to offer clients more accommodating appliance deployment options. A well-considered plan is therefore needed to implement cyber infrastructures and equipment. Cyber security plays an important role in ensuring that ICT components and infrastructures perform well and support the organization's business success. This paper presents a study of security management models to guide the security maintenance of existing cyber infrastructures. To build a security model for currently existing cyber infrastructures, security workforces and security processes are combined to extract the security maintenance requirements of those infrastructures. The assessment focuses on cyber security maintenance within security models for cyber infrastructures and presents an approach for theoretical and practical analysis based on the selected security management models. The proposed model then performs an evaluation of the analysis, which can be used to obtain insights into the configuration and to specify desired and undesired configurations. The cyber security maintenance within a security management model was implemented in a prototype and evaluated for practical and theoretical scenarios. Furthermore, a framework model is presented which allows the evaluation of configuration changes in agile and dynamic cyber infrastructure environments with regard to properties such as vulnerabilities or expected availability. From a security perspective, this evaluation can be used to monitor the security level of the configuration over its lifetime and to indicate degradations.
Cooperative optimization of reconfigurable machine tool configurations and production process plan
NASA Astrophysics Data System (ADS)
Xie, Nan; Li, Aiping; Xue, Wei
2012-09-01
The production process plan design and the configurations of a reconfigurable machine tool (RMT) interact with each other. Reasonable process plans with suitable RMT configurations help to improve product quality and reduce production cost. Therefore, a cooperative strategy is needed to solve both issues concurrently. In this paper, a cooperative optimization model for RMT configurations and the production process plan is presented. Its objectives take into account the impacts of both process and configuration. Moreover, a novel genetic algorithm is developed to provide optimal or near-optimal solutions: first, its chromosome is redesigned to comprise three parts, namely operations, process plan, and RMT configurations; second, new selection, crossover and mutation operators are developed to handle the process constraints from the operation processes (OP) graph, since conventional operators could otherwise generate illegal solutions that violate these limits; finally, the optimal RMT configurations under the optimal process plan design can be obtained. A manufacturing line case composed of three RMTs is presented. The case shows that the optimal process plan and RMT configurations are obtained concurrently, production cost decreases by 6.28%, and nonmonetary performance increases by 22%. The proposed method can determine both RMT configurations and the production process, and improve production capacity, functions and equipment utilization for RMTs.
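To make the encoding described above concrete, the following is a minimal, illustrative Python sketch of a genetic algorithm whose chromosome has the three parts named in the abstract (an operation order, a process-plan choice per operation, and an RMT configuration per operation) and whose operators repair violations of a toy precedence constraint. The operations, constraint, and cost function are hypothetical stand-ins, not the authors' model.

```python
# Minimal sketch (not the authors' implementation): a GA with a three-part
# chromosome and a repair step that enforces a toy precedence constraint.
import random

OPS = ["op1", "op2", "op3", "op4"]             # hypothetical operations
PRECEDENCE = [("op1", "op3"), ("op2", "op4")]  # op1 before op3, op2 before op4
N_PLANS, N_CONFIGS = 3, 2                      # hypothetical alternatives

def repair(order):
    """Reorder operations so every precedence pair is respected."""
    order = list(order)
    for before, after in PRECEDENCE:
        if order.index(before) > order.index(after):
            i, j = order.index(after), order.index(before)
            order[i], order[j] = order[j], order[i]
    return order

def random_chromosome():
    order = repair(random.sample(OPS, len(OPS)))
    plans = [random.randrange(N_PLANS) for _ in OPS]
    configs = [random.randrange(N_CONFIGS) for _ in OPS]
    return order, plans, configs

def cost(chrom):
    """Toy stand-in for production cost; lower is better."""
    order, plans, configs = chrom
    return sum(p + 2 * c for p, c in zip(plans, configs)) + order.index("op4")

def crossover(a, b):
    order = repair(a[0][:2] + [o for o in b[0] if o not in a[0][:2]])
    cut = random.randrange(len(OPS))
    return order, a[1][:cut] + b[1][cut:], a[2][:cut] + b[2][cut:]

def mutate(chrom):
    order, plans, configs = chrom
    plans = [random.randrange(N_PLANS) if random.random() < 0.1 else p for p in plans]
    configs = [random.randrange(N_CONFIGS) if random.random() < 0.1 else c for c in configs]
    return repair(order), plans, configs

population = [random_chromosome() for _ in range(20)]
for _ in range(50):
    population.sort(key=cost)
    parents = population[:10]                  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print("best cost:", cost(min(population, key=cost)))
```

The repair step plays the role of the specialized operators described in the abstract: it keeps offspring feasible with respect to the OP graph instead of discarding them.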
Flexible medical image management using service-oriented architecture.
Shaham, Oded; Melament, Alex; Barak-Corren, Yuval; Kostirev, Igor; Shmueli, Noam; Peres, Yardena
2012-01-01
Management of medical images increasingly involves the need for integration with a variety of information systems. To address this need, we developed Content Management Offering (CMO), a platform for medical image management supporting interoperability through compliance with standards. CMO is based on the principles of service-oriented architecture, implemented with emphasis on three areas: clarity of business process definition, consolidation of service configuration management, and system scalability. Owing to the flexibility of this platform, a small team is able to accommodate requirements of customers varying in scale and in business needs. We describe two deployments of CMO, highlighting the platform's value to customers. CMO represents a flexible approach to medical image management, which can be applied to a variety of information technology challenges in healthcare and life sciences organizations.
Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing
Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge
2011-01-01
This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739
Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.
2006-01-01
The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
The Launch Processing System for Space Shuttle.
NASA Technical Reports Server (NTRS)
Springer, D. A.
1973-01-01
In order to reduce costs and accelerate vehicle turnaround, a single automated system will be developed to support shuttle launch site operations, replacing a multiplicity of systems used in previous programs. The Launch Processing System will provide real-time control, data analysis, and information display for the checkout, servicing, launch, landing, and refurbishment of the launch vehicles, payloads, and all ground support systems. It will also provide real-time and historical data retrieval for management and sustaining engineering (test records and procedures, logistics, configuration control, scheduling, etc.).
Issues and Techniques of CASE Integration With Configuration Management
1992-03-01
all four!) process architecture classes. For example, Frame Technology’s FrameMaker is a client/server tool because it provides server functions for... FrameMaker clients; it is a parent/child tool since a top-level control panel is used to "fork" child FrameMaker sessions; the "forked" FrameMaker ...sessions are persistent tools since they may be reused to create and modify any number of FrameMaker documents. Despite this, however, these process
Static and dynamic high power, space nuclear electric generating systems
NASA Technical Reports Server (NTRS)
Wetch, J. R.; Begg, L. L.; Koester, J. K.
1985-01-01
Space nuclear electric generating systems concepts have been assessed for their potential in satisfying future spacecraft high power (several megawatt) requirements. Conceptual designs have been prepared for reactor power systems using the most promising static (thermionic) and the most promising dynamic conversion processes. Component and system layouts, along with system mass and envelope requirements have been made. Key development problems have been identified and the impact of the conversion process selection upon thermal management and upon system and vehicle configuration is addressed.
1981-06-01
of standard reports are: 1. Activity Locator; 2. Report of Active Duty Obligations and Projected Rotation Date; and 3. Enlisted Personnel Advancement... advancement in rate, etc.), currently used forms are listed and analyzed to determine how such transactions are processed under the existing system... to support an SDS functional organization. It is emphasized that configuration design should consider the requirements which will be imposed under all
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Edward J., Jr.; Henry, Karen Lynne
Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and can interface with existing systems, such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the Integrated Workplace Management System (IWMS) from Tririga, Inc. Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.
Autonomic Management in a Distributed Storage System
NASA Astrophysics Data System (ADS)
Tauber, Markus
2010-07-01
This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
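The core idea above, setting configuration parameter values dynamically from observed conditions rather than fixing them a priori, can be illustrated with a small feedback loop. The parameter name, metric, and thresholds below are hypothetical and are not taken from ASA or the thesis.

```python
# Illustrative sketch only: an autonomic control loop that observes a metric
# and nudges a configuration parameter instead of leaving it static.
import random

class AutonomicManager:
    def __init__(self, fetch_parallelism=2):
        self.fetch_parallelism = fetch_parallelism   # hypothetical retrieval parameter

    def observe_latency(self):
        # Stand-in for a real measurement of data-retrieval latency (ms).
        return random.gauss(100 / self.fetch_parallelism, 5)

    def adapt(self, target_ms=40):
        latency = self.observe_latency()
        if latency > target_ms and self.fetch_parallelism < 8:
            self.fetch_parallelism += 1      # conditions worsened: fetch more in parallel
        elif latency < 0.5 * target_ms and self.fetch_parallelism > 1:
            self.fetch_parallelism -= 1      # conditions eased: release resources
        return latency

mgr = AutonomicManager()
for cycle in range(10):
    latency = mgr.adapt()
    print(f"cycle {cycle}: latency={latency:.1f} ms, parallelism={mgr.fetch_parallelism}")
```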
Completion of a Hospital-Wide Comprehensive Image Management and Communication System
NASA Astrophysics Data System (ADS)
Mun, Seong K.; Benson, Harold R.; Horii, Steven C.; Elliott, Larry P.; Lo, Shih-Chung B.; Levine, Betty A.; Braudes, Robert E.; Plumlee, Gabriel S.; Garra, Brian S.; Schellinger, Dieter; Majors, Bruce; Goeringer, Fred; Kerlin, Barbara D.; Cerva, John R.; Ingeholm, Mary-Lou; Gore, Tim
1989-05-01
A comprehensive image management and communication (IMAC) network has been installed at Georgetown University Hospital for an extensive clinical evaluation. The network is based on the AT&T CommView system and it includes interfaces to 12 imaging devices, 15 workstations (inside and outside of the radiology department), a teleradiology link to an imaging center, an optical jukebox and a number of advanced image display and processing systems such as Sun workstations, PIXAR, and PIXEL. Details of network configuration and its role in the evaluation project are discussed.
NASA Astrophysics Data System (ADS)
Terminanto, A.; Swantoro, H. A.; Hidayanto, A. N.
2017-12-01
Enterprise Resource Planning (ERP) is an integrated information system for managing the business processes of companies of various scales. Because of the high cost of ERP investment, ERP implementation is usually undertaken by large-scale enterprises, and due to the complexity of implementation problems, the success rate of ERP implementation is still low. Open source (OSS) ERP systems have become an alternative ERP choice for SME companies in terms of cost and customization. This study aims to identify the characteristics and configuration of an OSS ERP Payroll module implementation at KKPS (Employee Cooperative PT SRI) using OSS ERP Odoo and the ASAP method. The study is classified as case study research and action research. The OSS ERP Payroll module was implemented because the HR section of KKPS had not been integrated with other parts of the organization. The results of this study are the characteristics and configuration of the OSS ERP Payroll module at KKPS.
Three alternative structural configurations for phlebotomy: a comparison of effectiveness.
Mannion, Heidi; Nadder, Teresa
2007-01-01
This study was designed to compare the effectiveness of three alternative structural configurations for inpatient phlebotomy. It was hypothesized that decentralized phlebotomy was less effective when compared to centralized inpatient phlebotomy. A non-experimental prospective survey design was conducted at the institution level. Laboratory managers completed an organizational survey and collected data on inpatient blood specimens during a 30-day data collection period. A random sample (n=31) of hospitals with onsite laboratories in the United States was selected from a database purchased from the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Effectiveness of the blood collection process was measured by the percentage of specimens rejected during the data collection period. Analysis of variance showed a statistically significant difference in the percentage of specimens rejected for centralized, hybrid, and decentralized phlebotomy configurations [F (2, 28) = 4.27, p = .02] with an effect size of .23. Post-hoc comparison using Tukey's HSD indicated that the mean percentage of specimens rejected for centralized phlebotomy (M = .045, SD = 0.36) was significantly different from the decentralized configuration (M = 1.42, SD = 0.92, p = .03). The centralized configuration was thus found to be more effective when compared to the decentralized configuration.
Furberg, Robert D; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A
2016-01-01
Background Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. Objective Develop three protocols to support tablet configuration, tablet management, and tablet maintenance. Methods The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is a mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care–related infections were reviewed to develop the infection control protocol to support tablet maintenance. Results This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. Conclusions These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings. PMID:27350013
Data handling with SAM and art at the NOvA experiment
Aurisano, A.; Backhouse, C.; Davies, G. S.; ...
2015-12-23
During operations, NOvA produces between 5,000 and 7,000 raw files per day with peaks in excess of 12,000. These files must be processed in several stages to produce fully calibrated and reconstructed analysis files. In addition, many simulated neutrino interactions must be produced and processed through the same stages as data. To accommodate the large volume of data and Monte Carlo, production must be possible both on the Fermilab grid and on off-site farms, such as the ones accessible through the Open Science Grid. To handle the challenge of cataloging these files and to facilitate their off-line processing, we have adopted the SAM system developed at Fermilab. SAM indexes files according to metadata, keeps track of each file's physical locations, provides dataset management facilities, and facilitates data transfer to off-site grids. To integrate SAM with Fermilab's art software framework and the NOvA production workflow, we have developed methods to embed metadata into our configuration files, art files, and standalone ROOT files. A module in the art framework propagates the embedded information from configuration files into art files, and from input art files to output art files, allowing us to maintain a complete processing history within our files. Embedding metadata in configuration files also allows configuration files indexed in SAM to be used as inputs to Monte Carlo production jobs. Further, SAM keeps track of the input files used to create each output file. Parentage information enables the construction of self-draining datasets, which have become the primary production paradigm used at NOvA. In this study we will present an overview of SAM at NOvA and how it has transformed the file production framework used by the experiment.
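As an illustration of the metadata-embedding and parentage ideas described above, here is a hedged Python sketch; the file format, helper functions, and field names are simplified placeholders and do not reflect the actual SAM, art, or ROOT interfaces.

```python
# Hedged sketch (not the real SAM or art APIs): embed metadata in a config file
# and carry parentage from input files into an output file's metadata record.
import json

def write_config_with_metadata(path, config_text, metadata):
    """Append a machine-readable metadata block to a plain-text config file."""
    with open(path, "w") as f:
        f.write(config_text)
        f.write("\n# BEGIN-METADATA\n")
        f.write("# " + json.dumps(metadata) + "\n")
        f.write("# END-METADATA\n")

def read_embedded_metadata(path):
    """Recover the embedded metadata block, if present."""
    with open(path) as f:
        for line in f:
            if line.startswith("# {"):
                return json.loads(line[2:])
    return {}

def output_metadata(output_name, input_files, stage):
    """Build an output record whose 'parents' field supports self-draining datasets."""
    return {"file_name": output_name, "data_tier": stage, "parents": sorted(input_files)}

write_config_with_metadata("reco.cfg", "threshold: 12\n",
                           {"data_tier": "reco", "release": "S15"})
print(read_embedded_metadata("reco.cfg"))
print(output_metadata("run123_reco.root", ["run123_raw.art"], "reco"))
```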
Mechanical System Analysis/Design Tool (MSAT) Quick Guide
NASA Technical Reports Server (NTRS)
Lee, HauHua; Kolb, Mark; Madelone, Jack
1998-01-01
MSAT is a unique multi-component multi-disciplinary tool that organizes design analysis tasks around object-oriented representations of configuration components, analysis programs and modules, and data transfer links between them. This creative modular architecture enables rapid generation of input streams for trade-off studies of various engine configurations. The data transfer links automatically transport output from one application as relevant input to the next application once the sequence is set up by the user. The computations are managed via constraint propagation, with the constraints supplied by the user as part of any optimization module. The software can be used in the preliminary design stage as well as during the detail design of the product development process.
Intelligent Hybrid Vehicle Power Control. Part 2. Online Intelligent Energy Management
2012-06-30
IEC_HEV for vehicle energy optimization. [Figure 1: Power Split HEV configuration.] IEC_HEV, the online energy control, is a component... in the Vehicle System Controller (VSC). The VSC for this configuration must manage the powertrain control in order to maintain a proper level of... charge in the battery. However, since two power sources are available to propel the vehicle, the VSC in this configuration has the additional
IceProd 2: A Next Generation Data Analysis Framework for the IceCube Neutrino Observatory
NASA Astrophysics Data System (ADS)
Schultz, D.
2015-12-01
We describe the overall structure and new features of the second generation of IceProd, a data processing and management framework. IceProd was developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations, detector data, and analysis levels. It runs as a separate layer on top of grid and batch systems. This is accomplished by a set of daemons which process job workflow, maintaining configuration and status information on the job before, during, and after processing. IceProd can also manage complex workflow DAGs across distributed computing grids in order to optimize usage of resources. IceProd is designed to be very light-weight; it runs as a python application fully in user space and can be set up easily. For the initial completion of this second version of IceProd, improvements have been made to increase security, reliability, scalability, and ease of use.
NASA Astrophysics Data System (ADS)
Delventhal, D.; Schultz, D.; Diaz Velez, J. C.
2017-10-01
IceProd is a data processing and management framework developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations, detector data, and data driven analysis. It runs as a separate layer on top of grid and batch systems. This is accomplished by a set of daemons which process job workflow, maintaining configuration and status information on the job before, during, and after processing. IceProd can also manage complex workflow DAGs across distributed computing grids in order to optimize usage of resources. IceProd has recently been rewritten to increase its scaling capabilities, handle user analysis workflows together with simulation production, and facilitate the integration with 3rd party scheduling tools. IceProd 2, the second generation of IceProd, has been running in production for several months now. We share our experience setting up the system and things we’ve learned along the way.
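A minimal sketch of the workflow-DAG idea: dependent processing tasks are ordered so that each runs only after its prerequisites, while a status table is kept up to date before, during, and after each step. The task names are hypothetical and this is not IceProd code.

```python
# Minimal sketch, not IceProd: topological ordering of a small job DAG with
# status tracking around each step.  Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

dag = {"generate": set(),
       "simulate": {"generate"},
       "reconstruct": {"simulate"},
       "filter": {"reconstruct"}}
status = {task: "queued" for task in dag}

for task in TopologicalSorter(dag).static_order():
    status[task] = "processing"
    # ... here a real system would submit the task to a grid/batch scheduler ...
    status[task] = "complete"
    print(task, status)
```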
Software Configuration Management Plan for the B-Plant Canyon Ventilation Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
MCDANIEL, K.S.
1999-08-31
Project W-059 installed a new B Plant Canyon Ventilation System. Monitoring and control of the system is implemented by the Canyon Ventilation Control System (CVCS). This Software Configuration Management Plan provides instructions for change control of the CVCS.
Gaythorpe, Katy; Adams, Ben
2016-05-21
Epidemics of water-borne infections often follow natural disasters and extreme weather events that disrupt water management processes. The impact of such epidemics may be reduced by deployment of transmission control facilities such as clinics or decontamination plants. Here we use a relatively simple mathematical model to examine how demographic and environmental heterogeneities, population behaviour, and behavioural change in response to the provision of facilities, combine to determine the optimal configurations of limited numbers of facilities to reduce epidemic size, and endemic prevalence. We show that, if the presence of control facilities does not affect behaviour, a good general rule for responsive deployment to minimise epidemic size is to place them in exactly the locations where they will directly benefit the most people. However, if infected people change their behaviour to seek out treatment then the deployment of facilities offering treatment can lead to complex effects that are difficult to foresee. So careful mathematical analysis is the only way to get a handle on the optimal deployment. Behavioural changes in response to control facilities can also lead to critical facility numbers at which there is a radical change in the optimal configuration. So sequential improvement of a control strategy by adding facilities to an existing optimal configuration does not always produce another optimal configuration. We also show that the pre-emptive deployment of control facilities has conflicting effects. The configurations that minimise endemic prevalence are very different to those that minimise epidemic size. So cost-benefit analysis of strategies to manage endemic prevalence must factor in the frequency of extreme weather events and natural disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.
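The simple rule discussed above for responsive deployment when facilities do not change behaviour (place each facility where it directly benefits the most people) can be written as a one-line greedy selection; the location populations below are hypothetical.

```python
# Illustrative sketch of the rule of thumb above, not the paper's model:
# pick the k most populous locations for the k available facilities.
populations = {"A": 1200, "B": 450, "C": 3100, "D": 900, "E": 2100}  # hypothetical

def responsive_deployment(populations, k):
    """Return the k locations where facilities directly benefit the most people."""
    return sorted(populations, key=populations.get, reverse=True)[:k]

print(responsive_deployment(populations, k=2))   # -> ['C', 'E']
```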
Exploration Mission Benefits From Logistics Reduction Technologies
NASA Technical Reports Server (NTRS)
Broyan, James Lee, Jr.; Schlesinger, Thilini; Ewert, Michael K.
2016-01-01
Technologies that reduce logistical mass, volume, and the crew time dedicated to logistics management become more important as exploration missions extend further from the Earth. Even modest reductions in logistical mass can have a significant impact because it also reduces the packing burden. NASA's Advanced Exploration Systems' Logistics Reduction Project is developing technologies that can directly reduce the mass and volume of crew clothing and metabolic waste collection. Also, cargo bags have been developed that can be reconfigured for crew outfitting, and trash processing technologies are under development to increase habitable volume and improve protection against solar storm events. Additionally, Mars class missions are sufficiently distant that even logistics management without resupply can be problematic due to the communication time delay with Earth. Although exploration vehicles are launched with all consumables and logistics in a defined configuration, the configuration continually changes as the mission progresses. Traditionally significant ground and crew time has been required to understand the evolving configuration and locate misplaced items. For key mission events and unplanned contingencies, the crew will not be able to rely on the ground for logistics localization assistance. NASA has been developing a radio frequency identification autonomous logistics management system to reduce crew time for general inventory and enable greater crew self-response to unplanned events when a wide range of items may need to be located in a very short time period. This paper provides a status of the technologies being developed and their mission benefits for exploration missions.
Exploration Mission Benefits From Logistics Reduction Technologies
NASA Technical Reports Server (NTRS)
Broyan, James Lee, Jr.; Ewert, Michael K.; Schlesinger, Thilini
2016-01-01
Technologies that reduce logistical mass, volume, and the crew time dedicated to logistics management become more important as exploration missions extend further from the Earth. Even modest reductions in logistical mass can have a significant impact because it also reduces the packaging burden. NASA's Advanced Exploration Systems' Logistics Reduction Project is developing technologies that can directly reduce the mass and volume of crew clothing and metabolic waste collection. Also, cargo bags have been developed that can be reconfigured for crew outfitting, and trash processing technologies are under development to increase habitable volume and improve protection against solar storm events. Additionally, Mars class missions are sufficiently distant that even logistics management without resupply can be problematic due to the communication time delay with Earth. Although exploration vehicles are launched with all consumables and logistics in a defined configuration, the configuration continually changes as the mission progresses. Traditionally significant ground and crew time has been required to understand the evolving configuration and to help locate misplaced items. For key mission events and unplanned contingencies, the crew will not be able to rely on the ground for logistics localization assistance. NASA has been developing a radio-frequency-identification autonomous logistics management system to reduce crew time for general inventory and enable greater crew self-response to unplanned events when a wide range of items may need to be located in a very short time period. This paper provides a status of the technologies being developed and their mission benefits for exploration missions.
NASA Technical Reports Server (NTRS)
Dreher, Joseph G.
2009-01-01
For expedience in delivering dispersion guidance in the diversity of operational situations, National Weather Service Melbourne (MLB) and Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests the National Centers for Environmental Prediction (NCEP) guidance, such as the North American Mesoscale (NAM) and the Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecast (WRF) model output into HYSPLIT format.
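The automated chain described above (download NCEP guidance, convert to HYSPLIT binary format, run from pre-selected sites, post-process) might be scripted roughly as follows. The command names, sites, and file names are placeholders rather than the AMU's actual tools or the real HYSPLIT executables, and the sketch prints the commands instead of executing them.

```python
# Schematic sketch only: placeholder commands standing in for the real
# download/conversion/HYSPLIT tools; printed as a dry run.
from datetime import datetime, timezone

SITES = [("SiteA", 28.61, -80.60), ("SiteB", 37.94, -75.46)]   # hypothetical sites

def build_cycle_commands(model="NAM"):
    cycle = datetime.now(timezone.utc).strftime("%Y%m%d%H")
    grib, arl = f"{model}_{cycle}.grib2", f"{model}_{cycle}.arl"
    cmds = [
        ["download_ncep_guidance", model, cycle, grib],    # fetch NAM/RUC fields
        ["convert_grib_to_arl", grib, arl],                # HYSPLIT binary format
    ]
    for name, lat, lon in SITES:
        cmds.append(["run_hysplit_trajectory", arl, str(lat), str(lon),
                     f"{name}_{cycle}.traj"])              # one run per site
        cmds.append(["plot_trajectory", f"{name}_{cycle}.traj", f"{name}_{cycle}.png"])
    return cmds

for cmd in build_cycle_commands():
    print(" ".join(cmd))   # a scheduler (e.g. cron) could call subprocess.run(cmd) instead
```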
1991-12-01
database, the Real Time Operation Management Information System (ROMIS), and the Fitting Out Management Information System (FOMIS). These three configuration... [Acronym list: ROMIS, Real Time Operation Management Information System; SCLSIS, Ship's Configuration and Logistics Information System; SCN, Shipbuilding and...]
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. This is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing environment is an intensive ever-growing system used for real-time data collection and analysis. Composed of heterogeneous and sometimes groups of custom-tuned machines, the computing infrastructure was previously managed by manual configurations and inconsistently monitored by a combination of tools. This situation led to configuration inconsistency and an overload of repetitive tasks along with lackluster communication between personnel and machines. Globally securing this heterogeneous cyberinfrastructure was tedious at best and an agile, policy-driven system ensuring consistency, was pursued. Three configuration management tools, Chef, Puppet, and CFEngine have been compared in reliability, versatility and performance along with a comparison of infrastructure monitoring tools Nagios and Icinga. STAR has selected the CFEngine configuration management tool and the Icinga infrastructure monitoring system leading to a versatile and sustainable solution. By leveraging these two tools STAR can now swiftly upgrade and modify the environment to its needs with ease as well as promptly react to cyber-security requests. By creating a sustainable long term monitoring solution, the detection of failures was reduced from days to minutes, allowing rapid actions before the issues become dire problems, potentially causing loss of precious experimental data or uptime.
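The policy-driven, desired-state idea behind a tool such as CFEngine can be illustrated generically: declare the configuration a machine should have, then repeatedly compare and repair. The sketch below is plain Python with a hypothetical file and content, not CFEngine policy syntax or STAR's actual policies.

```python
# Generic illustration of desired-state convergence (not CFEngine code):
# compare actual state to a declared state and repair any drift.
import os

DESIRED = {
    "/tmp/star-demo/online.conf": "buffer_size=64M\n",   # hypothetical file and content
}

def converge(desired):
    """Bring each managed file into compliance with its declared content."""
    for path, content in desired.items():
        os.makedirs(os.path.dirname(path), exist_ok=True)
        actual = open(path).read() if os.path.exists(path) else None
        if actual != content:
            with open(path, "w") as f:
                f.write(content)
            print(f"repaired {path}")
        else:
            print(f"{path} already compliant")

converge(DESIRED)   # run periodically, e.g. from a scheduler, to keep nodes consistent
```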
DDDAS-based Resilient Cyberspace (DRCS)
2016-08-03
Resilient Middleware (CRM), Supervisor VMs (SVMs), and Master VMs (MVMs). In what follows, we briefly highlight the main functions to be provided by each... phases. 4.5.1.2 Cloud Resilient Middleware (CRM): The CRM provides the control and management services to deploy and configure the software and... To speed up the process of selecting the appropriate resilient algorithms and execution environments, the CRM repository contains a set of SBE
Long-Range Ballistic Missile Defense in Europe
2010-04-26
land-based configurations. • Phase 3 (2018 timeframe): Deploy improved area coverage in Europe against medium- and intermediate-range Iranian... military services. "I think that all our military programs should be managed through those regular processes," he said, and "that would include... 10 interceptors itself would likely have comprised an area somewhat larger than a football field. The area of supporting infrastructure was likely
NASA Technical Reports Server (NTRS)
Phojanamongkolkij, Nipa; Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Robbins, Steven W.; Fenbert, James W.; Hartman, Christopher L.
2015-01-01
The System-Oriented Runway Management (SORM) concept is a collection of capabilities focused on a more efficient use of runways while considering all of the factors that affect runway use. Tactical Runway Configuration Management (TRCM), one of the SORM capabilities, provides runway configuration and runway usage recommendations, and monitors the active runway configuration for suitability given existing factors. This report focuses on the metroplex environment, with two or more proximate airports having arrival and departure operations that are highly interdependent. The myriad factors that affect metroplex operations require consideration in arriving at runway configurations that collectively best serve the system as a whole. To assess the metroplex TRCM (mTRCM) benefit, the performance metrics must be compared with the actual historical operations. The historical configuration schedules can be viewed as the schedules produced by subject matter experts (SMEs), and therefore are referred to as the SMEs' schedules. These schedules were obtained from the FAA's Aviation System Performance Metrics (ASPM) database; this is the most representative information regarding runway configuration selection by SMEs. This report assesses total delay, transit time, and throughput efficiency (TE) benefits using the mTRCM algorithm at representative volumes for today's traffic at the New York metroplex (N90).
78 FR 23685 - Airworthiness Directives; The Boeing Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-22
... installing new operational software for the electrical load management system and configuration database. The..., installing a new electrical power control panel, and installing new operational software for the electrical load management system and configuration database. Since the proposed AD was issued, we have received...
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate, and to meet current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because decision making for switching or routing is distributed across devices, with control and forwarding collocated on the same device. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate network management related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform to test and evaluate network performance under various configurations.
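In an SDN, network configuration becomes data pushed to a controller rather than per-device CLI state. The sketch below builds a simple flow rule and posts it to a controller's northbound REST endpoint; the URL and payload schema are hypothetical placeholders, not any specific controller's or OpenStack's API.

```python
# Hedged sketch: a flow rule expressed as data and sent to a hypothetical
# SDN controller endpoint; dry-run by default so the script is runnable as-is.
import json
import urllib.request

CONTROLLER_URL = "http://controller.example:8181/flows"   # hypothetical endpoint

def make_flow_rule(switch, src, dst, out_port, priority=100):
    """Build a simple match/action flow entry as plain data."""
    return {"switch": switch, "priority": priority,
            "match": {"ipv4_src": src, "ipv4_dst": dst},
            "actions": [{"output": out_port}]}

def push_flow(rule, dry_run=True):
    body = json.dumps(rule).encode()
    if dry_run:                        # keep the sketch runnable without a controller
        print("POST", CONTROLLER_URL, body.decode())
        return
    req = urllib.request.Request(CONTROLLER_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

push_flow(make_flow_rule("s1", "10.0.0.1", "10.0.0.2", out_port=2))
```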
NASA Technical Reports Server (NTRS)
Franklin, J. A.; Innis, R. C.
1980-01-01
Flight experiments were conducted to evaluate two control concepts for configuration management during the transition to landing approach for a powered-lift STOL aircraft. NASA Ames' augmentor wing research aircraft was used in the program. Transitions from nominal level-flight configurations at terminal area pattern speeds were conducted along straight and curved descending flightpaths. Stabilization and command augmentation for attitude and airspeed control were used in conjunction with a three-cue flight director that presented commands for pitch, roll, and throttle controls. A prototype microwave system provided landing guidance. Results of these flight experiments indicate that these configuration management concepts permit the successful performance of transitions and approaches along curved paths by powered-lift STOL aircraft. Flight director guidance was essential to accomplish the task.
A new flight control and management system architecture and configuration
NASA Astrophysics Data System (ADS)
Kong, Fan-e.; Chen, Zongji
2006-11-01
The advanced fighter should possess capabilities such as supersonic cruise, stealth, agility, STOVL (Short Take-Off Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough only to improve the aerodynamic and propulsion systems. More importantly, it is necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle and control augmentation, but also management of the given mission. F-22 and JSF possess considerably outstanding flight control systems on the basis of the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. The control system constructed on this architecture should be highly integrated, inexpensive, fault-tolerant, safe, reliable and effective, and it will take charge of both flight control and mission management. Starting from this purpose, this paper carries out the following work: First, modeled on human nervous control, a three-level hierarchical control architecture is proposed. At the top of the architecture, the decision level is in charge of decision-making. In the middle, the organization and coordination level schedules resources, monitors the states of the fighter, switches control modes, and so on. The bottom is the execution level, which performs the concrete actuation and measurement. Then, according to their functions and resources, all the tasks involving flight control and mission management are assigned to the appropriate level. Finally, in order to validate the three-level architecture, a physical configuration is also shown. The configuration is distributed and applies new advances from the information technology industry, such as line-replaceable modules and cluster technology.
Emma L. Witt; Christopher D. Barton; Jeffrey W. Stringer; Randy Kolka; Mac A. Cherry
2016-01-01
Streamside management zones (SMZs) are a common best management practice (BMP) used to reduce water quality impacts from logging. The objective of this research was to evaluate the impact of varying SMZ configurations on water quality. Treatments (T1, T2, and T3) that varied in SMZ width, canopy retention within the SMZ, and BMP utilization were applied at the...
A Recipe for Streamlining Mission Management
NASA Technical Reports Server (NTRS)
Mitchell, Andrew E.; Semancik, Susan K.
2004-01-01
This paper describes a project's design and implementation for streamlining mission management with knowledge capture processes across multiple organizations of a NASA directorate. The project's focus is on standardizing processes and reports; enabling secure information access and ease of maintenance; automating and tracking appropriate workflow rules through process mapping; and infusing new technologies. This paper will describe a small team's experiences using XML technologies through an enhanced vendor suite of applications integrated on Windows-based platforms called the Wallops Integrated Scheduling and Document Management System (WISDMS). This paper describes our results using this system in a variety of endeavors, including providing range project scheduling and resource management for a Range and Mission Management Office; implementing an automated Customer Feedback system for a directorate; streamlining mission status reporting across a directorate; and initiating a document management, configuration management and portal access system for a Range Safety Office's programs. The end result is a reduction of the knowledge gap through better integration and distribution of information, improved process performance, automated metric gathering, and quicker identification of problem areas and issues. However, the real proof of the pudding comes through overcoming the user's reluctance to replace familiar, seasoned processes with new technology ingredients blended with automated procedures in an untested recipe. This paper shares some of the team's observations that led to better implementation techniques, as well as an ISO 9001 Best Practices citation. This project has provided a unique opportunity to advance NASA's competency in new technologies, as well as to strategically implement them within an organizational structure, while whetting the appetite for continued improvements in mission management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
WHITE, D.A.
1999-12-29
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the AZ1101 Mixer Pump Demonstration Data Acquisition System (DAS) and the Sludge Mobilization Cart (Gamma Cart) Data Acquisition and Control System (DACS).
Software life cycle dynamic simulation model: The organizational performance submodel
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1985-01-01
The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
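As a toy illustration of parameterized differential equations governing product, staffing, and funding levels, the following Euler integration uses made-up rate constants; it is not the report's actual submodel or parameter set.

```python
# Toy Euler integration in the spirit of the organizational response submodel;
# equations and constants here are illustrative only.
dt = 0.1                                    # time step (weeks)
staff, product, funds = 5.0, 0.0, 100.0     # initial levels
productivity, burn_rate, hire_rate = 0.8, 1.5, 0.05

for step in range(int(52 / dt)):
    d_product = productivity * staff        # output grows with staffing
    d_funds = -burn_rate * staff            # funding is consumed by staff
    d_staff = hire_rate * staff if funds > 0 else -0.2 * staff
    product += d_product * dt
    funds += d_funds * dt
    staff += d_staff * dt
    if funds <= 0 and staff < 0.5:          # project winds down when money runs out
        break

print(f"final product={product:.1f}, staff={staff:.1f}, funds={funds:.1f}")
```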
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
Marshall Space Flight Center Ground Systems Development and Integration
NASA Technical Reports Server (NTRS)
Wade, Gina
2016-01-01
Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include various systems engineering processes such as performing system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems that has evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.
Process assessment of small scale low temperature methanol synthesis
NASA Astrophysics Data System (ADS)
Hendriyana, Susanto, Herri; Subagjo
2015-12-01
Biomass is a renewable energy resource and has the potential to make a significant impact on domestic fuel supplies. Biomass can be converted to a fuel like methanol via a multi-step process, which can be split into the following main steps: biomass preparation, gasification, gas cooling and cleaning, gas shift, and methanol synthesis. Until now this configuration has had problems such as high production cost, catalyst deactivation, economy of scale, and large energy requirements. These problems are the leading inhibitors of biomass conversion to methanol and must be resolved to move toward economic viability. To address these issues, we developed various processes and new configurations for methanol synthesis via methyl formate. This configuration combines two reactors: one for the carbonylation of methanol and CO to form methyl formate, and a second for the hydrogenolysis of methyl formate and H2 to form two molecules of methanol. Four plant process configurations were compared on a biomass basis of 300 ton/day. The first configuration (A) is equipped with a steam reforming process for converting methane to CO and H2 to increase the H2/CO ratio. CO2 removal is necessary to avoid poisoning the catalyst. The COSORB process is used to increase the partial pressure of CO in the feed gas. In configuration B the steam reforming process is omitted with the aim of reducing the number of process units, so a lower investment cost is expected. In configuration C, both the steam reforming process and COSORB are omitted for the same reason. Configuration D is almost similar to configuration A; the difference is in the methanol synthesis, which is carried out in a single reactor, with the carbonylation and hydrogenolysis reactions taking place in the same reactor. These processes were analyzed in terms of the technical process, material and energy balances, and economics. The presented study is an attempt to compile most of these efforts in order to guide future work toward lower investment cost. From our study, the most interesting configuration for further development is configuration D, with a methanol yield of 112 ton/day and a capital cost of 526.4 x 10^6 IDR. At non-discounted and discounted rates, configuration D has break-even points of approximately six and eight years, respectively.
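The non-discounted versus discounted break-even comparison can be reproduced in outline with a simple payback calculation. The annual cash flow and discount rate below are hypothetical choices that happen to yield roughly six and eight years; they are not taken from the study.

```python
# Hedged illustration of simple vs. discounted payback; cash flow and discount
# rate are placeholders, only the capital cost figure is quoted from the text.
def payback_years(capital, annual_cash_flow, discount_rate=0.0):
    """Return the first year in which cumulative (discounted) cash flow covers capital."""
    cumulative, year = 0.0, 0
    while cumulative < capital and year < 50:
        year += 1
        cumulative += annual_cash_flow / (1 + discount_rate) ** year
    return year

capital = 526.4e6      # capital cost (IDR), as quoted for configuration D
cash_flow = 95e6       # hypothetical net annual cash flow (IDR)
print("simple payback:", payback_years(capital, cash_flow), "years")        # -> 6
print("discounted payback:", payback_years(capital, cash_flow, 0.08), "years")  # -> 8
```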
A network architecture for International Business Satellite communications
NASA Astrophysics Data System (ADS)
Takahata, Fumio; Nohara, Mitsuo; Takeuchi, Yoshio
Demand Assignment (DA) control is expected to be introduced in the International Business Satellite communications (IBS) network in order to cope with growing international business traffic. The paper discusses the DA/IBS network from the viewpoints of network configuration, satellite channel configuration and DA control. The network configuration proposed here consists of one Central Station with network management function and several Network Coordination Stations with user management function. A satellite channel configuration is also presented along with a tradeoff study on transmission bit rate, high power amplifier output power requirement, and service quality. The DA control flow and protocol based on CCITT Signalling System No. 7 are also proposed.
Quality assurance planning for lunar Mars exploration
NASA Technical Reports Server (NTRS)
Myers, Kay
1991-01-01
A review is presented of the tools and techniques required to meet the challenge of total quality in the goal of traveling to Mars and returning to the moon. One program used by NASA to ensure the integrity of baselined requirements documents is configuration management (CM). CM is defined as an integrated management process that documents and identifies the functional and physical characteristics of a facility's systems, structures, computer software, and components. It also ensures that changes to these characteristics are properly assessed, developed, approved, implemented, verified, recorded, and incorporated into the facility's documentation. Three principal areas are discussed that will realize significant efficiencies and enhanced effectiveness: change assessment, change avoidance, and requirements management.
Software Design Methodology Migration for a Distributed Ground System
NASA Technical Reports Server (NTRS)
Ritter, George; McNair, Ann R. (Technical Monitor)
2002-01-01
The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has been developed and has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes. The new software processes still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS tools have provided value to the project.
NASA Technical Reports Server (NTRS)
Leptoukh, Gregory G.
2005-01-01
The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing the data themselves, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.
TWRS configuration management requirement source document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vann, J.M.
The TWRS Configuration Management (CM) Requirement Source document prescribes CM as a basic product life-cycle function by which work and activities are conducted or accomplished. This document serves as the requirements basis for the TWRS CM program. The objective of the TWRS CM program is to establish consistency among requirements, physical/functional configuration, information, and documentation for TWRS and TWRS products, and to maintain this consistency throughout the life-cycle of TWRS and the product, particularly as changes are being made.
Systems and methods for detecting and processing
Johnson, Michael M [Livermore, CA; Yoshimura, Ann S [Tracy, CA
2006-03-28
Embodiments of the present invention provide systems and methods for detection. Sensing modules are provided in communication with one or more detectors. In some embodiments, detectors are provided that are sensitive to chemical, biological, or radiological agents. Embodiments of sensing modules include processing capabilities to analyze, perform computations on, and/or run models to predict or interpret data received from one or more detectors. Embodiments of sensing modules form various network configurations with one another and/or with one or more data aggregation devices. Some embodiments of sensing modules include power management functionalities.
SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects
NASA Technical Reports Server (NTRS)
Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M
1998-01-01
SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.
SAVER-Net lidar network in southern South America
NASA Astrophysics Data System (ADS)
Ristori, Pablo; Otero, Lidia; Jin, Yoshitaka; Barja, Boris; Shimizu, Atsushi; Barbero, Albane; Salvador, Jacobo; Bali, Juan Lucas; Herrera, Milagros; Etala, Paula; Acquesta, Alejandro; Quel, Eduardo; Sugimoto, Nobuo; Mizuno, Akira
2018-04-01
The South American Environmental Risk Management Network (SAVER-Net) is an instrumentation network, composed mainly of lidars, that provides real-time information for atmospheric hazard and risk management purposes in South America. The lidar network has been under development since 2012, and all of its sampling points are expected to be fully implemented by 2017. This paper describes the network's status and configuration, the data acquisition and processing scheme (protocols and data levels), as well as some aspects of the scientific networking within the Latin American Lidar Network (LALINET). The paper also lays out future plans for the operation of the network and its integration into major international collaborative efforts.
Tools to manage the enterprise-wide picture archiving and communications system environment.
Lannum, L M; Gumpf, S; Piraino, D
2001-06-01
The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.
49 CFR Appendix A to Part 232 - Schedule of Civil Penalties 1
Code of Federal Regulations, 2011 CFR
2011-10-01
... 7,500 (f) Improper use of car with inoperative or ineffective brakes 2,500 5,000 (g) Improper... Design, interoperability, and configuration management requirements: (a) Failure to meet minimum... comply with a proper configuration management plan 7,500 11,000 232.605 Training Requirements: (a...
Data base management system configuration specification. [computer storage devices
NASA Technical Reports Server (NTRS)
Neiers, J. W.
1979-01-01
The functional requirements and the configuration of the data base management system are described. Techniques and technology which will enable more efficient and timely transfer of useful data from the sensor to the user, extraction of information by the user, and exchange of information among the users are demonstrated.
Processing device with self-scrubbing logic
Wojahn, Christopher K.
2016-03-01
An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.
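The scrub/watchdog interaction described in this patent record can be pictured with a small behavioural sketch in Python. This is an illustrative simulation only, not the patented hardware design; the CRC check, heartbeat counter, and all names are assumptions introduced here.

import binascii

class ConfigMemory:
    """Configuration memory whose golden copy is held in an external memory."""
    def __init__(self, golden: bytes):
        self.data = bytearray(golden)

class SelfScrubber:
    """Reads the configuration memory and triggers a data feed when corruption is found."""
    def __init__(self, memory: ConfigMemory, golden_crc: int):
        self.memory = memory
        self.golden_crc = golden_crc
        self.heartbeat = 0                      # incremented on every healthy scrub cycle

    def scrub(self, external_copy: bytes) -> None:
        self.heartbeat += 1
        if binascii.crc32(self.memory.data) != self.golden_crc:
            # Compromised data detected: request a data feed from the external memory.
            self.memory.data[:] = external_copy

class Watchdog:
    """External watchdog that resets the processing unit if the scrubber stops making progress."""
    def __init__(self, scrubber: SelfScrubber):
        self.scrubber = scrubber
        self.last_seen = -1

    def scrubber_alive(self) -> bool:
        alive = self.scrubber.heartbeat != self.last_seen
        self.last_seen = self.scrubber.heartbeat
        return alive                            # False would trigger a reset of the processing unit

golden = b"configuration bitstream"
memory = ConfigMemory(golden)
scrubber = SelfScrubber(memory, binascii.crc32(golden))
watchdog = Watchdog(scrubber)

memory.data[0] ^= 0xFF                          # simulate a single-event upset
scrubber.scrub(golden)                          # scrubber repairs the memory from the external copy
assert bytes(memory.data) == golden and watchdog.scrubber_alive()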
NASA Technical Reports Server (NTRS)
Pepe, J. T.
1972-01-01
A functional design of the software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation with the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at the NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.
Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G
2017-08-01
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.
Design process of a photonics network for military platforms
NASA Astrophysics Data System (ADS)
Nelson, George F.; Rao, Nagarajan M.; Krawczak, John A.; Stevens, Rick C.
1999-02-01
Technology development in photonics is rapidly progressing. The concept of a Unified Network will provide reconfigurable network access to platform sensors, Vehicle Management Systems, Stores, and avionics. The reconfigurable taps into the network will accommodate present interface standards and provide scalability for the insertion of future interfaces. Significant to this development is the design and test of the Optical Backplane Interconnect System (OBIS), funded by Naval Air Systems Command and developed by Lockheed Martin Tactical Defense Systems - Eagan. OBIS merges the electrical and optical backplanes, with the interconnect fabric and card edge connectors finally providing adequate electrical and optical card access. Presently OBIS will support 1.2 Gb/s per fiber over multiples of 12 fibers per ribbon cable.
Next Generation Monitoring: Tier 2 Experience
NASA Astrophysics Data System (ADS)
Fay, R.; Bland, J.; Jones, S.
2017-10-01
Monitoring IT infrastructure is essential for maximizing availability and minimizing disruption by detecting failures and developing issues. The HEP group at Liverpool has recently updated its monitoring infrastructure with the goal of increasing coverage, improving visualization capabilities, and streamlining configuration and maintenance. Here we present a summary of Liverpool's experience, the monitoring infrastructure, and the tools used to build it. In brief, system checks are configured in Puppet using Hiera and managed by Sensu, replacing Nagios. Centralised logging is managed with Elasticsearch, together with Logstash and Filebeat. Kibana provides an interface for interactive analysis, including visualization and dashboards. Metric collection is also configured in Puppet, managed by collectd, and stored in Graphite, with Grafana providing a visualization and dashboard tool. The Uchiwa dashboard for Sensu provides a web interface for viewing infrastructure status. Alert capabilities are provided via external handlers. A custom alert handler is in development to provide an easily configurable, extensible and maintainable alert facility.
WIS Implementation Study Report. Volume 2. Resumes.
1983-10-01
WIS modernization that major attention be paid to interface definition and design, system integration and test, and configuration management of the... Estimates -- Computer Corporation of America -- 155; Test Processing Systems -- Newburyport Computer Associates, Inc. -- 183; Cluster II Papers -- Standards... enhancements of the SPL/I compiler system, development of test systems for the verification of SDEX/M, and the timing and architecture of the AN/UYK-20 and
Reinventing The Design Process: Teams and Models
NASA Technical Reports Server (NTRS)
Wall, Stephen D.
1999-01-01
The future of space mission designing will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules are fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.
Software Development and Test Methodology for a Distributed Ground System
NASA Technical Reports Server (NTRS)
Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)
2002-01-01
The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The Software processes that have evolved still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper will give an overview of how the Software Processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.
Configural and component processing in simultaneous and sequential lineup procedures.
Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep
2016-01-01
Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.
The Kepler Science Operations Center Pipeline Framework Extensions
NASA Technical Reports Server (NTRS)
Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.;
2010-01-01
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
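The unit-of-work idea sketched in this abstract, partitioning the target list into self-contained packages that a cluster node can process independently, can be illustrated with a short Python sketch. This is not the Kepler framework's Java API; the chunking rule, field names, and file format are hypothetical.

import json
from typing import Iterator

def generate_units_of_work(target_ids: list, chunk_size: int) -> Iterator[dict]:
    """Partition the target list into self-contained units of work."""
    for start in range(0, len(target_ids), chunk_size):
        yield {
            "unit_id": start // chunk_size,
            "targets": target_ids[start:start + chunk_size],   # everything the algorithm needs
            "parameters": {"module": "photometry"},             # illustrative pipeline parameters
        }

def write_unit(unit: dict, directory: str) -> str:
    """Package one unit of work into a single file for cluster dispatch or offline debugging."""
    path = f"{directory}/unit-{unit['unit_id']:04d}.json"
    with open(path, "w") as fh:
        json.dump(unit, fh)
    return path

for unit in generate_units_of_work(list(range(10)), chunk_size=4):
    print(write_unit(unit, directory="/tmp"))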
Process assessment of small scale low temperature methanol synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendriyana; Chemical Engineering Department, Faculty of Industrial Technology, InstitutTeknologi Bandung; Susanto, Herri, E-mail: herri@che.itb.ac.id
2015-12-29
Biomass is a renewable energy resource with the potential to make a significant impact on domestic fuel supplies. Biomass can be converted into a fuel such as methanol via a multi-step process: biomass preparation, gasification, gas cooling and cleaning, gas shift, and methanol synthesis. To date, such configurations still suffer from problems such as high production cost, catalyst deactivation, unfavorable economy of scale, and large energy requirements. These problems are the main barriers to the conversion of biomass to methanol and must be resolved to make the route economical. To address these issues, we developed several new process configurations for methanol synthesis via methyl formate. These configurations combine two reactors: one for the carbonylation of methanol with CO to form methyl formate, and a second for the hydrogenolysis of methyl formate with H2 to form two molecules of methanol. Four plant process configurations were compared on a basis of 300 ton/day of biomass. The first configuration (A) is equipped with a steam reforming process that converts methane to CO and H2 in order to increase the H2/CO ratio. CO2 removal is necessary to avoid poisoning the catalyst, and the COSORB process is used to increase the partial pressure of CO in the feed gas. Configuration B omits the steam reforming process with the aim of reducing the number of process units and hence the investment cost. Configuration C omits both the steam reforming and the COSORB processes for the same reason. Configuration D is almost identical to configuration A; the difference is that methanol synthesis is carried out in a single reactor, with the carbonylation and hydrogenolysis reactions taking place in the same vessel. These processes were analyzed in terms of the technical process, material and energy balances, and economics. The presented study is an attempt to compile these efforts in order to guide future work towards lower investment costs. From our study, the most attractive configuration for further development is configuration D, with a methanol yield of 112 ton/day and a capital cost of 526.4 × 10⁶ IDR. Configuration D reaches its break-even point after approximately six years at a non-discounted rate and eight years at a discounted rate.
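The break-even figures quoted above can be reproduced in spirit with a simple payback calculation comparing non-discounted and discounted cumulative cash flows against the capital cost. The annual cash flow and discount rate below are illustrative assumptions, not values from the study; only the capital cost is taken from the abstract.

def payback_year(capital_cost: float, annual_cash_flow: float,
                 discount_rate: float = 0.0, horizon: int = 30):
    """Return the first year in which cumulative (discounted) cash flow recovers the capital cost."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        cumulative += annual_cash_flow / (1.0 + discount_rate) ** year
        if cumulative >= capital_cost:
            return year
    return None

capital = 526.4e6      # capital cost in IDR, from the abstract
cash_flow = 95.0e6     # assumed annual net cash flow in IDR (placeholder)

print("non-discounted break-even:", payback_year(capital, cash_flow))        # 6 years
print("discounted break-even:", payback_year(capital, cash_flow, 0.08))      # 8 years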
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
Hoang, Long Phi; Biesbroek, Robbert; Tri, Van Pham Dang; Kummu, Matti; van Vliet, Michelle T H; Leemans, Rik; Kabat, Pavel; Ludwig, Fulco
2018-02-24
Climate change and accelerating socioeconomic developments increasingly challenge flood-risk management in the Vietnamese Mekong River Delta, a typical large, economically dynamic and highly vulnerable delta. This study identifies and addresses the emerging challenges for flood-risk management. Furthermore, we identify and analyse response solutions, focusing on meaningful configurations of the individual solutions and how they can be tailored to specific challenges, using expert surveys, content analysis techniques and statistical inferences. Our findings show that the challenges for flood-risk management are diverse, but critical challenges predominantly arise from the current governance and institutional settings. The top three challenges include weak collaboration, conflicting management objectives and low responsiveness to new issues. We identified 114 reported solutions and developed six flood management strategies that are tailored to specific challenges. We conclude that the current technology-centric flood management approach is insufficient given the rapid socioecological changes. This approach therefore should be adapted towards a more balanced management configuration where technical and infrastructural measures are combined with institutional and governance resolutions. Insights from this study contribute to the emerging repertoire of contemporary flood management solutions, especially through their configurations and tailoring to specific challenges.
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are:
* distributed computing over several UNIX and VxWorks computers
* fast throughput of time-critical data
* use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA)
* extensive configurability via stored, editable configuration files
* use of several computer languages so developers have "the right tool for the job"; C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables.
This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
NASA Technical Reports Server (NTRS)
Ghofranian, Siamak (Inventor); Chuang, Li-Ping Christopher (Inventor); Motaghedi, Pejmun (Inventor)
2016-01-01
A method and apparatus for docking a spacecraft. The apparatus comprises elongate members, movement systems, and force management systems. The elongate members are associated with a docking structure for a spacecraft. The movement systems are configured to move the elongate members axially such that the docking structure for the spacecraft moves. Each of the elongate members is configured to move independently. The force management systems connect the movement systems to the elongate members and are configured to limit the force applied by each of the elongate members to a desired threshold during movement of the elongate members.
Lahaie, A; Mottron, L; Arguin, M; Berthiaume, C; Jemel, B; Saumier, D
2006-01-01
Configural processing in autism was studied in Experiment 1 by using the face inversion effect. A normal inversion effect was observed in the participants with autism, suggesting intact configural face processing. A priming paradigm using partial or complete faces served in Experiment 2 to assess both local and configural face processing. Overall, normal priming effects were found in participants with autism, irrespective of whether the partial face primes were intuitive face parts (i.e., eyes, nose, etc.) or arbitrary segments. An exception, however, was that participants with autism showed magnified priming with single face parts relative to typically developing control participants. The present findings argue for intact configural processing in autism along with an enhanced processing for individual face parts. The face-processing peculiarities known to characterize autism are discussed on the basis of these results and past congruent results with nonsocial stimuli.
The Roles of Featural and Configural Face Processing in Snap Judgments of Sexual Orientation
Tabak, Joshua A.; Zayas, Vivian
2012-01-01
Research has shown that people are able to judge sexual orientation from faces with above-chance accuracy, but little is known about how these judgments are formed. Here, we investigated the importance of well-established face processing mechanisms in such judgments: featural processing (e.g., an eye) and configural processing (e.g., spatial distance between eyes). Participants judged sexual orientation from faces presented for 50 milliseconds either upright, which recruits both configural and featural processing, or upside-down, when configural processing is strongly impaired and featural processing remains relatively intact. Although participants judged women’s and men’s sexual orientation with above-chance accuracy for upright faces and for upside-down faces, accuracy for upside-down faces was significantly reduced. The reduced judgment accuracy for upside-down faces indicates that configural face processing significantly contributes to accurate snap judgments of sexual orientation. PMID:22629321
Operational Management System for Regulated Water Systems
NASA Astrophysics Data System (ADS)
van Loenen, A.; van Dijk, M.; van Verseveld, W.; Berger, H.
2012-04-01
Most of the Dutch large rivers, canals and lakes are controlled by the Dutch water authorities. The main reasons concern safety, navigation and fresh water supply. Historically, the separate water bodies have been controlled locally. For optimizing the management of these water systems, an integrated approach was required. Presented is a platform which integrates data from all control objects for monitoring and control purposes. The Operational Management System for Regulated Water Systems (IWP) is an implementation of Delft-FEWS which supports operational control of water systems and actively gives advice. One of the main characteristics of IWP is that it collects, transforms and presents different types of data in real time, all of which add to the operational water management. In addition, hydrodynamic models and intelligent decision support tools are added to support the water managers during their daily control activities. An important advantage of IWP is that it uses the Delft-FEWS framework; therefore, processes like central data collection, transformation, data processing and presentation are simply configured. At all control locations the same information is readily available. The operational water management itself gains from this information, but it can also contribute to cost efficiency (no unnecessary pumping), better use of available storage and advice during (water pollution) calamities.
Space Shuttle processing - A case study in artificial intelligence
NASA Technical Reports Server (NTRS)
Mollikarimi, Cindy; Gargan, Robert; Zweben, Monte
1991-01-01
A scheduling system incorporating AI is described and applied to the automated processing of the Space Shuttle. The unique problem of addressing the temporal, resource, and orbiter-configuration requirements of shuttle processing is described with comparisons to traditional project management for manufacturing processes. The present scheduling system is developed to handle the late inputs and complex programs that characterize shuttle processing by incorporating fixed preemptive scheduling, constraint-based simulated annealing, and the characteristics of an 'anytime' algorithm. The Space-Shuttle processing environment is modeled with 500 activities broken down into 4000 subtasks and with 1600 temporal constraints, 8000 resource constraints, and 3900 state requirements. The algorithm is shown to scale to very large problems and maintain anytime characteristics suggesting that an automated scheduling process is achievable and potentially cost-effective.
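The combination of constraint-penalty optimisation, simulated annealing, and "anytime" behaviour described above can be illustrated with a toy Python sketch. This is not the actual shuttle-processing scheduler; the penalty function, cooling schedule, and problem size are invented for illustration.

import math
import random

def conflict_penalty(starts: list, durations: list) -> int:
    """Count pairwise overlaps between activities given their start times (a toy constraint model)."""
    penalty = 0
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            if starts[i] < starts[j] + durations[j] and starts[j] < starts[i] + durations[i]:
                penalty += 1
    return penalty

def anneal_schedule(durations: list, horizon: int, iterations: int = 5000) -> list:
    random.seed(0)
    current = [random.randrange(horizon) for _ in durations]
    best, best_cost = current[:], conflict_penalty(current, durations)
    temperature = 1.0
    for _ in range(iterations):
        candidate = current[:]
        candidate[random.randrange(len(candidate))] = random.randrange(horizon)
        delta = conflict_penalty(candidate, durations) - conflict_penalty(current, durations)
        # Always accept improvements; occasionally accept worse moves while the temperature is high.
        if delta <= 0 or random.random() < math.exp(-delta / max(temperature, 1e-6)):
            current = candidate
        temperature *= 0.999
        cost = conflict_penalty(current, durations)
        if cost < best_cost:
            best, best_cost = current[:], cost   # "anytime" property: best-so-far is always available
    return best

durations = [3, 2, 4, 1, 5]                      # illustrative activity durations
print(anneal_schedule(durations, horizon=20))    # start times minimizing overlap conflicts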
ATLAS TDAQ System Administration: Master of Puppets
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Brasolin, F.; Fazio, D.; Gament, C.; Lee, C. J.; Scannicchio, D. A.; Twomey, M. S.
2017-10-01
Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider at CERN. The online farm is comprised of ∼4000 servers processing the data read out from ∼100 million detector channels through multiple trigger levels. The configuration of these servers is not an easy task, especially since the detector itself is made up of multiple different sub-detectors, each with their own particular requirements. The previous method of configuring these servers, using Quattor and a hierarchical scripts system, was cumbersome and restrictive. A better, unified system was therefore required to simplify the tasks of the TDAQ Systems Administrators, for both the local and net-booted systems, and to be able to fulfil the requirements of TDAQ, Detector Control Systems and the sub-detector groups. Various configuration management systems were evaluated; in the end, Puppet was chosen as the application of choice and was the first such implementation at CERN.
Pulley, S; Collins, A L
2018-09-01
The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows for model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine if the approach used can provide a reduction in uncertainty and increase in precision. Five source group classifications were used; three formed using a k-means cluster analysis containing 2, 3 and 4 clusters, and two a-priori groups based upon catchment geology. Three different composite fingerprints were used for each classification and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
49 CFR 232.603 - Design, interoperability, and configuration management requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... requirements. (a) General. A freight car or freight train equipped with an ECP brake system shall, at a minimum...) Approval. A freight train or freight car equipped with an ECP brake system and equipment covered by the AAR...) Configuration management. A railroad operating a freight train or freight car equipped with ECP brake systems...
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Sharma, Rajdeep
2013-12-10
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr, Charles Franklin; Sharma, Rajdeep
2016-08-30
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
Lighting system with thermal management system having point contact synthetic jets
Arik, Mehmet; Weaver, Stanton Earl; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Sharma, Rajdeep
2016-08-23
Lighting systems having unique configurations are provided. For instance, the lighting system may include a light source, a thermal management system and driver electronics, each contained within a housing structure. The light source is configured to provide illumination visible through an opening in the housing structure. The thermal management system includes a plurality of synthetic jets. The synthetic jets are arranged within the lighting system such that they are secured at contact points.
2016-12-13
INFORMATION TECHNOLOGY, GOVERNMENT ACCOUNTABILITY OFFICE. SUBJECT: DoD Cybersecurity Weaknesses as Reported in Audit Reports Issued From August... The Air Force Audit Agency recommended that the Air Force Reserve officials direct AFRC personnel to implement a standard process to ensure continued... those products and systems throughout the system development life cycle. The DoD audit community and the GAO reported configuration management
Kepler: A Search for Terrestrial Planets - SOC 9.3 DR25 Pipeline Parameter Configuration Reports
NASA Technical Reports Server (NTRS)
Campbell, Jennifer R.
2017-01-01
This document describes the manner in which the pipeline and algorithm parameters for the Kepler Science Operations Center (SOC) science data processing pipeline were managed. This document is intended for scientists and software developers who wish to better understand the software design for the final Kepler codebase (SOC 9.3) and the effect of the software parameters on the Data Release (DR) 25 archival products.
Dynamic Communication Resource Negotiations
NASA Technical Reports Server (NTRS)
Chow, Edward; Vatan, Farrokh; Paloulian, George; Frisbie, Steve; Srostlik, Zuzana; Kalomiris, Vasilios; Apgar, Daniel
2012-01-01
Today's advanced network management systems can automate many aspects of the tactical networking operations within a military domain. However, automation of joint and coalition tactical networking across multiple domains remains challenging. Due to potentially conflicting goals and priorities, human agreement is often required before implementation into the network operations. This is further complicated by incompatible network management systems and security policies, rendering it difficult to implement automatic network management and thus requiring manual human intervention in the communication protocols used at various network routers and endpoints. This process of manual human intervention is tedious, error-prone, and slow. In order to facilitate a better solution, we are pursuing a technology which makes network management automated, reliable, and fast. Automating the negotiation of the common network communication parameters between different parties is the subject of this paper. We present the technology that enables inter-force dynamic communication resource negotiations, allowing ad-hoc inter-operation in the field between force domains without pre-planning. It will also enable a dynamic response to changing conditions within the area of operations. Our solution enables the rapid blending of intra-domain policies so that the forces involved are able to inter-operate effectively without overwhelming each other's networks with inappropriate or unwarranted traffic. It will evaluate the policy rules and configuration data for each of the domains, then generate a compatible inter-domain policy and configuration that will update the gateway systems between the two domains.
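One way to picture the automated blending of intra-domain policies described above is as an intersection of each domain's policy envelope, keeping only parameters both sides accept. The sketch below is a hypothetical illustration, not the system presented in the paper; all policy fields are invented.

def negotiate(policy_a: dict, policy_b: dict) -> dict:
    """Blend two intra-domain policies into a compatible inter-domain configuration."""
    agreed = {}
    # Numeric limits: take the most restrictive value either side will accept.
    for key in ("max_bandwidth_mbps", "max_priority"):
        agreed[key] = min(policy_a[key], policy_b[key])
    # Enumerated capabilities: only options supported by both domains survive.
    agreed["transports"] = sorted(set(policy_a["transports"]) & set(policy_b["transports"]))
    if not agreed["transports"]:
        raise ValueError("no common transport protocol; manual intervention required")
    return agreed

domain_a = {"max_bandwidth_mbps": 50, "max_priority": 5, "transports": ["udp", "tcp", "sctp"]}
domain_b = {"max_bandwidth_mbps": 20, "max_priority": 7, "transports": ["tcp", "sctp"]}
print(negotiate(domain_a, domain_b))
# {'max_bandwidth_mbps': 20, 'max_priority': 5, 'transports': ['sctp', 'tcp']}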
Development of NETCONF-Based Network Management Systems in Web Services Framework
NASA Astrophysics Data System (ADS)
Iijima, Tomoyuki; Kimura, Hiroyasu; Kitani, Makoto; Atarashi, Yoshifumi
To develop a network management system (NMS) more easily, the authors developed an application programming interface (API) for configuring network devices. Because this API is used in a Java development environment, an NMS can be developed by utilizing the API and other commonly available Java libraries. It is thus possible to easily develop an NMS that is highly compatible with other IT systems. The operations that are generated from the API and exchanged between the NMS and network devices are based on NETCONF, which is standardized by the Internet Engineering Task Force (IETF) as a next-generation network-configuration protocol. Adopting a standardized technology ensures that an NMS developed using the API can manage network devices provided by multiple vendors in a unified manner. Furthermore, the configuration items exchanged over NETCONF are specified in an object-oriented design. They are therefore easier to manage than such items in the Management Information Base (MIB), which is defined as the data to be managed by the Simple Network Management Protocol (SNMP). We actually developed several NMSs by using the API. Evaluation of these NMSs showed that, in terms of configuration time and development time, the NMS developed by using the API performed as well as NMSs developed by using a command line interface (CLI) and SNMP. The NMS developed by using the API showed the feasibility of achieving “autonomic network management” and “high interoperability with IT systems.”
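The Java API described in the abstract is not publicly documented here, but the NETCONF exchange it wraps, retrieving and editing a device's configuration over an SSH session on port 830, can be sketched with the open-source Python ncclient library. The host details and configuration payload below are placeholders.

from ncclient import manager

config_payload = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>configured over NETCONF</description>
    </interface>
  </interfaces>
</config>
"""

# Connect to a NETCONF-capable device (placeholder address and credentials).
with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as conn:
    running = conn.get_config(source="running")                 # retrieve the current configuration
    print(running)
    conn.edit_config(target="running", config=config_payload)   # push a configuration change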
Modular workcells: modern methods for laboratory automation.
Felder, R A
1998-12-01
Laboratory automation is beginning to become an indispensable survival tool for laboratories facing difficult market competition. However, estimates suggest that only 8% of laboratories will be able to afford total laboratory automation systems. Therefore, automation vendors have developed alternative hardware configurations called 'modular automation' to fit the smaller laboratory. Modular automation consists of consolidated analyzers, integrated analyzers, modular workcells, and pre- and post-analytical automation. These terms will be defined in this paper. Using a modular automation model, the automated core laboratory will become a site where laboratory data is evaluated by trained professionals to provide diagnostic information to practising physicians. Modern software information management and process control tools will complement modular hardware. Proper standardization that will allow vendor-independent modular configurations will assure the success of this revolutionary new technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
SADE is a software package for rapidly assembling analytic pipelines to manipulate data. The package consists of an engine that manages the data and coordinates the movement of data between the tasks performing a function; a set of core libraries consisting of plugins that perform common tasks; and a framework to extend the system, supporting the development of new plugins. Currently, through configuration files, a pipeline can be defined that maps the routing of data through a series of plugins. Pipelines can be run in batch mode or can process streaming data; they can be executed from the command line or run through a Windows background service. There currently exist over a hundred plugins and over fifty pipeline configurations, and the software is now being used by about a half-dozen projects.
Use of software engineering techniques in the design of the ALEPH data acquisition system
NASA Astrophysics Data System (ADS)
Charity, T.; McClatchey, R.; Harvey, J.
1987-08-01
The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.
Laboratory Information Systems.
Henricks, Walter H
2015-06-01
Laboratory information systems (LISs) supply mission-critical capabilities for the vast array of information-processing needs of modern laboratories. LIS architectures include mainframe, client-server, and thin client configurations. The LIS database software manages a laboratory's data. LIS dictionaries are database tables that a laboratory uses to tailor an LIS to the unique needs of that laboratory. Anatomic pathology LIS (APLIS) functions play key roles throughout the pathology workflow, and laboratories rely on LIS management reports to monitor operations. This article describes the structure and functions of APLISs, with emphasis on their roles in laboratory operations and their relevance to pathologists. Copyright © 2015 Elsevier Inc. All rights reserved.
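As a concrete illustration of the dictionary concept (not any vendor's schema), a test dictionary entry might map a laboratory-defined test code to its display name, units, specimen type, and reference range; the entries below are hypothetical.

# Hypothetical LIS test dictionary: each entry tailors the system to one laboratory's catalogue.
test_dictionary = {
    "GLU": {"name": "Glucose", "units": "mg/dL", "reference_range": (70.0, 99.0), "specimen": "serum"},
    "K":   {"name": "Potassium", "units": "mmol/L", "reference_range": (3.5, 5.1), "specimen": "serum"},
}

def flag_result(test_code: str, value: float) -> str:
    """Flag a result as low (L), normal (N), or high (H) against the dictionary's reference range."""
    low, high = test_dictionary[test_code]["reference_range"]
    if value < low:
        return "L"
    if value > high:
        return "H"
    return "N"

print(flag_result("GLU", 112.0))   # prints H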
The Deep Space Network information system in the year 2000
NASA Technical Reports Server (NTRS)
Markley, R. W.; Beswick, C. A.
1992-01-01
The Deep Space Network (DSN), the largest, most sensitive scientific communications and radio navigation network in the world, is considered. Focus is placed on the telemetry processing, monitor and control, and ground data transport architectures of the DSN ground information system envisioned for the year 2000. The telemetry architecture will be unified from the front-end area to the end user. It will provide highly automated monitor and control of the DSN, automated configuration of support activities, and a vastly improved human interface. Automated decision support systems will be in place for DSN resource management, performance analysis, fault diagnosis, and contingency management.
Performing Verification and Validation in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1999-01-01
The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.
NASA Technical Reports Server (NTRS)
George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)
1998-01-01
A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with the input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views of network elements are provided to aid in network management.
Experience of Data Handling with IPPM Payload
NASA Astrophysics Data System (ADS)
Errico, Walter; Tosi, Pietro; Ilstad, Jorgen; Jameux, David; Viviani, Riccardo; Collantoni, Daniele
2010-08-01
A simplified On-Board Data Handling system has been developed by CAEN AURELIA SPACE and ABSTRAQT as a PUS-over-SpaceWire demonstration platform for the Onboard Payload Data Processing laboratory at ESTEC. The system is composed of three Leon2-based IPPM (Integrated Payload Processing Module) computers that play the roles of Instrument, Payload Data Handling Unit and Satellite Management Unit. Two PCs complete the test set-up, simulating an external Memory Management Unit and the Ground Control Unit. Communication among units takes place primarily through SpaceWire links; the RMAP[2] protocol is used for configuration and housekeeping. A limited implementation of the ECSS-E-70-41B Packet Utilisation Standard (PUS)[1] over CANbus and MIL-STD-1553B has also been realized. The open-source RTEMS runs on the IPPM AT697E CPU as the real-time operating system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. L. Sharp; R. T. McCracken
The Advanced Test Reactor (ATR) is a pressurized light-water reactor with a design thermal power of 250 MW. The principal function of the ATR is to provide a high neutron flux for testing reactor fuels and other materials. The reactor also provides other irradiation services such as radioisotope production. The ATR and its support facilities are located at the Test Reactor Area of the Idaho National Engineering and Environmental Laboratory (INEEL). An audit conducted by the Department of Energy's Office of Independent Oversight and Performance Assurance (DOE OA) raised concerns that design conditions at the ATR were not adequately analyzed in the safety analysis and that legacy design basis management practices had the potential to further impact safe operation of the facility. The concerns identified by the audit team, and issues raised during additional reviews performed by ATR safety analysts, were evaluated through the unreviewed safety question process resulting in shutdown of the ATR for more than three months while these concerns were resolved. Past management of the ATR safety basis, relative to facility design basis management and change control, led to concerns that discrepancies in the safety basis may have developed. Although not required by DOE orders or regulations, not performing design basis verification in conjunction with development of the 10 CFR 830 Subpart B upgraded safety basis allowed these potential weaknesses to be carried forward. Configuration management and a clear definition of the existing facility design basis have a direct relation to developing and maintaining a high quality safety basis which properly identifies and mitigates all hazards and postulated accident conditions. These relations and the impact of past safety basis management practices have been reviewed in order to identify lessons learned from the safety basis upgrade process and appropriate actions to resolve possible concerns with respect to the current ATR safety basis. The need for a design basis reconstitution program for the ATR has been identified along with the use of sound configuration management principles in order to support safe and efficient facility operation.
Scott, Felipe; Aroca, Germán; Caballero, José Antonio; Conejeros, Raúl
2017-07-01
The aim of this study is to analyze the techno-economic performance of process configurations for ethanol production involving solid-liquid separators and reactors in the saccharification and fermentation stage, a family of process configurations for which few alternatives have been proposed. Since including these process alternatives creates a large number of possible process configurations, a framework for process synthesis and optimization is proposed. This approach is supported by kinetic models fed with experimental data and a plant-wide techno-economic model. Among 150 process configurations, 40 show an improved MESP compared to a well-documented base case (BC), almost all include solid separators, and some show energy retrieved in products 32% higher compared to the BC. Moreover, 16 of them also show a lower capital investment per unit of ethanol produced per year. Several of the process configurations found in this work have not been reported in the literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
Expandable and reconfigurable instrument node arrays
NASA Technical Reports Server (NTRS)
Hilliard, Lawrence M. (Inventor); Deshpande, Manohar (Inventor)
2012-01-01
An expandable and reconfigurable instrument node includes a feature detection means and a data processing portion in communication with the feature detection means, the data processing portion configured and disposed to process feature information. The instrument node further includes a phase locked loop (PLL) oscillator in communication with the data processing portion, the PLL oscillator configured and disposed to provide PLL information to the processing portion. The instrument node further includes a single tone transceiver and a pulse transceiver in communication with the PLL oscillator, the single tone transceiver configured and disposed to transmit or receive a single tone for phase correction of the PLL oscillator and the pulse transceiver configured and disposed to transmit and receive signals for phase correction of the PLL oscillator. The instrument node further includes a global positioning system (GPS) receiver in communication with the processing portion, the GPS receiver configured and disposed to establish a global position of the instrument node.
Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie
2014-04-22
A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable media are also disclosed.
Tanaka, Ryoma; Takahashi, Naoyuki; Nakamura, Yasuaki; Hattori, Yusuke; Ashizawa, Kazuhide; Otsuka, Makoto
2017-01-01
Resonant acoustic® mixing (RAM) technology is a system that performs high-speed mixing by vibration through the control of acceleration and frequency. In recent years, real-time process monitoring and prediction has become of increasing interest, and process analytical technology (PAT) systems will be increasingly introduced into actual manufacturing processes. This study examined the application of PAT with the combination of RAM, near-infrared spectroscopy, and chemometric technology as a set of PAT tools for introduction into actual pharmaceutical powder blending processes. Content uniformity was assessed with a robust partial least squares regression (PLSR) model constructed to manage the RAM configuration parameters and the changing concentrations of the components. As a result, real-time monitoring appears feasible and was successfully demonstrated for in-line real-time prediction of active pharmaceutical ingredients and other additives using chemometric technology. This system is expected to be applicable to the RAM method for the risk management of quality.
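The abstract above predicts component concentrations from near-infrared spectra with a partial least squares regression (PLSR) model. The following is a minimal sketch of that chemometric step, assuming synthetic spectra and scikit-learn's PLSRegression; the study's actual RAM/NIR data, preprocessing, and model settings are not given in the source.

```python
# Minimal PLSR sketch for in-line blend monitoring (illustrative only).
# The spectra, concentrations, and number of latent variables are invented;
# real NIR work would also apply preprocessing (e.g., SNV, derivatives).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 400

# Hypothetical API concentration (% w/w) and spectra that scale with it plus noise.
api_conc = rng.uniform(5.0, 15.0, n_samples)
pure_api_spectrum = rng.normal(size=n_wavelengths)
spectra = np.outer(api_conc, pure_api_spectrum) + rng.normal(scale=0.5, size=(n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(spectra, api_conc, random_state=0)

pls = PLSRegression(n_components=5)   # latent-variable count would normally be tuned by cross-validation
pls.fit(X_train, y_train)

pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"RMSEP on held-out spectra: {rmsep:.3f} % w/w")
```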
Turning Configural Processing Upside Down: Part and Whole Body Postures
ERIC Educational Resources Information Center
Reed, Catherine L.; Stone, Valerie E.; Grubb, Jefferson D.; McGoldrick, John E.
2006-01-01
Like faces, body postures are susceptible to an inversion effect in untrained viewers. The inversion effect may be indicative of configural processing, but what kind of configural processing is used for the recognition of body postures must be specified. The information available in the body stimulus was manipulated. The presence and magnitude of…
Communications Management at the Parks Reserve Forces Training Area, Camp Parks, California
1994-10-31
The overall objective of the audit was to evaluate DoD management of circuit configurations for Defense Switched Network access requirements. The specific objective for this segment of the audit was to determine whether the Army used the most cost effective configuration of base and long haul telecommunications equipment and services at Camp Parks to access the Defense Switched Network.
Methods of forming thermal management systems and thermal management methods
Gering, Kevin L.; Haefner, Daryl R.
2012-06-05
A thermal management system for a vehicle includes a heat exchanger having a thermal energy storage material provided therein, a first coolant loop thermally coupled to an electrochemical storage device located within the first coolant loop and to the heat exchanger, and a second coolant loop thermally coupled to the heat exchanger. The first and second coolant loops are configured to carry distinct thermal energy transfer media. The thermal management system also includes an interface configured to facilitate transfer of heat generated by an internal combustion engine to the heat exchanger via the second coolant loop in order to selectively deliver the heat to the electrochemical storage device. Thermal management methods are also provided.
Processing device with self-scrubbing logic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
An apparatus includes a processing unit including a configuration memory and self-scrubber logic coupled to read the configuration memory to detect compromised data stored in the configuration memory. The apparatus also includes a watchdog unit external to the processing unit and coupled to the self-scrubber logic to detect a failure in the self-scrubber logic. The watchdog unit is coupled to the processing unit to selectively reset the processing unit in response to detecting the failure in the self-scrubber logic. The apparatus also includes an external memory external to the processing unit and coupled to send configuration data to the configuration memory in response to a data feed signal outputted by the self-scrubber logic.
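The patent abstract above describes a watchdog that resets the processing unit when the configuration-memory scrubber itself fails. Below is a behavioural sketch of that supervision relationship in Python rather than hardware logic; the heartbeat, timeout, and memory-refresh names are illustrative assumptions, not the patented implementation.

```python
# Behavioural sketch of the self-scrubber/watchdog relationship described above.
# Hardware details (configuration memory cells, reset lines) are abstracted away;
# all names and timings are illustrative assumptions.
import time

class SelfScrubber:
    """Periodically checks configuration memory and emits a heartbeat."""
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def scrub(self, config_memory: bytearray, golden: bytes) -> None:
        # Detect compromised data and refresh it from the external ("golden") copy.
        if bytes(config_memory) != golden:
            config_memory[:] = golden           # stand-in for the external data feed
        self.last_heartbeat = time.monotonic()  # heartbeat observed by the watchdog

class Watchdog:
    """External unit: flags a reset if the scrubber stops producing heartbeats."""
    def __init__(self, scrubber: SelfScrubber, timeout_s: float = 1.0):
        self.scrubber, self.timeout_s = scrubber, timeout_s

    def check(self) -> bool:
        stalled = time.monotonic() - self.scrubber.last_heartbeat > self.timeout_s
        if stalled:
            print("scrubber failure detected -> resetting processing unit")
        return stalled
```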
Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator
NASA Technical Reports Server (NTRS)
Bolen, Kenny; Greenlaw, Ronald
2010-01-01
A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
Aided generation of search interfaces to astronomical archives
NASA Astrophysics Data System (ADS)
Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo
2016-07-01
Astrophysical data provider organizations that host web-based interfaces for access to data resources have to cope with changes in data management that imply partial rewrites of web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that sets itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA from the IVOA TAP recommendation, which can be edited through a graphical interface. Once the configuration steps are done, the tool builds a WAR file for easy deployment of the application.
Environmental control/life support system for Space Station
NASA Technical Reports Server (NTRS)
Miller, C. W.; Heppner, D. B.; Schubert, F. H.; Dahlhausen, M. J.
1986-01-01
The functional, operational, and design load requirements for the Environmental Control/Life Support System (ECLSS) are described. The ECLSS is divided into two groups: (1) an atmosphere management group and (2) a water and waste management group. The interaction between the ECLSS and the Space Station Habitability System is examined. The cruciform baseline station design, the delta and big T module configuration, and the reference Space Station configuration are evaluated in terms of ECLSS requirements. The distribution of ECLSS equipment in a reference Space Station configuration is studied as a function of initial operating conditions and growth orbit capabilities. The benefits of water electrolysis as a Space Station utility are considered.
Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)
NASA Technical Reports Server (NTRS)
Niewoehner, Kevin R.; Carter, John (Technical Monitor)
2001-01-01
The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.
Surveillance and reconnaissance ground system architecture
NASA Astrophysics Data System (ADS)
Devambez, Francois
2001-12-01
Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolutivity, and reduction of life-cycle cost. The general performance of the MGS is presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.
CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.
1987-01-01
To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focussing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
Land-mobile satellite communication system
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee (Inventor); Rafferty, William (Inventor); Dessouky, Khaled I. (Inventor); Wang, Charles C. (Inventor); Cheng, Unjeng (Inventor)
1993-01-01
A satellite communications system includes an orbiting communications satellite for relaying communications to and from a plurality of ground stations, and a network management center for making connections via the satellite between the ground stations in response to connection requests received via the satellite from the ground stations, the network management center being configured to provide both open-end service and closed-end service. The network management center of one embodiment is configured to provide both types of service according to a predefined channel access protocol that enables the ground stations to request the type of service desired. The channel access protocol may be configured to adaptively allocate channels to open-end service and closed-end service according to changes in the traffic pattern and include a free-access tree algorithm that coordinates collision resolution among the ground stations.
Artificial Intelligent Platform as Decision Tool for Asset Management, Operations and Maintenance.
2018-01-04
An Artificial Intelligence (AI) system has been developed and implemented for water, wastewater and reuse plants to improve management of sensors, short- and long-term maintenance plans, and asset and investment management plans. It is based on an integrated approach to capture data from different computer systems and files. It adds a layer of intelligence to the data. It serves as a repository of key current and future operations and maintenance conditions that a plant needs to have knowledge of. With this information, it is able to simulate the configuration of processes and assets for those conditions to improve or optimize operations, maintenance and asset management, using the IViewOps (Intelligent View of Operations) model. Based on the optimization through model runs, it is able to create output files that can feed data to other systems and inform the staff regarding optimal solutions to the conditions experienced or anticipated in the future.
Cryogenic Fluid Management Facility
NASA Technical Reports Server (NTRS)
Eberhardt, R. N.; Bailey, W. J.; Symons, E. P.; Kroeger, E. W.
1984-01-01
The Cryogenic Fluid Management Facility (CFMF) is a reusable test bed which is designed to be carried into space in the Shuttle cargo bay to investigate systems and technologies required to efficiently and effectively manage cryogens in space. The facility hardware is configured to provide low-g verification of fluid and thermal models of cryogenic storage, transfer concepts and processes. Significant design data and criteria for future subcritical cryogenic storage and transfer systems will be obtained. Future applications include space-based and ground-based orbit transfer vehicles (OTV), space station life support, attitude control, power and fuel depot supply, resupply tankers, external tank (ET) propellant scavenging, space-based weapon systems and space-based orbit maneuvering vehicles (OMV). This paper describes the facility and discusses the cryogenic fluid management technology to be investigated. A brief discussion of the integration issues involved in loading and transporting liquid hydrogen within the Shuttle cargo bay is also included.
Processing of configural and componential information in face-selective cortical areas.
Zhao, Mintao; Cheung, Sing-Hang; Wong, Alan C-N; Rhodes, Gillian; Chan, Erich K S; Chan, Winnie W L; Hayward, William G
2014-01-01
We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.
SEPAC software configuration control plan and procedures, revision 1
NASA Technical Reports Server (NTRS)
1981-01-01
SEPAC Software Configuration Control Plan and Procedures are presented. The objective of software configuration control is to establish the process for maintaining configuration control of the SEPAC software, beginning with the baselining of SEPAC Flight Software Version 1 and encompassing the integration and verification tests through Spacelab Level IV Integration. They are designed to provide a simplified but complete configuration control process. The intent is to require a minimum amount of paperwork but provide total traceability of SEPAC software.
Thermal management systems and methods
Gering, Kevin L.; Haefner, Daryl R.
2006-12-12
A thermal management system for a vehicle includes a heat exchanger having a thermal energy storage material provided therein, a first coolant loop thermally coupled to an electrochemical storage device located within the first coolant loop and to the heat exchanger, and a second coolant loop thermally coupled to the heat exchanger. The first and second coolant loops are configured to carry distinct thermal energy transfer media. The thermal management system also includes an interface configured to facilitate transfer of heat generated by an internal combustion engine to the heat exchanger via the second coolant loop in order to selectively deliver the heat to the electrochemical storage device. Thermal management methods are also provided.
ERIC Educational Resources Information Center
Turel, Ofir; Zhang, Yi
2010-01-01
Due to the increased importance and usage of self-managed virtual teams, many recent studies have examined factors that affect their success. One such factor that merits examination is the configuration or composition of virtual teams. This article tackles this point by (1) empirically testing trait-configuration effects on virtual team…
Holistic processing of face configurations and components.
Hayward, William G; Crookes, Kate; Chu, Ming Hon; Favelle, Simone K; Rhodes, Gillian
2016-10-01
Although many researchers agree that faces are processed holistically, we know relatively little about what information holistic processing captures from a face. Most studies that assess the nature of holistic processing do so with changes to the face affecting many different aspects of face information (e.g., different identities). Does holistic processing affect every aspect of a face? We used the composite task, a common means of examining the strength of holistic processing, with participants making same-different judgments about configuration changes or component changes to 1 portion of a face. Configuration changes involved changes in spatial position of the eyes, whereas component changes involved lightening or darkening the eyebrows. Composites were either aligned or misaligned, and were presented either upright or inverted. Both configuration judgments and component judgments showed evidence of holistic processing, and in both cases it was strongest for upright face composites. These results suggest that holistic processing captures a broad range of information about the face, including both configuration-based and component-based information. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many other frameworks) requires users to configure cloud infrastructures via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and for a workload consisting of multiple jobs running concurrently, and aims at maximum throughput using a minimum set of processors.
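The chapter summary above describes a data-driven workload manager that reallocates processors based on measured operator throughput. The toy sketch below illustrates that idea under the assumption of a fixed processor pool and per-operator throughput measurements; CloudWeaver's actual interfaces and allocation policy are not described in the source.

```python
# Toy data-driven allocation: give more processors to the slowest pipeline stage.
# Operator names and throughput numbers are invented for illustration.
def allocate_processors(throughput_per_proc: dict[str, float],
                        total_procs: int) -> dict[str, int]:
    """Assign processors roughly inversely proportional to per-processor throughput,
    so the bottleneck operator receives the largest share."""
    weights = {op: 1.0 / tput for op, tput in throughput_per_proc.items()}
    total_w = sum(weights.values())
    return {op: max(1, round(total_procs * w / total_w)) for op, w in weights.items()}

# Example: a scan -> join -> aggregate pipeline measured at runtime (records/s per processor).
measured = {"scan": 50_000.0, "join": 8_000.0, "aggregate": 20_000.0}
print(allocate_processors(measured, total_procs=16))
# The join stage, slowest per processor, gets the most processors; a real manager
# would repeat this as throughput changes across execution phases.
```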
Flexible Airspace Management (FAM) Research 2010 Human-in-the-Loop Simulation
NASA Technical Reports Server (NTRS)
Lee, Paul U.; Brasil, Connie; Homola, Jeffrey; Kessell, Angela; Prevot, Thomas; Smith, Nancy
2011-01-01
A human-in-the-loop (HITL) simulation was conducted to assess potential user and system benefits of the Flexible Airspace Management (FAM) concept, as well as to design role definitions, procedures, and tools to support FAM operations in the mid-term High Altitude Airspace (HAA) environment. The study evaluated the benefits and feasibility of flexible airspace reconfiguration in response to traffic overload caused by weather deviations, and compared them to a baseline condition without airspace reconfiguration. The test airspace consisted of either four sectors in one Area of Specialization or seven sectors across two Areas. The test airspace was assumed to be at or above FL340 and fully equipped with data communications (Data Comm). Other assumptions were consistent with those of the HAA concept. Overall, results showed that FAM operations with multiple Traffic Management Coordinators, Area Supervisors, and controllers worked remarkably well. The results showed both user and system benefits, including increased throughput, decreased flight distance, more manageable sector loads, and better utilized airspace. The roles, procedures, airspace designs, and tools were also very well received. Airspace configuration options that resulted from combining algorithm-generated airspace configurations with manual modifications were well accepted and posed little difficulty and/or workload during the airspace reconfiguration process. The results suggest a positive impact of FAM operations in HAA. Further investigation would be needed to evaluate whether the benefits and feasibility extend to non-HAA or mixed-equipage environments.
A WiFi public address system for disaster management.
Andrade, Nicholas; Palmer, Douglas A; Lenert, Leslie A
2006-01-01
The WiFi Bullhorn is designed to assist emergency workers in the event of a disaster by offering a rapidly configurable wireless public address system for disaster sites. The current configuration plays either pre-recorded or custom-recorded messages and utilizes 802.11b networks for communication. Units can be positioned anywhere wireless coverage exists to help manage crowds or to recall first responders from dangerous areas.
A WiFi Public Address System for Disaster Management
Andrade, Nicholas; Palmer, Douglas A.; Lenert, Leslie A.
2006-01-01
The WiFi Bullhorn is designed to assist emergency workers in the event of a disaster by offering a rapidly configurable wireless public address system for disaster sites. The current configuration plays either pre-recorded or custom-recorded messages and utilizes 802.11b networks for communication. Units can be positioned anywhere wireless coverage exists to help manage crowds or to recall first responders from dangerous areas. PMID:17238466
Lighting system with heat distribution face plate
Arik, Mehmet; Weaver, Stanton Earl; Stecher, Thomas Elliot; Kuenzler, Glenn Howard; Wolfe, Jr., Charles Franklin; Li, Ri
2013-09-10
Lighting systems having a light source and a thermal management system are provided. The thermal management system includes synthetic jet devices, a heat sink and a heat distribution face plate. The synthetic jet devices are arranged in parallel to one another and are configured to actively cool the lighting system. The heat distribution face plate is configured to radially transfer heat from the light source into the ambient air.
Harmonize Pipeline and Archiving System: PESSTO@IA2 Use Case
NASA Astrophysics Data System (ADS)
Smareglia, R.; Knapic, C.; Molinaro, M.; Young, D.; Valenti, S.
2013-10-01
Italian Astronomical Archives Center (IA2) is a research infrastructure project that aims at coordinating different national and international initiatives to improve the quality of astrophysical data services. IA2 is now also involved in the PESSTO (Public ESO Spectroscopic Survey of Transient Objects) collaboration, developing a complete archiving system to store calibrated post processed data (including sensitive intermediate products), a user interface to access private data and Virtual Observatory (VO) compliant web services to access public fast reduction data via VO tools. The archive system shall rely on the PESSTO Marshall to provide file data and its associated metadata output by the PESSTO data-reduction pipeline. To harmonize the object repository, data handling and archiving system, new tools are under development. These systems must have a strong cross-interaction without increasing the complexities of any single task, in order to improve the performances of the whole system and must have a sturdy logic in order to perform all operations in coordination with the other PESSTO tools. MySQL Replication technology and triggers are used for the synchronization of new data in an efficient, fault tolerant manner. A general purpose library is under development to manage data starting from raw observations to final calibrated ones, open to the overriding of different sources, formats, management fields, storage and publication policies. Configurations for all the systems are stored in a dedicated schema (no configuration files), but can be easily updated by a planned Archiving System Configuration Interface (ASCI).
Teng, Ming-jun; Zeng, Li-xiong; Xiao, Wen-fa; Zhou, Zhi-xiang; Huang, Zhi-lin; Wang, Peng-cheng; Dian, Yuan-yong
2014-12-01
The Three Gorges Reservoir area (TGR area), one of the most sensitive ecological zones in China, has undergone dramatic changes in ecosystem configurations and services driven by the Three Gorges Engineering Project and its related human activities. Thus, understanding the dynamics of ecosystem configurations, ecological processes and ecosystem services is a critical issue for promoting the regional ecological security of the TGR area. Remote sensing of the environment is a promising approach to this goal and is thus increasingly applied to the ecosystem dynamics of the TGR area at mid and macro scales. However, current research often shows conflicting results on ecological and environmental changes in the TGR area due to differences in remote sensing data, scale, and land-use/cover classification. Due to the complexity of ecological configurations and human activities, challenges still exist in remote-sensing-based research on ecological and environmental changes in the TGR area. The purpose of this review is to summarize research advances in remote sensing of ecological and environmental changes in the TGR area. The status, challenges and trends of ecological and environmental remote sensing in the TGR area are discussed with respect to land-use/land-cover, vegetation dynamics, soil and water security, ecosystem services, and ecosystem health and its management. Further research on remote sensing of ecological and environmental changes is proposed to improve ecosystem management of the TGR area.
Wireless communication devices and movement monitoring methods
Skorpik, James R.
2006-10-31
Wireless communication devices and movement monitoring methods are described. In one aspect, a wireless communication device includes a housing, wireless communication circuitry coupled with the housing and configured to communicate wireless signals, movement circuitry coupled with the housing and configured to provide movement data regarding movement sensed by the movement circuitry, and event processing circuitry coupled with the housing and the movement circuitry, wherein the event processing circuitry is configured to process the movement data, and wherein at least a portion of the event processing circuitry is configured to operate in a first operational state having a different power consumption rate compared with a second operational state.
Avionics test bed development plan
NASA Technical Reports Server (NTRS)
Harris, L. H.; Parks, J. M.; Murdock, C. R.
1981-01-01
A development plan for a proposed avionics test bed facility for the early investigation and evaluation of new concepts for the control of large space structures, orbiter attached flex body experiments, and orbiter enhancements is presented. A distributed data processing facility that utilizes the current laboratory resources for the test bed development is outlined. Future studies required for implementation, the management system for project control, and the baseline system configuration are defined. A background analysis of the specific hardware system for the preliminary baseline avionics test bed system is included.
Systems and methods for process and user driven dynamic voltage and frequency scaling
Mallik, Arindam [Evanston, IL; Lin, Bin [Hillsboro, OR; Memik, Gokhan [Evanston, IL; Dinda, Peter [Evanston, IL; Dick, Robert [Evanston, IL
2011-03-22
Certain embodiments of the present invention provide a method for power management including determining at least one of an operating frequency and an operating voltage for a processor and configuring the processor based on the determined at least one of the operating frequency and the operating voltage. The operating frequency is determined based at least in part on direct user input. The operating voltage is determined based at least in part on an individual profile for the processor.
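The patent abstract above combines direct user input with a per-process profile to pick an operating frequency and voltage. The sketch below shows one plausible selection policy; the operating-point table and the 0-to-1 preference scale are invented for illustration and are not the patent's values.

```python
# Illustrative DVFS selection: pick the lowest operating point that satisfies the
# larger of a user-supplied performance preference and the process's CPU demand.
# The frequency/voltage pairs and the [0, 1] scales are assumptions, not the patent's.
OPERATING_POINTS = [  # (frequency in MHz, voltage in volts), lowest first
    (600, 0.85), (1200, 0.95), (1800, 1.05), (2400, 1.20),
]

def select_operating_point(user_preference: float, process_cpu_demand: float):
    """Both inputs are in [0, 1]; the larger of the two drives the chosen point."""
    demand = max(user_preference, process_cpu_demand)
    index = min(int(demand * len(OPERATING_POINTS)), len(OPERATING_POINTS) - 1)
    return OPERATING_POINTS[index]

freq_mhz, volts = select_operating_point(user_preference=0.3, process_cpu_demand=0.6)
print(f"configure processor at {freq_mhz} MHz / {volts:.2f} V")
```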
Reconfigurable environmentally adaptive computing
NASA Technical Reports Server (NTRS)
Coxe, Robin L. (Inventor); Galica, Gary E. (Inventor)
2008-01-01
Described are methods and apparatus, including computer program products, for reconfigurable environmentally adaptive computing technology. An environmental signal representative of an external environmental condition is received. A processing configuration is automatically selected, based on the environmental signal, from a plurality of processing configurations. A reconfigurable processing element is reconfigured to operate according to the selected processing configuration. In some examples, the environmental condition is detected and the environmental signal is generated based on the detected condition.
A Holistic Approach to Systems Development
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2008-01-01
Introduces a holistic and iterative design process: a continuous process that can be loosely divided into four stages, with more effort spent early in the design. The approach is human-centered and multidisciplinary, with an emphasis on life-cycle cost and extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; many disciplines should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.
STS-114 Discovery Return to Flight: International Space Station Processing Overview
NASA Technical Reports Server (NTRS)
2005-01-01
Bruce Buckingham, NASA Public Affairs, introduces Scott Higgenbotham, STS-114 Payload Manager. Higgenbotham gives a PowerPoint presentation on the hardware that will fly on the Discovery mission to the International Space Station. He presents a layout of the hardware, which includes the Logistics Flight 1 (LF1) launch package configuration: the Multipurpose Logistics Module (MPLM), External Stowage Platform-2 (ESP-2), and the Lightweight Mission Peculiar Equipment Support Structure Carrier (LMC). He explains these payloads in detail. The LF-1 team is also shown in the International Space Station Processing Facility. The presentation ends with a brief question-and-answer period.
Three Studies on Configural Face Processing by Chimpanzees
ERIC Educational Resources Information Center
Parr, Lisa A.; Heintz, Matthew; Akamagwuna, Unoma
2006-01-01
Previous studies have demonstrated the sensitivity of chimpanzees to facial configurations. Three studies further these findings by showing this sensitivity to be specific to second-order relational properties. In humans, this type of configural processing requires prolonged experience and enables subordinate-level discriminations of many…
A highly scalable information system as extendable framework solution for medical R&D projects.
Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin
2009-01-01
Research projects in preventive medicine need flexible information management that offers free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters, and the possibility of observing these measurements directly within the system, enable new scenarios. The web-based information system presented in this paper is configurable through user interfaces. It covers medical process descriptions, operative process data visualizations, user-friendly process data processing, modern online interfaces (databases, web services, XML), and convenient support for extended data analysis with third-party applications.
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom
2014-01-01
Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. Some launch vehicles also have an engineering support center, which may be co-located with either the launch or flight control center, or in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure while differences in functional distribution also exist. The driving functions which lead to these factors are considered and a model of control center architectures is proposed which supports these commonalities and variations.
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are based on heuristics that do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
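The paper above contrasts agents that all optimize a shared global utility with agents whose local utilities are aligned with, but distinct from, the world utility. Below is a small toy sketch of that setup, assuming a world utility of negative makespan and a simplified difference-style local reward; the paper's actual collective utility functions and multi-resource model are not reproduced here.

```python
# Toy scheduling sketch: each server-agent's local utility is the change in world
# utility caused by its choice (a simplified "difference utility"). Numbers invented.
import random

random.seed(0)

def world_utility(loads):
    return -max(loads)  # world utility = negative makespan

def schedule(jobs, n_servers):
    loads = [0.0] * n_servers
    for job in jobs:
        best_server, best_gain = 0, float("-inf")
        for s in range(n_servers):
            before = world_utility(loads)
            loads[s] += job
            gain = world_utility(loads) - before  # agent's local (difference) utility
            loads[s] -= job
            if gain > best_gain:
                best_server, best_gain = s, gain
        loads[best_server] += job
    return -world_utility(loads)  # resulting makespan

jobs = [random.uniform(1, 10) for _ in range(50)]
print("makespan with difference-utility agents:", round(schedule(jobs, n_servers=4), 2))
```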
Effects of configural processing on the perceptual spatial resolution for face features.
Namdar, Gal; Avidan, Galia; Ganel, Tzvi
2015-11-01
Configural processing governs human perception across various domains, including face perception. An established marker of configural face perception is the face inversion effect, in which performance is typically better for upright compared to inverted faces. In two experiments, we tested whether configural processing could influence basic visual abilities such as perceptual spatial resolution (i.e., the ability to detect spatial visual changes). Face-related perceptual spatial resolution was assessed by measuring the just noticeable difference (JND) to subtle positional changes between specific features in upright and inverted faces. The results revealed robust inversion effect for spatial sensitivity to configural-based changes, such as the distance between the mouth and the nose, or the distance between the eyes and the nose. Critically, spatial resolution for face features within the region of the eyes (e.g., the interocular distance between the eyes) was not affected by inversion, suggesting that the eye region operates as a separate 'gestalt' unit which is relatively immune to manipulations that would normally hamper configural processing. Together these findings suggest that face orientation modulates fundamental psychophysical abilities including spatial resolution. Furthermore, they indicate that classic psychophysical methods can be used as a valid measure of configural face processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Safety diagnosis: are we doing a good job?
Park, Peter Y; Sahaji, Rajib
2013-03-01
Collision diagnosis is the second step in the six-step road safety management process described in the AASHTO Highway Safety Manual (HSM). Diagnosis is designed to identify a dominant or abnormally high proportion of particular collision configurations (e.g., rear end, right angle, etc.) at a target location. The primary diagnosis method suggested in the HSM is descriptive data analysis. This type of analysis relies on, for example, pie charts, histograms, and/or collision diagrams. Using location specific collision data (e.g., collision frequency per collision configuration for a target location), safety engineers identify (the most) frequent collision configurations. Safety countermeasures are then likely to concentrate on preventing the selected collision configurations. Although its real-world application in engineering practice is limited, an additional collision diagnosis method, known as the beta-binomial (BB) test, is also presented as the secondary diagnosis tool in the HSM. The BB test compares the proportion of a particular collision configuration observed at one location with the proportion of the same collision configuration found at other reference locations which are similar to the target location in terms of selected traffic and roadway characteristics (e.g., traffic volume, traffic control, and number of lanes). This study compared the outcomes obtained from descriptive data analysis and the BB test, and investigates two questions: (1) Do descriptive data analysis and the BB tests produce the same results (i.e., do they select the same collision configurations at the same locations)? and (2) If the tests produce different results, which result should be adopted in engineering practice? This study's analysis was based on a sample of the most recent five years (2005-2009) of collision and roadway configuration data for 143 signalized intersections in the City of Saskatoon, Saskatchewan. The study results show that the BB test's role in diagnosing safety concerns in road safety engineering projects such as safety review projects for existing roadways may be just as important as the descriptive data analysis method. Copyright © 2012 Elsevier Ltd. All rights reserved.
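The abstract above describes the beta-binomial (BB) test, which compares the proportion of one collision configuration at a target intersection with the proportions at similar reference intersections. The sketch below shows one way such a comparison can be set up, assuming a beta distribution fitted to the reference proportions by moments and a one-sided exceedance probability; the HSM's exact procedure, thresholds, and data are not reproduced here, and all counts are invented.

```python
# Rough beta-binomial comparison sketch (not the HSM's exact procedure).
# Reference-site collision counts are invented; scipy provides the distribution.
import numpy as np
from scipy.stats import betabinom

# (rear-end collisions, total collisions) at reference intersections -- invented numbers.
reference = [(12, 40), (8, 35), (15, 50), (10, 45), (9, 30), (14, 55)]
props = np.array([k / n for k, n in reference])

# Method-of-moments fit of a beta distribution to the reference proportions.
m, v = props.mean(), props.var(ddof=1)
common = m * (1 - m) / v - 1
alpha, beta = m * common, (1 - m) * common

# Target intersection: 22 rear-end collisions out of 40 total.
k_target, n_target = 22, 40
p_exceed = betabinom.sf(k_target - 1, n_target, alpha, beta)  # P(X >= k_target)
print(f"fitted alpha={alpha:.2f}, beta={beta:.2f}; exceedance probability={p_exceed:.4f}")
# A small exceedance probability flags rear-end collisions as over-represented at the target site.
```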
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
The Role of Configural Processing in Face Classification by Race: An ERP Study
Lv, Jing; Yan, Tianyi; Tao, Luyang; Zhao, Lun
2015-01-01
The current study investigated the time course of the other-race classification advantage (ORCA) in the subordinate classification of normally configured faces and distorted faces by race. Slightly distorting the face configuration delayed the categorization of own-race faces and had no conspicuous effects on other-race faces. The N170 was sensitive neither to configural distortions nor to faces' races. The P3 was enhanced for other-race than own-race faces and reduced by configural manipulation only for own-race faces. We suggest that the source of ORCA is the configural analysis applied by default while processing own-race faces. PMID:26733850
Lalleman, P C B; Smid, G A C; Lagerwey, M D; Shortridge-Baggett, L M; Schuurmans, M J
2016-11-01
Nurse managers play an important role in implementing patient safety practices in hospitals. However, the influence of their professional background on their clinical leadership behaviour remains unclear. Research has demonstrated that concepts of Bourdieu (dispositions of habitus, capital and field) help to describe this influence. It revealed various configurations of dispositions of the habitus in which a caring disposition plays a crucial role. We explore how the caring disposition of nurse middle managers' habitus influences their clinical leadership behaviour in patient safety practices. Our paper reports the findings of a Bourdieusian, multi-site, ethnographic case study. Two Dutch and two American acute care, mid-sized, non-profit hospitals. A total of 16 nurse middle managers of adult care units. Observations were made over 560h of shadowing nurse middle managers, semi-structured interviews and member check meetings with the participants. We observed three distinct configurations of dispositions of the habitus which influenced the clinical leadership of nurse middle managers in patient safety practices; they all include a caring disposition: (1) a configuration with a dominant caring disposition that was helpful (via solving urgent matters) and hindering (via ad hoc and reactive actions, leading to quick fixes and 'compensatory modes'); (2) a configuration with an interaction of caring and collegial dispositions that led to an absence of clinical involvement and discouraged patient safety practices; and (3) a configuration with a dominant scientific disposition showing an investigative, non-judging, analytic stance, a focus on evidence-based practice that curbs the ad hoc repertoire of the caring disposition. The dispositions of the nurse middle managers' habitus influenced their clinical leadership in patient safety practices. A dominance of the caring disposition, which meant 'always' answering calls for help and reactive and ad hoc reactions, did not support the clinical leadership role of nurse middle managers. By perceiving the team of staff nurses as pseudo-patients, patient safety practice was jeopardized because of erosion of the clinical disposition. The nurse middle managers' clinical leadership was enhanced by leadership behaviour based on the clinical and scientific dispositions that was manifested through an investigative, non-judging, analytic stance, a focus on evidence-based practice and a curbed caring disposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
PTC MathCAD and Workgroup Manager: Implementation in a Multi-Org System
NASA Technical Reports Server (NTRS)
Jones, Corey
2015-01-01
In this presentation, the presenter will review what was done at Kennedy Space Center to deploy and implement PTC MathCAD and PTC Workgroup Manager in a multi-org system. During the presentation the presenter will explain how they configured PTC Windchill to create custom soft-types and object initialization rules for their custom numbering scheme and why they chose these methods. This presentation will also include how to modify the EPM default soft-type file in the PTC Windchill server codebase folder. The presenter will also go over the code used in a startup script to initiate PTC MathCAD and PTC Workgroup Manager in the proper order, and to set up the environment variables when running both PTC Workgroup Manager and PTC Creo. The configuration.ini file the presenter used will also be reviewed to show how to set up PTC Workgroup Manager and customize it for their user community. This presentation will be of interest to administrators trying to create a similar set-up in either a single-org or multiple-org system deployment. The big takeaway will be ideas and best practices learned through implementing this system, and the lessons learned about what to do and not to do when setting up this configuration. Attendees will be exposed to several different sets of code that worked well and will hear about some limitations on what the software can accomplish when configured this way.
Dynamic Airspace Configuration
NASA Technical Reports Server (NTRS)
Bloem, Michael J.
2014-01-01
In air traffic management systems, airspace is partitioned into regions in part to distribute the tasks associated with managing air traffic among different systems and people. These regions, as well as the systems and people allocated to each, are changed dynamically so that air traffic can be safely and efficiently managed. It is expected that new air traffic control systems will enable greater flexibility in how airspace is partitioned and how resources are allocated to airspace regions. In this talk, I will begin by providing an overview of some previous work and open questions in Dynamic Airspace Configuration research, which is concerned with how to partition airspace and assign resources to regions of airspace. For example, I will introduce airspace partitioning algorithms based on clustering, integer programming optimization, and computational geometry. I will conclude by discussing the development of a tablet-based tool that is intended to help air traffic controller supervisors configure airspace and controllers in current operations.
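The talk summary above mentions airspace partitioning algorithms based on clustering. The following is a minimal sketch of that clustering flavour, assuming synthetic aircraft positions and k-means from scikit-learn; the actual Dynamic Airspace Configuration algorithms and traffic data are not given in the source.

```python
# Toy clustering-based airspace partition: group aircraft positions into k regions
# so traffic (and thus controller workload) is split among k sectors.
# Positions and the choice of k are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic traffic: two busy flows plus background traffic (x, y in arbitrary units).
flow_a = rng.normal(loc=(20, 60), scale=5, size=(80, 2))
flow_b = rng.normal(loc=(70, 30), scale=5, size=(80, 2))
background = rng.uniform(0, 100, size=(40, 2))
positions = np.vstack([flow_a, flow_b, background])

k_sectors = 4
km = KMeans(n_clusters=k_sectors, n_init=10, random_state=0).fit(positions)

counts = np.bincount(km.labels_, minlength=k_sectors)
print("aircraft per sector:", counts.tolist())
print("sector centroids:", np.round(km.cluster_centers_, 1).tolist())
# A real DAC scheme would also enforce sector geometry, capacity, and transition constraints.
```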
Experiment Management System for the SND Detector
NASA Astrophysics Data System (ADS)
Pugachev, K.
2017-10-01
We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part to report on is access to the experimental databases (configuration, conditions and metadata). The system is designed with a client-server architecture, and user interaction takes place through a web interface. The server side includes several logical layers: user interface templates; template variable description and initialization; and implementation details. The templates are meant to require as little IT knowledge as possible. Experiment configuration, conditions and metadata are stored in a database. Node.js, a modern JavaScript framework, was chosen to implement the server side, and a new template engine with an interesting feature was designed. Part of the system has been put into production; it includes templates for showing and editing the first-level trigger configuration and equipment configuration, as well as showing experiment metadata and the experiment conditions data index.
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Sunkel, John W.
1990-01-01
An attitude-control and momentum-management (ACMM) system for the Space Station in a large-angle torque-equilibrium-attitude (TEA) configuration is developed analytically and demonstrated by means of numerical simulations. The equations of motion for a rigid-body Space Station model are outlined; linearized equations for an arbitrary TEA (resulting from misalignment of control and body axes) are derived; the general requirements for an ACMM are summarized; and a pole-placement linear-quadratic regulator solution based on scheduled gains is proposed. Results are presented in graphs for (1) simulations based on configuration MB3 (showing the importance of accounting for the cross-inertia terms in the TEA estimate) and (2) simulations of a stepwise change from configuration MB3 to the 'assembly complete' stage over 130 orbits (indicating that the present ACMM scheme maintains sufficient control over slowly varying Space Station dynamics).
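The abstract above proposes a pole-placement/linear-quadratic regulator (LQR) solution with scheduled gains. Below is a generic LQR gain computation on a stand-in single-axis double-integrator model using SciPy's continuous algebraic Riccati equation solver; the Space Station's linearized dynamics and weighting matrices from the paper are not reproduced, so all numbers are illustrative.

```python
# Generic continuous-time LQR sketch on a double-integrator stand-in model.
# The A, B, Q, R values are illustrative, not the paper's Space Station model.
import numpy as np
from scipy.linalg import solve_continuous_are

# Stand-in single-axis attitude model: state = [angle, rate], input = torque / inertia.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize attitude error more than rate
R = np.array([[0.1]])      # control effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # LQR gain: u = -K x
print("LQR gain K =", np.round(K, 3))

# Closed-loop poles should all lie in the left half-plane.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
# Gain scheduling, as in the abstract, would recompute K as the station configuration
# (and hence A and B) changes during assembly.
```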
On I/O Virtualization Management
NASA Astrophysics Data System (ADS)
Danciu, Vitalian A.; Metzker, Martin G.
The quick adoption of virtualization technology in general and the advent of the Cloud business model entail new requirements on the structure and the configuration of back-end I/O systems. Several approaches to virtualization of I/O links are being introduced, which aim at implementing a more flexible I/O channel configuration without compromising performance. While previously the management of I/O devices could be limited to basic technical requirements (e.g. the establishment and termination of fixed-point links), the additional flexibility carries in its wake additional management requirements on the representation and control of I/O sub-systems.
Buttles, John W [Idaho Falls, ID
2011-12-20
Wireless communication devices include a software-defined radio coupled to processing circuitry. The processing circuitry is configured to execute computer programming code. Storage media is coupled to the processing circuitry and includes computer programming code configured to cause the processing circuitry to configure and reconfigure the software-defined radio to operate on each of a plurality of communication networks according to a selected sequence. Methods for communicating with a wireless device and methods of wireless network-hopping are also disclosed.
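The patent abstract above describes stored code that repeatedly reconfigures a software-defined radio across several networks according to a selected sequence. The small control-flow sketch below illustrates that sequencing; the network parameters and the Radio stand-in are invented, and no real SDR API is used.

```python
# Control-flow sketch of sequence-driven network hopping for a software-defined radio.
# Network names, frequencies, and the Radio interface are illustrative assumptions.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class NetworkConfig:
    name: str
    frequency_hz: float
    modulation: str

HOP_SEQUENCE = [
    NetworkConfig("mesh-900", 915e6, "FSK"),
    NetworkConfig("wifi-2g4", 2.437e9, "OFDM"),
    NetworkConfig("telemetry", 433e6, "LoRa"),
]

class Radio:
    """Stand-in for the software-defined radio front end."""
    def configure(self, cfg: NetworkConfig) -> None:
        print(f"retuning to {cfg.name}: {cfg.frequency_hz / 1e6:.1f} MHz, {cfg.modulation}")

def hop(radio: Radio, dwell_count: int) -> None:
    """Reconfigure the radio for each network in the selected sequence."""
    for _, cfg in zip(range(dwell_count), cycle(HOP_SEQUENCE)):
        radio.configure(cfg)
        # ... exchange traffic on this network before hopping to the next one ...

hop(Radio(), dwell_count=6)
```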
Spurrier, Francis R.; Pierce, Bill L.; Wright, Maynard K.
1986-01-01
A plate for a fuel cell has an arrangement of ribs defining an improved configuration of process gas channels and slots on a surface of the plate which provide a modified serpentine gas flow pattern across the plate surface. The channels are generally linear and arranged parallel to one another while the spaced slots allow cross-channel flow of process gas in a staggered fashion which creates a plurality of generally mini-serpentine flow paths extending transverse to the longitudinal gas flow along the channels. Adjacent pairs of the channels are interconnected to one another in flow communication. Also, a bipolar plate has the aforementioned process gas channel configuration on one surface and another configuration on the opposite surface. In the other configuration, there are no slots and the gas flow channels have a generally serpentine configuration.
Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT
NASA Astrophysics Data System (ADS)
Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.
2013-12-01
A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache Software Foundation, is particularly well suited to developing and managing arbitrary large-scale ETL processes both for the simplicity and flexibility of its wrapper framework, as well as the detailed provenance information it exposes throughout the process. Our experience using OODT to manage processing of large-scale data sets in domains as diverse as radio astronomy, life sciences, and social network analysis demonstrates the flexibility of the framework, and the range of potential applications to a broad array of big data ETL challenges.
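The abstract above describes wrapping custom ETL steps in a metadata-driven middleware layer so pipelines can be assembled in order and monitored with provenance information. The sketch below shows that general pattern in plain Python; it deliberately does not use the Apache OODT APIs, and the step names and metadata fields are invented.

```python
# Generic sketch of an ordered, metadata-carrying ETL pipeline with simple provenance.
# This illustrates the wrapper pattern described above; it is not the OODT PCS API.
from typing import Callable

Step = Callable[[dict], dict]   # each step receives and returns a metadata dict

def extract(meta: dict) -> dict:
    meta["records"] = [{"id": 1, "value": "42"}, {"id": 2, "value": "7"}]  # stand-in source
    return meta

def transform(meta: dict) -> dict:
    meta["records"] = [{**r, "value": int(r["value"])} for r in meta["records"]]
    return meta

def load(meta: dict) -> dict:
    meta["loaded_count"] = len(meta["records"])   # stand-in sink
    return meta

def run_pipeline(steps: list[Step], meta: dict) -> dict:
    """Execute steps in order, recording which steps ran for monitoring."""
    meta.setdefault("provenance", [])
    for step in steps:
        meta = step(meta)
        meta["provenance"].append(step.__name__)
    return meta

result = run_pipeline([extract, transform, load], {"dataset": "example"})
print(result["loaded_count"], result["provenance"])
```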
NASA Astrophysics Data System (ADS)
Huang, Xiao
2006-04-01
Today's and especially tomorrow's competitive launch vehicle design environment requires the development of a dedicated generic Space Access Vehicle (SAV) design methodology. A total of 115 industrial, research, and academic aircraft, helicopter, missile, and launch vehicle design synthesis methodologies have been evaluated. As the survey indicates, each synthesis methodology tends to focus on a specific flight vehicle configuration, thus precluding the key capability to systematically compare flight vehicle design alternatives. The aim of the research investigation is to provide decision-making bodies and the practicing engineer a design process and tool box for robust modeling and simulation of flight vehicles where the ultimate performance characteristics may hinge on numerical subtleties. This will enable the designer of a SAV for the first time to consistently compare different classes of SAV configurations on an impartial basis. This dissertation presents the development steps required towards a generic (configuration independent) hands-on flight vehicle conceptual design synthesis methodology. This process is developed such that it can be applied to any flight vehicle class if desired. In the present context, the methodology has been put into operation for the conceptual design of a tourist Space Access Vehicle. The case study illustrates elements of the design methodology & algorithm for the class of Horizontal Takeoff and Horizontal Landing (HTHL) SAVs. The HTHL SAV design application clearly outlines how the conceptual design process can be centrally organized, executed and documented with focus on design transparency, physical understanding and the capability to reproduce results. This approach offers the project lead and creative design team a management process and tool which iteratively refines the individual design logic chosen, leading to mature design methods and algorithms. As illustrated, the HTHL SAV hands-on design methodology offers growth potential in that the same methodology can be continually updated and extended to other SAV configuration concepts, such as the Vertical Takeoff and Vertical Landing (VTVL) SAV class. Having developed, validated and calibrated the methodology for HTHL designs in the 'hands-on' mode, the report provides an outlook how the methodology will be integrated into a prototype computerized design synthesis software AVDS-PrADOSAV in a follow-on step.
Feasibility of Supersonic Aircraft Concepts for Low-Boom and Flight Trim Constraints
NASA Technical Reports Server (NTRS)
Li, Wu
2015-01-01
This paper documents a process for analyzing whether a particular supersonic aircraft configuration layout and a given cruise condition are feasible to achieve a trimmed low-boom design. This process was motivated by the need to know whether a particular configuration at a given cruise condition could be reshaped to satisfy both low-boom and flight trim constraints. Without such a process, much effort could be wasted on shaping a configuration layout at a cruise condition that could never satisfy both low-boom and flight trim constraints simultaneously. The process helps to exclude infeasible configuration layouts with minimum effort and allows a designer to develop trimmed low-boom concepts more effectively. A notional low-boom supersonic demonstrator concept is used to illustrate the analysis/design process.
Model Based Verification of Cyber Range Event Environments
2015-11-13
Flexibility First, Then Standardize: A Strategy for Growing Inter-Departmental Systems.
á Torkilsheyggi, Arnvør
2015-01-01
Any attempt to use IT to standardize work practices faces the challenge of finding a balance between standardization and flexibility. In implementing electronic whiteboards with the goal of standardizing inter-departmental practices, a hospital in Denmark chose to follow the strategy of "flexibility first, then standardization." To improve the local grounding of the system, they first focused on flexibility by configuring the whiteboards to support intra-departmental practices. Subsequently, they focused on standardization by using the whiteboards to negotiate standardization of inter-departmental practices. This paper investigates the chosen strategy and finds: that super users on many wards managed to configure the whiteboard to support intra-departmental practices; that initiatives to standardize inter-departmental practices improved coordination of certain processes; and that the chosen strategy posed a challenge for finding the right time and manner to shift the balance from flexibility to standardization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Lasagna™ is an integrated, in situ remediation technology being developed by an industrial consortium consisting of Monsanto, E. I. DuPont de Nemours & Co., Inc. (DuPont), and General Electric, with participation from the Department of Energy (DOE) Office of Environmental Management, Office of Science and Technology (EM-50), and the Environmental Protection Agency (EPA) Office of Research and Development (Figure 1). Lasagna™ remediates soils and soil pore water contaminated with soluble organic compounds. Lasagna™ is especially suited to sites with low permeability soils where electroosmosis can move water faster and more uniformly than hydraulic methods, with very low power consumption. The process uses electrokinetics to move contaminants in soil pore water into treatment zones where the contaminants can be captured or decomposed. Initial focus is on trichloroethylene (TCE), a major contaminant at many DOE and industrial sites. Both vertical and horizontal configurations have been conceptualized, but fieldwork to date is more advanced for the vertical configuration.
NASA Technical Reports Server (NTRS)
Batcher, K. E.; Eddey, E. E.; Faiss, R. O.; Gilmore, P. A.
1981-01-01
The processing of synthetic aperture radar (SAR) signals using the massively parallel processor (MPP) is discussed. The fast Fourier transform convolution procedures employed in the algorithms are described. The MPP architecture comprises an array unit (ARU) which processes arrays of data; an array control unit which controls the operation of the ARU and performs scalar arithmetic; a program and data management unit which controls the flow of data; and a unique staging memory (SM) which buffers and permutes data. The ARU contains a 128 by 128 array of bit-serial processing elements (PEs). Two-by-four subarrays of PEs are packaged in a custom VLSI HCMOS chip. The staging memory is a large multidimensional-access memory which buffers and permutes data flowing through the system. Efficient SAR processing is achieved via ARU communication paths and SM data manipulation. Real-time processing capability can be realized via a multiple-ARU, multiple-SM configuration.
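As a point of reference for the FFT convolution technique mentioned above (independent of the MPP hardware), one-dimensional matched-filter compression via the convolution theorem can be sketched in NumPy as follows; the chirp parameters are arbitrary illustrative values, not those of any particular SAR system.

```python
import numpy as np

def fft_convolve(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Linear convolution via the convolution theorem:
    conv(a, b) = IFFT(FFT(a) * FFT(b)), with zero-padding to avoid wrap-around."""
    n = len(signal) + len(kernel) - 1
    nfft = 1 << (n - 1).bit_length()            # next power of two
    return np.fft.ifft(np.fft.fft(signal, nfft) * np.fft.fft(kernel, nfft))[:n]

# Illustrative range-compression example: a linear FM chirp and its
# time-reversed conjugate as the matched filter (parameters are arbitrary).
t = np.linspace(0, 1e-4, 1024)
chirp = np.exp(1j * np.pi * 4e8 * t**2)
echo = np.concatenate([np.zeros(300), chirp, np.zeros(200)])
matched = np.conj(chirp[::-1])
compressed = np.abs(fft_convolve(echo, matched))
print("peak sample of compressed return:", int(np.argmax(compressed)))
```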
Wang, Lei; Templer, Richard; Murphy, Richard J
2012-09-01
This study uses Life Cycle Assessment (LCA) to assess the environmental profiles and greenhouse gas (GHG) emissions for bioethanol production from waste papers and to compare them with the alternative waste management options of recycling or incineration with energy recovery. Bioethanol production scenarios both with and without pre-treatments were conducted. It was found that an oxidative lime pre-treatment reduced GHG emissions and overall environmental burdens for a newspaper-to-bioethanol process whereas a dilute acid pre-treatment raised GHG emissions and overall environmental impacts for an office paper-to-bioethanol process. In the comparison of bioethanol production systems with alternative management of waste papers by different technologies, it was found that the environmental profiles of each system vary significantly and this variation affects the outcomes of the specific comparisons made. Overall, a number of configurations of bioethanol production from waste papers offer environmentally favourable or neutral profiles when compared with recycling or incineration. Copyright © 2012 Elsevier Ltd. All rights reserved.
YAMM - Yet Another Menu Manager
NASA Technical Reports Server (NTRS)
Mazer, Alan S.; Weidner, Richard J.
1991-01-01
The Yet Another Menu Manager (YAMM) computer program is an application-independent menuing software package designed to remove much of the difficulty, and save much of the time, inherent in implementing the front ends of large software packages. It provides a complete menuing front end for a wide variety of applications, with provisions for independence from specific types of terminals, configurations that meet specific needs of users, and dynamic creation of menu trees. It consists of two parts: a description of the menu configuration and the body of application code. Written in C.
NASA Technical Reports Server (NTRS)
Hoh, R. H.; Klein, R. H.; Johnson, W. A.
1977-01-01
A system analysis method for the development of an integrated configuration management/flight director system for IFR STOL approaches is presented. Curved descending decelerating approach trajectories are considered. Considerable emphasis is placed on satisfying the pilot centered requirements (acceptable workload) as well as the usual guidance and control requirements (acceptable performance). The Augmentor Wing Jet STOL Research Aircraft was utilized to allow illustration by example, and to validate the analysis procedure via manned simulation.
Grinding assembly, grinding apparatus, weld joint defect repair system, and methods
Larsen, Eric D.; Watkins, Arthur D.; Bitsoi, Rodney J.; Pace, David P.
2005-09-27
A grinding assembly for grinding a weld joint of a workpiece includes a grinder apparatus; the grinder apparatus includes a grinding wheel configured to grind the weld joint, a member configured to receive the grinding wheel, the member being configured to be removably attached to the grinder apparatus, and a sensor assembly configured to detect contact between the grinding wheel and the workpiece. The grinding assembly also includes processing circuitry in communication with the grinder apparatus and configured to control operations of the grinder apparatus, the processing circuitry being configured to receive weld defect information for the weld joint from an inspection assembly and to create a contour grinding profile for grinding the weld joint into a predetermined shape based on the received weld defect information, and a manipulator having an end configured to carry the grinder apparatus, the manipulator further configured to operate in multiple dimensions.
FPGA-based protein sequence alignment : A review
NASA Astrophysics Data System (ADS)
Isa, Mohd. Nazrin Md.; Muhsen, Ku Noor Dhaniah Ku; Saiful Nurdin, Dayana; Ahmad, Muhammad Imran; Anuar Zainol Murad, Sohiful; Nizam Mohyar, Shaiful; Harun, Azizi; Hussin, Razaidi
2017-11-01
Sequence alignment has been optimized using several techniques to accelerate the time needed to compute the optimal score, notably by implementing DP-based algorithms in hardware such as FPGA-based platforms. Hardware implementations face performance challenges such as frequent memory accesses and highly data-dependent computation. Therefore, the main focus of this paper is an investigation of processing element (PE) configuration, which involves memory accesses to load the data (substitution matrix and query sequence characters), and of the PE configuration time. Previous works have taken various approaches to enhance PE configuration performance, such as serial and parallel configuration chains, in which the configuration data are loaded into the PEs sequentially or simultaneously, respectively. Some researchers have shown that a parallel configuration chain improves both the configuration time and the area.
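For context, the dynamic-programming recurrence that such PE arrays typically implement, shown here as Smith-Waterman local alignment with a simple match/mismatch score and a linear gap penalty, can be sketched in software as follows. This is not FPGA code; under those scoring assumptions it only illustrates the per-cell arithmetic and the data (substitution score, query character) with which each PE must be configured.

```python
def smith_waterman_score(query: str, subject: str,
                         match: int = 2, mismatch: int = -1, gap: int = -2) -> int:
    """Local alignment score via the Smith-Waterman recurrence.
    Each inner-loop iteration corresponds to one PE cell update."""
    rows, cols = len(query) + 1, len(subject) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if query[i - 1] == subject[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + sub,   # diagonal: substitution
                          H[i - 1][j] + gap,       # up: gap in subject
                          H[i][j - 1] + gap)       # left: gap in query
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("GATTACA", "GCATGCU"))
```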
GPM Timeline Inhibits For IT Processing
NASA Technical Reports Server (NTRS)
Dion, Shirley K.
2014-01-01
The Safety Inhibit Timeline Tool was created as one approach to capturing and understanding inhibits and controls from IT through launch. The Global Precipitation Measurement (GPM) Mission, which launched from Japan in March 2014, was a joint mission under a partnership between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM was one of the first NASA Goddard in-house programs that extensively used software controls. Using this tool during the GPM buildup allowed a thorough review of the inhibit and safety-critical software design for hazardous subsystems such as the high gain antenna boom, solar array, and instrument deployments, transmitter turn-on, propulsion system release, and instrument radar turn-on. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As a result of this process, a new tool, the Safety Inhibit Timeline, was created for management of inhibits and their controls during spacecraft buildup and testing during IT at GSFC and at the launch range in Japan. The Safety Inhibit Timeline Tool was a pathfinder approach for reviewing the software that controls the electrical inhibits. The tool strengthens the Safety Analyst's understanding of the removal of inhibits during the IT process with safety-critical software. With this tool, the Safety Analyst can confirm the proper safe configuration of a spacecraft during each IT test, track inhibit and software configuration changes, and assess software criticality. In addition to supporting the understanding of inhibits and controls during IT, the tool allows the Safety Analyst to better communicate to engineers and management the changes in inhibit states with each phase of hardware and software testing and the impact of safety risks. Lessons learned from participating in the GPM campaign at NASA and JAXA will be discussed during this session.
International Collaboration Activities on Engineered Barrier Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jove-Colon, Carlos F.
The Used Fuel Disposition Campaign (UFDC) within the DOE Fuel Cycle Technologies (FCT) program has been engaging in international collaborations between repository R&D programs for high-level waste (HLW) disposal to leverage gathered knowledge and laboratory/field data on near- and far-field processes from experiments at underground research laboratories (URLs). Heater test experiments at URLs provide a unique opportunity to mimetically study the thermal effects of heat-generating nuclear waste in subsurface repository environments. Various configurations of these experiments have been carried out at various URLs according to the disposal design concepts of the hosting country's repository program. The FEBEX (Full-scale Engineered Barrier Experiment in Crystalline Host Rock) project is a large-scale heater test experiment originated by the Spanish radioactive waste management agency (Empresa Nacional de Residuos Radiactivos S.A. – ENRESA) at the Grimsel Test Site (GTS) URL in Switzerland. The project was subsequently managed by CIEMAT. FEBEX-DP is a concerted effort of various international partners working on the evaluation of sensor data and characterization of samples obtained during the course of this field test and its subsequent dismantling. The main purpose of these field-scale experiments is to evaluate the feasibility of creating an engineered barrier system (EBS) with a horizontal configuration according to the Spanish concept of deep geological disposal of high-level radioactive waste in crystalline rock. Another key aspect of this project is to improve the knowledge of coupled processes such as thermal-hydro-mechanical (THM) and thermal-hydro-chemical (THC) processes operating in the near-field environment. The focus of these efforts is on model development and validation of predictions through model implementation in computational tools to simulate coupled THM and THC processes.
SHARP pre-release v1.0 - Current Status and Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahadevan, Vijay S.; Rahaman, Ronald O.
The NEAMS Reactor Product Line effort aims to develop an integrated multiphysics simulation capability for the design and analysis of future generations of nuclear power plants. The Reactor Product Line code suite's multi-resolution hierarchy is being designed to ultimately span the full range of length and time scales present in relevant reactor design and safety analyses, as well as scale from desktop to petaflop computing platforms. In this report, building on previous reports issued in September 2014, we describe our continued efforts to integrate thermal/hydraulics, neutronics, and structural mechanics modeling codes to perform coupled analysis of a representative fast sodium-cooled reactor core in preparation for a unified release of the toolkit. The work reported in the current document covers the software engineering aspects of managing the entire stack of components in the SHARP toolkit and the continuous integration efforts ongoing to prepare a release candidate for interested reactor analysis users. Here we report on the continued integration of PROTEUS/Nek5000 and Diablo into the NEAMS framework and the software processes that enable users to utilize these capabilities without losing scientific productivity. Due to the complexity of the individual modules and their necessary/optional dependency library chains, we focus on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install them with optimal flags in an architecture-aware fashion. Such complexity is untenable without strong software engineering processes such as source management, source control, change reviews, unit tests, integration tests, and continuous test suites. Details on these processes are provided in the report as a building step for a SHARP user guide that will accompany the first release, expected by March 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, John; Gilchrist, Phillip Charles
Processes, systems, devices, and articles of manufacture are provided. Each may include adapting micro-inverters initially configured for frame-mounting to mounting on a frameless solar panel. This securement may include using an adaptive clamp or several adaptive clamps secured to a micro-inverter or its components, and using compressive forces applied directly to the solar panel to secure the adaptive clamp and the components to the solar panel. The clamps can also include compressive spacers and safeties for managing the compressive forces exerted on the solar panels. Friction zones may also be used for managing slipping between the clamp and the solar panel during or after installation. Adjustments to the clamps may be carried out through various means and by changing the physical size of the clamps themselves.
Set processing in a network environment. [data bases and magnetic disks and tapes]
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.
SOFIA Program SE and I Lessons Learned
NASA Technical Reports Server (NTRS)
Ray, Ronald J.; Fobel, Laura J.; Brignola, Michael P.
2011-01-01
Once a "Troubled Project" threatened with cancellation, the Stratospheric Observatory for Infrared Astronomy (SOFIA) Program has overcome many difficult challenges and recently achieved its first light images. To achieve success, SOFIA had to overcome significant deficiencies in fundamental Systems Engineering identified during a major Program restructuring. This presentation will summarize the lessons learned in Systems Engineering on the SOFIA Program. After the Program was reformulated, an initial assessment of Systems Engineering established the scope of the problem and helped to set a list of priorities that needed to be worked. A revised Systems Engineering Management Plan (SEMP) was written to address the new Program structure and the requirements established in the approved NPR 7123.1A. An important result of the "Technical Planning" effort was the decision by the Program and Technical Leadership team to re-phase the lifecycle into increments. The reformed SOFIA Program Office had to quickly develop and establish several new Systems Engineering core processes, including Requirements Management, Risk Management, Configuration Management, and Data Management. Implementing these processes had to account for the physical and cultural diversity of the SOFIA Program team, which includes two Projects spanning two NASA Centers, a major German partnership, and sub-contractors located across the United States and Europe. The SOFIA Program experience represents a creative approach to doing "Systems Engineering in the middle" while a Program is well established. Many challenges were identified and overcome. The SOFIA example demonstrates that it is never too late to benefit from fixing deficiencies in Systems Engineering processes.
Intelligent redundant actuation system requirements and preliminary system design
NASA Technical Reports Server (NTRS)
Defeo, P.; Geiger, L. J.; Harris, J.
1985-01-01
Several redundant actuation system configurations were designed and demonstrated to satisfy the stringent operational requirements of advanced flight control systems. However, this has been accomplished largely through brute force hardware redundancy, resulting in significantly increased computational requirements on the flight control computers which perform the failure analysis and reconfiguration management. Modern technology now provides powerful, low-cost microprocessors which are effective in performing failure isolation and configuration management at the local actuator level. One such concept, called an Intelligent Redundant Actuation System (IRAS), significantly reduces the flight control computer requirements and performs the local tasks more comprehensively than previously feasible. The requirements and preliminary design of an experimental laboratory system capable of demonstrating the concept and sufficiently flexible to explore a variety of configurations are discussed.
NASA Astrophysics Data System (ADS)
Demyanova, O. V.; Andreeva, E. V.; Sibgatullina, D. R.; Kireeva-Karimova, A. M.; Gafurova, A. Y.; Zakirova, Ch S.
2018-05-01
ERP in a modern enterprise information system allows internal business processes to be optimized, production costs to be reduced, and the attractiveness of the enterprise to investors to be increased. It is an important component of competitive success and an important condition for attracting investment in key sectors of the state. A vivid example of such systems are enterprise information systems built on the ERP (Enterprise Resource Planning) methodology. ERP is an integrated set of methods, processes, technologies, and tools. It is based on supply chain management; advanced planning and scheduling; sales automation; configuration tools; final resource planning; business intelligence; OLAP technology; an e-commerce module; and product data management. The main purpose of ERP systems is the automation of the interrelated processes of planning, accounting, and management in the key areas of the company. ERP systems are automated systems that effectively address complex problems, including the optimal allocation of business resources and ensuring the quick and efficient delivery of goods and services to the consumer. The knowledge embedded in ERP systems provides enterprise-wide automation that presents the activities of all functional departments of the company as a single integrated system. At the level of qualitative estimates, most managers understand that the implementation of ERP systems is a necessary and useful undertaking. Assessing the effectiveness of information system implementations is therefore relevant.
The Exact Art and Subtle Science of DC Smelting: Practical Perspectives on the Hot Zone
NASA Astrophysics Data System (ADS)
Geldenhuys, Isabel J.
2017-02-01
Increasingly, sustainable smelting requires technology that can process metallurgically complex, low-grade, ultra-fine and waste materials. It is likely that more applications for direct current (DC) technology will inevitably follow in the future as DC open-arc furnaces have some wonderful features that facilitate processing of a variety of materials in an open-arc open-bath configuration. A DC open-arc furnace allows for optimization and choice of chemistry to benefit the process, rather than being constrained by the electrical or physical properties of the material. In a DC configuration, the power is typically supplied by an open arc, providing relative independence and thus an extra degree of freedom. However, if the inherent features of the technology are misunderstood, much of the potential may never be realised. It is thus important to take cognisance of the freedom an operator will have as a result of the open arc and ensure that operating strategies are implemented. This extra degree of freedom hands an operator a very flexible tool, namely virtually unlimited power. Successful open-arc smelting is about properly managing the balance between power and feed, and practical perspectives on the importance of power and feed balance are presented to highlight this aspect as the foundation of proper open-arc furnace control.
Arguments Against a Configural Processing Account of Familiar Face Recognition.
Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M
2015-07-01
Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.
Yip, Kenneth; Pang, Suk-King; Chan, Kui-Tim; Chan, Chi-Kuen; Lee, Tsz-Leung
2016-08-08
Purpose - The purpose of this paper is to present a simulation modeling application to reconfigure the outpatient phlebotomy service of an acute regional and teaching hospital in Hong Kong, with an aim to improve service efficiency, shorten patient queuing time and enhance workforce utilization. Design/methodology/approach - The system was modeled as an inhomogeneous Poisson process and a discrete-event simulation model was developed to simulate the current setting, and to evaluate how various performance metrics would change if switched from a decentralized to a centralized model. Variations were then made to the model to test different workforce arrangements for the centralized service, so that managers could decide on the service's final configuration via an evidence-based and data-driven approach. Findings - This paper provides empirical insights about the relationship between staffing arrangement and system performance via a detailed scenario analysis. One particular staffing scenario was chosen by managers as it was considered to strike the best balance between performance and workforce scheduling. The resulting centralized phlebotomy service was successfully commissioned. Practical implications - This paper demonstrates how analytics could be used for operational planning at the hospital level. The authors show that a transparent and evidence-based scenario analysis, made available through analytics and simulation, greatly facilitates management and clinical stakeholders in arriving at the ideal service configuration. Originality/value - The authors provide a robust method for evaluating the relationship between workforce investment, queuing reduction and workforce utilization, which is crucial for managers when deciding the delivery model for any outpatient-related service.
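A standard way to generate arrivals from an inhomogeneous Poisson process, as used in discrete-event simulations of this kind, is Lewis-Shedler thinning. The sketch below is a minimal illustration with an arbitrary two-level arrival rate; it does not use the hospital's actual arrival profile or the authors' simulation code.

```python
import random

def thinning_arrivals(rate, rate_max, horizon, seed=42):
    """Generate arrival times of an inhomogeneous Poisson process on
    [0, horizon) by thinning a homogeneous process of rate rate_max."""
    random.seed(seed)
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate_max)          # candidate inter-arrival time
        if t >= horizon:
            return arrivals
        if random.random() <= rate(t) / rate_max:  # accept with probability rate(t)/rate_max
            arrivals.append(t)

# Illustrative morning-peak rate (patients per minute) over a 4-hour clinic session.
rate = lambda t: 1.5 if t < 120 else 0.5
arrivals = thinning_arrivals(rate, rate_max=1.5, horizon=240)
print(len(arrivals), "simulated arrivals")
```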
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard-rod and packing constraints are both fulfilled, and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
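The configurations named above can also be explored numerically. A minimal Monte Carlo sketch over the spacings of n uniform points on the unit circle is given below; it estimates the probabilities of the hard-rod (no overlap) and covering (no gap exceeding s) events for chosen n and s. This is only a brute-force illustration of the events whose large-deviation rates the paper derives analytically.

```python
import random

def circle_spacings(n):
    """Clockwise spacings between n i.i.d. uniform points on a circle of circumference 1."""
    pts = sorted(random.random() for _ in range(n))
    return [pts[(i + 1) % n] - pts[i] + (1 if i == n - 1 else 0) for i in range(n)]

def configuration_probs(n, s, trials=20000, seed=0):
    random.seed(seed)
    hard_rod = covering = 0
    for _ in range(trials):
        gaps = circle_spacings(n)
        if all(g >= s for g in gaps):   # every clockwise gap fits a rod: no overlap
            hard_rod += 1
        if all(g <= s for g in gaps):   # every gap is covered by the preceding rod
            covering += 1
    return hard_rod / trials, covering / trials

print(configuration_probs(n=10, s=0.05))
```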
PDSS configuration control plan and procedures
NASA Technical Reports Server (NTRS)
1983-01-01
The payload development support system (PDSS) configuration control plan and procedures are presented. These plans and procedures establish the process for maintaining configuration control of the PDSS system, especially the Spacelab experiment interface device's (SEID) RAU, HRM, and PDI interface simulations and the PDSS ECOS DEP Services simulation. The plans and procedures as specified are designed to provide a simplified but complete configuration control process. The intent is to require a minimum amount of paperwork but provide total traceability of PDSS during experiment test activities.
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of large-scale laser devices under development: their load capacity is directly related to the device's output capacity indicators, and that load capacity depends on many factors. By digitizing the various factors affecting load capacity in an optical-component damage-parameter database, a scientific data basis is provided for assessing the load capacity of optical components. Using a business-process and model-driven approach, a component damage parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; component parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage testing.
An Own-Race Advantage for Components as Well as Configurations in Face Recognition
ERIC Educational Resources Information Center
Hayward, William G.; Rhodes, Gillian; Schwaninger, Adrian
2008-01-01
The own-race advantage in face recognition has been hypothesized as being due to a superiority in the processing of configural information for own-race faces. Here we examined the contributions of both configural and component processing to the own-race advantage. We recruited 48 Caucasian participants in Australia and 48 Chinese participants in…
Design of a flight director/configuration management system for piloted STOL approaches
NASA Technical Reports Server (NTRS)
Hoh, R. H.; Klein, R. H.; Johnson, W. A.
1973-01-01
The design and characteristics of a flight director for V/STOL aircraft are discussed. A configuration management system for piloted STOL approaches is described. The individual components of the overall system designed to reduce pilot workload to an acceptable level during curved, decelerating, and descending STOL approaches are defined. The application of the system to augmentor wing aircraft is analyzed. System performance checks and piloted evaluations were conducted on a flight simulator and the results are summarized.
NASA Technical Reports Server (NTRS)
Mckay, Charles
1991-01-01
This is the configuration management Plan for the AdaNet Repository Based Software Engineering (RBSE) contract. This document establishes the requirements and activities needed to ensure that the products developed for the AdaNet RBSE contract are accurately identified, that proposed changes to the product are systematically evaluated and controlled, that the status of all change activity is known at all times, and that the product achieves its functional performance requirements and is accurately documented.
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.
1984-01-01
The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system are completed to prototype form. The construction methods are described.
NASA Astrophysics Data System (ADS)
Hincks, A. D.; Shaw, J. R.; Chime Collaboration
2015-09-01
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is an ambitious new radio telescope project for measuring cosmic expansion and investigating dark energy. Keeping good records of both the physical configuration of its 1280 antennas and their analogue signal chains, and of the ~100 TB of data produced daily by its correlator, will be essential to the success of CHIME. In these proceedings we describe the database-driven software we have developed to manage this complexity.
The Selection of Q-Switch for a 350mJ Air-borne 2-micron Wind Lidar
NASA Technical Reports Server (NTRS)
Petros, Mulugeta; Yu, Jirong; Trieu, Bo; Bai, Yingxin; Petzar, Paul; Singh, Upendra N.
2008-01-01
In the process of designing a coherent, high-energy, 2-micron Doppler wind lidar, various types of Q-switch materials and configurations have been investigated for the oscillator. Designing an oscillator around a relatively low-gain laser material presents challenges related to managing the high internal circulating fluence caused by the highly reflective output coupler. This problem is compounded by the loss of hold-off. In addition, the selection has to take into account the round-trip optical loss in the resonator as well as the loss of hold-off. For this application, a Brewster-cut, 5 mm aperture, fused silica AO Q-switch was selected. Once the Q-switch was selected, various RF frequencies were evaluated. Since the lidar has to operate in a single longitudinal and transverse mode with transform-limited linewidth, various seeding configurations are presented in this paper in the context of Q-switch diffraction efficiency. The master oscillator power amplifier has demonstrated over 350 mJ output when the amplifier is operated in double-pass mode and more than 250 mJ when operated in a single-pass configuration. The repetition rate of the system is 10 Hz and the pulse length is 200 ns.
SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)
Zhang, Xiang; Chen, Zhangwei
2013-01-01
This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices such as DDRII, SSRAM, Flash, etc., through IP reuse. The system hardware is implemented in a single FPGA chip incorporating a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency in real time. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
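As a software reference for the matching step that the FPGA pipeline implements in hardware, a naive Sum of Absolute Differences block matcher can be sketched in NumPy as follows; the window size and disparity range mirror the 5 × 5 / 64-pixel figures quoted above, and the synthetic test images are invented for illustration.

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 64, win: int = 5) -> np.ndarray:
    """Naive SAD block matching: for each pixel, pick the disparity whose
    window in the right image best matches the window in the left image."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Tiny synthetic example: the right image is the left image shifted by 3 pixels.
left = np.random.randint(0, 255, (32, 48), dtype=np.uint8)
right = np.roll(left, -3, axis=1)
print(sad_disparity(left, right, max_disp=8)[16, 20:28])
```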
Yampolsky, Maya A.; Amiot, Catherine E.; de la Sablonnière, Roxane
2013-01-01
Understanding the experiences of multicultural individuals is vital in our diverse populations. Multicultural people often need to navigate the different norms and values associated with their multiple cultural identities. Recent research on multicultural identification has focused on how individuals with multiple cultural groups manage these different identities within the self, and how this process predicts well-being. The current study built on this research by using a qualitative method to examine the process of configuring one's identities within the self. The present study employed three of the four different multiple identity configurations in Amiot et al. (2007) cognitive-developmental model of social identity integration: categorization, where people identify with one of their cultural groups over others; compartmentalization, where individuals maintain multiple, separate identities within themselves; and integration, where people link their multiple cultural identities. Life narratives were used to investigate the relationship between each of these configurations and well-being, as indicated by narrative coherence. It was expected that individuals with integrated cultural identities would report greater narrative coherence than individuals who compartmentalized and categorized their cultural identities. For all twenty-two participants, identity integration was significantly and positively related to narrative coherence, while compartmentalization was significantly and negatively related to narrative coherence. ANOVAs revealed that integrated and categorized participants reported significantly greater narrative coherence than compartmentalized participants. These findings are discussed in light of previous research on multicultural identity integration. PMID:23504407
System and method for merging clusters of wireless nodes in a wireless network
Budampati, Ramakrishna S [Maple Grove, MN; Gonia, Patrick S [Maplewood, MN; Kolavennu, Soumitri N [Blaine, MN; Mahasenan, Arun V [Kerala, IN
2012-05-29
A system includes a first cluster having multiple first wireless nodes. One first node is configured to act as a first cluster master, and other first nodes are configured to receive time synchronization information provided by the first cluster master. The system also includes a second cluster having one or more second wireless nodes. One second node is configured to act as a second cluster master, and any other second nodes are configured to receive time synchronization information provided by the second cluster master. The system further includes a manager configured to merge the clusters into a combined cluster. One of the nodes is configured to act as a single cluster master for the combined cluster, and the other nodes are configured to receive time synchronization information provided by the single cluster master.
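Paraphrasing the claim, the merge leaves exactly one master as the time-synchronization source for the combined cluster. One simple, hypothetical way to express such a merge in software is sketched below; the tie-break rule (smallest node id wins) and all names are illustrative assumptions, not part of the patented method.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    node_id: str
    is_master: bool = False
    sync_source: Optional[str] = None   # node_id of the master this node listens to

@dataclass
class Cluster:
    master: Node
    members: List[Node] = field(default_factory=list)

def merge_clusters(a: Cluster, b: Cluster) -> Cluster:
    """Merge two clusters; the master with the smaller id survives (arbitrary
    tie-break) and every other node re-points its time-sync source to it."""
    survivor = a.master if a.master.node_id < b.master.node_id else b.master
    combined = Cluster(master=survivor)
    for node in [a.master, *a.members, b.master, *b.members]:
        node.is_master = node is survivor
        node.sync_source = None if node is survivor else survivor.node_id
        if node is not survivor:
            combined.members.append(node)
    return combined

c1 = Cluster(Node("A1", True), [Node("A2"), Node("A3")])
c2 = Cluster(Node("B1", True), [Node("B2")])
merged = merge_clusters(c1, c2)
print(merged.master.node_id, [n.node_id for n in merged.members])
```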
NASA Technical Reports Server (NTRS)
Corban, Robert
1993-01-01
The systems engineering process for the concept definition phase of the program involves requirements definition, system definition, and consistent concept definition. The requirements definition process involves obtaining a complete understanding of the system requirements based on customer needs, mission scenarios, and nuclear thermal propulsion (NTP) operating characteristics. A system functional analysis is performed to provide comprehensive traceability and verification of top-level requirements down to detailed system specifications and provides significant insight into the measures of system effectiveness to be utilized in system evaluation. The second key element in the process is the definition of system concepts to meet the requirements. This part of the process involves engine system and reactor contractor teams developing alternative NTP system concepts that can be evaluated against specific attributes, as well as a reference configuration against which to compare system benefits and merits. Quality function deployment (QFD), as an excellent tool within Total Quality Management (TQM) techniques, can provide the required structure and a link to the voice of the customer in establishing critical system qualities and their relationships. The third element of the process is the consistent performance comparison. The comparison process involves validating developed concept data and quantifying system merits through analysis, computer modeling, simulation, and rapid prototyping of the proposed high-risk NTP subsystems. The maximum possible amount of quantitative data will be developed and/or validated for use in the QFD evaluation matrix. If, upon evaluation, a new concept or its associated subsystems are determined to have substantial merit, those features will be incorporated into the reference configuration for subsequent system definition and comparison efforts.
Recipe for Success: Digital Viewables
NASA Technical Reports Server (NTRS)
LaPha, Steven; Gaydos, Frank
2014-01-01
The Engineering Services Contract (ESC) and Information Management Communication Support contract (IMCS) at Kennedy Space Center (KSC) provide services to NASA with respect to flight and ground systems design and development. These groups provide the necessary tools, assistance, and best-practice methodologies required for efficient, optimized design and process development. The team is responsible for configuring and implementing systems and software, along with providing training and documentation and administering standards. The team supports over 200 engineers and design specialists with the use of Windchill, Creo Parametric, NX, AutoCAD, and a variety of other design and analysis tools.
Systems design analysis applied to launch vehicle configuration
NASA Technical Reports Server (NTRS)
Ryan, R.; Verderaime, V.
1993-01-01
As emphasis shifts from optimum-performance aerospace systems to least life-cycle costs, systems designs must seek, adapt, and innovate cost improvement techniques from design through operations. The systems design process of concept, definition, and design was assessed for the types and flow of total quality management techniques that may be applicable in a launch vehicle systems design analysis. Techniques discussed are task ordering, quality leverage, concurrent engineering, Pareto's principle, robustness, quality function deployment, criteria, and others. These cost-oriented techniques are as applicable to aerospace systems design analysis as to any large commercial system.
Network Configuration of Oracle and Database Programming Using SQL
NASA Technical Reports Server (NTRS)
Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.
2000-01-01
A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
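The set-oriented style of SQL described above can be seen in a short example. The sketch below uses Python's built-in sqlite3 module as a stand-in for an Oracle connection, since the SQL statements themselves are essentially the same; the table and column names are invented for illustration.

```python
import sqlite3

# In-memory database as a stand-in for an Oracle schema (illustrative only).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the data.
cur.execute("CREATE TABLE experiments (id INTEGER PRIMARY KEY, name TEXT, runtime_hours REAL)")

# DML: manipulate whole sets of rows, not one record at a time.
cur.executemany("INSERT INTO experiments (name, runtime_hours) VALUES (?, ?)",
                [("calibration", 2.5), ("thermal_vac", 12.0), ("vibration", 4.0)])

# Query: a single declarative statement returns the whole matching set.
cur.execute("SELECT name, runtime_hours FROM experiments "
            "WHERE runtime_hours > ? ORDER BY runtime_hours", (3,))
for name, hours in cur.fetchall():
    print(name, hours)
conn.close()
```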
Estes, Jason G.; Othman, Nurzhafarina; Ismail, Sulaiman; Ancrenaz, Marc; Goossens, Benoit; Ambu, Laurentius N.; Estes, Anna B.; Palmiotto, Peter A.
2012-01-01
The approximately 300 (298, 95% CI: 152–581) elephants in the Lower Kinabatangan Managed Elephant Range in Sabah, Malaysian Borneo are a priority sub-population for Borneo's total elephant population (2,040, 95% CI: 1,184–3,652). Habitat loss and human-elephant conflict are recognized as the major threats to Bornean elephant survival. In the Kinabatangan region, human settlements and agricultural development for oil palm drive an intense fragmentation process. Electric fences guard against elephant crop raiding but also remove access to suitable habitat patches. We conducted expert opinion-based least-cost analyses, to model the quantity and configuration of available suitable elephant habitat in the Lower Kinabatangan, and called this the Elephant Habitat Linkage. At 184 km2, our estimate of available habitat is 54% smaller than the estimate used in the State's Elephant Action Plan for the Lower Kinabatangan Managed Elephant Range (400 km2). During high flood levels, available habitat is reduced to only 61 km2. As a consequence, short-term elephant densities are likely to surge during floods to 4.83 km−2 (95% CI: 2.46–9.41), among the highest estimated for forest-dwelling elephants in Asia or Africa. During severe floods, the configuration of remaining elephant habitat and the surge in elephant density may put two villages at elevated risk of human-elephant conflict. Lower Kinabatangan elephants are vulnerable to the natural disturbance regime of the river due to their limited dispersal options. Twenty bottlenecks less than one km wide throughout the Elephant Habitat Linkage, have the potential to further reduce access to suitable habitat. Rebuilding landscape connectivity to isolated habitat patches and to the North Kinabatangan Managed Elephant Range (less than 35 km inland) are conservation priorities that would increase the quantity of available habitat, and may work as a mechanism to allow population release, lower elephant density, reduce human-elephant conflict, and enable genetic mixing. PMID:23071499
Cryogenic Fluid Management Facility
NASA Technical Reports Server (NTRS)
Eberhardt, R. N.; Bailey, W. J.
1985-01-01
The Cryogenic Fluid Management Facility is a reusable test bed which is designed to be carried within the Shuttle cargo bay to investigate the systems and technologies associated with the efficient management of cryogens in space. Cryogenic fluid management consists of the systems and technologies for: (1) liquid storage and supply, including capillary acquisition/expulsion systems which provide single-phase liquid to the user system, (2) both passive and active thermal control systems, and (3) fluid transfer/resupply systems, including transfer lines and receiver tanks. The facility contains a storage and supply tank, a transfer line and a receiver tank, configured to provide low-g verification of fluid and thermal models of cryogenic storage and transfer processes. The facility will provide design data and criteria for future subcritical cryogenic storage and transfer system applications, such as Space Station life support, attitude control, power and fuel depot supply, resupply tankers, external tank (ET) propellant scavenging, and ground-based and space-based orbit transfer vehicles (OTV).
Process for Forming a High Temperature Single Crystal Canted Spring
NASA Technical Reports Server (NTRS)
DeMange, Jeffrey J (Inventor); Ritzert, Frank J (Inventor); Nathal, Michael V (Inventor); Dunlap, Patrick H (Inventor); Steinetz, Bruce M (Inventor)
2017-01-01
A process for forming a high temperature single crystal canted spring is provided. In one embodiment, the process includes fabricating configurations of a rapid prototype spring to fabricate a sacrificial mold pattern to create a ceramic mold and casting a canted coiled spring to form at least one canted coil spring configuration based on the ceramic mold. The high temperature single crystal canted spring is formed from a nickel-based alloy containing rhenium using the at least one coil spring configuration.
Fifty-eighth Christmas Bird Count. 166. Ocean City, Md
Keough, J.R.; Thompson, T.A.; Guntenspergen, G.R.; Wilcox, D.A.
1999-01-01
Gauging the impact of manipulative activities, such as rehabilitation or management, on wetlands requires having a notion of the unmanipulated condition as a reference. An understanding of the reference condition requires knowledge of dominant factors influencing ecosystem processes and biological communities. In this paper, we focus on natural physical factors (conditions and processes) that drive coastal wetland ecosystems of the Laurentian Great Lakes. Great Lakes coastal wetlands develop under conditions of large-lake hydrology and disturbance imposed at a hierarchy of spatial and temporal scales and contain biotic communities adapted to unstable and unpredictable conditions. Coastal wetlands are configured along a continuum of hydrogeomorphic types: open coastal wetlands, drowned river mouth and flooded delta wetlands, and protected wetlands, each developing distinct ecosystem properties and biotic communities. Hydrogeomorphic factors associated with the lake and watershed operate at a hierarchy of scales: a) local and short-term (seiches and ice action), b) watershed / lakewide / annual (seasonal water-level change), and c) larger or year-to-year and longer (regional and/or greater than one-year). Other physical factors include the unique water quality features of each lake. The aim of this paper is to provide scientists and managers with a framework for considering regional and site-specific geomorphometry and a hierarchy of physical processes in planning management and conservation projects.
Hydrogeomorphic factors and ecosystem responses in coastal wetlands of the Great Lakes
Keough, Janet R.; Thompson, Todd A.; Guntenspergen, Glenn R.; Wilcox, Douglas A.
1999-01-01
Gauging the impact of manipulative activities, such as rehabilitation or management, on wetlands requires having a notion of the unmanipulated condition as a reference. An understanding of the reference condition requires knowledge of dominant factors influencing ecosystem processes and biological communities. In this paper, we focus on natural physical factors (conditions and processes) that drive coastal wetland ecosystems of the Laurentian Great Lakes. Great Lakes coastal wetlands develop under conditions of large-lake hydrology and disturbance imposed at a hierarchy of spatial and temporal scales and contain biotic communities adapted to unstable and unpredictable conditions. Coastal wetlands are configured along a continuum of hydrogeomorphic types: open coastal wetlands, drowned river mouth and flooded delta wetlands, and protected wetlands, each developing distinct ecosystem properties and biotic communities. Hydrogeomorphic factors associated with the lake and watershed operate at a hierarchy of scales: a) local and short-term (seiches and ice action), b) watershed / lakewide / annual (seasonal water-level change), and c) larger or year-to-year and longer (regional and/or greater than one-year). Other physical factors include the unique water quality features of each lake. The aim of this paper is to provide scientists and managers with a framework for considering regional and site-specific geomorphometry and a hierarchy of physical processes in planning management and conservation projects.
A data management system to enable urgent natural disaster computing
NASA Astrophysics Data System (ADS)
Leong, Siew Hoon; Kranzlmüller, Dieter; Frank, Anton
2014-05-01
Civil protection, in particular natural disaster management, is very important to most nations and civilians in the world. When disasters like flash floods, earthquakes and tsunamis are expected or have taken place, it is of utmost importance to make timely decisions for managing the affected areas and reducing casualties. Computer simulations can generate information and provide predictions to facilitate this decision making process. Getting the data to the required resources is a critical requirement to enable the timely computation of the predictions. An urgent data management system to support natural disaster computing is thus necessary to effectively carry out data activities within a stipulated deadline. Since the trigger of a natural disaster is usually unpredictable, it is not always possible to prepare the required resources well in advance. As such, an urgent data management system for natural disaster computing has to be able to work with any type of resource. Additional requirements include the need to manage deadlines and huge volumes of data, fault tolerance, reliability, flexibility to changes, ease of usage, etc. The proposed data management platform includes a service manager to provide a uniform and extensible interface for the supported data protocols, a configuration manager to check and retrieve configurations of available resources, a scheduler manager to ensure that deadlines can be met, a fault tolerance manager to increase the reliability of the platform, and a data manager to initiate and perform the data activities. These managers enable the selection of the most appropriate resource, transfer protocol, etc. such that the hard deadline of an urgent computation can be met for a particular urgent activity, e.g. data staging or computation. We associate two types of deadlines [2] with an urgent computing system. Soft-firm deadline: missing a soft-firm deadline renders the computation less useful, resulting in a cost that can have severe consequences. Hard deadline: missing a hard deadline renders the computation useless and results in full catastrophic consequences. A prototype of this system has a REST-based service manager. The REST-based implementation provides a uniform interface that is easy to use. New and upcoming file transfer protocols can easily be added and accessed via the service manager. The service manager interacts with the other four managers to coordinate the data activities so that the fundamental natural disaster urgent computing requirement, i.e. the deadline, can be fulfilled in a reliable manner. A data activity can include data staging, data archiving and data storing. Reliability is ensured by the choice of a network-of-managers organisation model [1], the configuration manager and the fault tolerance manager. With this proposed design, an easy-to-use, resource-independent data management system that can support and fulfill the computation of a natural disaster prediction within stipulated deadlines can thus be realised.
References: [1] H. G. Hegering, S. Abeck, and B. Neumair, Integrated Management of Networked Systems: Concepts, Architectures, and Their Operational Application, Morgan Kaufmann Publishers, San Francisco, CA, USA, 1999. [2] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, second edition, Springer, New York, NY, USA, 2011. [3] S. H. Leong, A. Frank, and D. Kranzlmüller, Leveraging e-infrastructures for urgent computing, Procedia Computer Science 18 (2013), 2177-2186, 2013 International Conference on Computational Science. [4] N. Trebon, Enabling urgent computing within the existing distributed computing infrastructure, Ph.D. thesis, University of Chicago, August 2011, http://people.cs.uchicago.edu/~ntrebon/docs/dissertation.pdf.
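The selection step this abstract describes can be pictured with a small, hypothetical example: the data manager asks the configuration manager for the available resources and picks the resource/protocol combination whose estimated completion time still meets the hard deadline. The Python sketch below is illustrative only; the class, field names and numbers are assumptions, not part of the cited system.

```python
# Hypothetical sketch of deadline-aware resource selection: given candidate
# resources and transfer protocols reported by a configuration manager, pick
# the combination whose estimated completion time meets the deadline with the
# largest margin. Names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    protocol: str          # e.g. "gridftp", "http", "scp"
    bandwidth_mb_s: float  # sustained transfer rate reported for this resource
    setup_s: float         # protocol/connection setup overhead

def pick_resource(resources, data_size_mb, deadline_s):
    """Return (resource, estimated_time) meeting the hard deadline, or None."""
    feasible = []
    for r in resources:
        estimate = r.setup_s + data_size_mb / r.bandwidth_mb_s
        if estimate <= deadline_s:
            feasible.append((deadline_s - estimate, estimate, r))
    if not feasible:
        return None                      # deadline cannot be met: escalate or abort
    margin, estimate, best = max(feasible, key=lambda t: t[0])
    return best, estimate

if __name__ == "__main__":
    candidates = [
        Resource("site-a", "gridftp", bandwidth_mb_s=120.0, setup_s=5.0),
        Resource("site-b", "http",    bandwidth_mb_s=40.0,  setup_s=1.0),
    ]
    print(pick_resource(candidates, data_size_mb=50_000, deadline_s=600))
```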
Gaia DR1 documentation Chapter 6: Variability
NASA Astrophysics Data System (ADS)
Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.
2017-12-01
This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 is responsible for the variability analysis of over a billion celestial sources, in particular for the definition, design, development, validation and provision of a software package for the data processing of photometrically variable objects. The Data Processing Centre Geneva (DPCG) responsibilities cover all issues related to the computational part of the CU7 analysis. These span: hardware provisioning, including selection, deployment and optimisation of suitable hardware; choosing and developing the software architecture; defining data and scientific workflows; as well as operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of Gaia DR1 focusses on a subsample of Cepheids and RR Lyrae stars around the south ecliptic pole, showcasing the performance of the Gaia photometry with respect to variable objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobson, D; Churby, A; Krieger, E
2011-07-25
The National Ignition Facility (NIF) is the world's largest laser composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State of the art Computer Aided Design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized then tracked along their entire lifecycle providing visibility to the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.
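The data flow described above (an engineering bill of material flattened into a manufacturing bill of material, with serialized parts tracked along their lifecycle) can be illustrated with the minimal Python sketch below; it is not NIF code, and all part names and statuses are invented.

```python
# Illustrative sketch of BOM flattening and serialized part tracking.
# All names are hypothetical.
import itertools

_serial = itertools.count(1)

def to_manufacturing_bom(engineering_bom):
    """Flatten a nested engineering BOM {assembly: [subparts...]} into a part list."""
    parts = []
    for assembly, children in engineering_bom.items():
        parts.append(assembly)
        parts.extend(children)
    return parts

def serialize(part_number):
    """Assign a unique serial number so the part can be tracked along its lifecycle."""
    return f"{part_number}-SN{next(_serial):06d}"

tracking = {}   # serial number -> (location, status)

ebom = {"optic-assembly": ["lens-cell", "mount-ring"]}
for pn in to_manufacturing_bom(ebom):
    sn = serialize(pn)
    tracking[sn] = ("receiving", "awaiting-inspection")

print(tracking)
```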
Supporting performance and configuration management of GTE cellular networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Ming; Lafond, C.; Jakobson, G.
GTE Laboratories, in cooperation with GTE Mobilnet, has developed and deployed PERFEX (PERFormance Expert), an intelligent system for performance and configuration management of cellular networks. PERFEX assists cellular network performance and radio engineers in the analysis of large volumes of cellular network performance and configuration data. It helps them locate and determine the probable causes of performance problems, and provides intelligent suggestions about how to correct them. The system combines an expert cellular network performance tuning capability with a map-based graphical user interface, data visualization programs, and a set of special cellular engineering tools. PERFEX is in daily use at more than 25 GTE Mobile Switching Centers. Since the first deployment of the system in late 1993, PERFEX has become a major GTE cellular network performance optimization tool.
NASA Astrophysics Data System (ADS)
Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe
2017-12-01
The multiplication of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard support, and power efficiency. However, flexible turbo decoder implementations have rarely considered dynamic reconfiguration issues in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.
Computer software configuration description, 241-AY and 241-AZ tank farm MICON automation system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkelman, W.D.
This document describes the configuration process, choices and conventions used during the configuration activities, and issues involved in making changes to the configuration. It includes the master listings of the tag definitions, which should be revised to authorize any changes. Revision 2 incorporates minor changes to ensure that the documented setpoints accurately reflect the limits (including an exhaust stack flow of 800 scfm) established in OSD-T-151-00019. The MICON DCS software controls and monitors the instrumentation and equipment associated with plant systems and processes.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
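A minimal sketch of the pixel-averaging idea, assuming the multi-resolution window simply averages non-overlapping pixel blocks inside a tracking window; the block size and window coordinates below are illustrative, not taken from the patent.

```python
# Average block x block pixel groups inside one tracking window to obtain a
# reduced-resolution view of that window. Illustrative only.
import numpy as np

def window_average(frame, top, left, height, width, block):
    """Average `block` x `block` pixel groups inside one tracking window."""
    win = frame[top:top + height, left:left + width].astype(float)
    h, w = (height // block) * block, (width // block) * block
    win = win[:h, :w].reshape(h // block, block, w // block, block)
    return win.mean(axis=(1, 3))   # reduced-resolution view of the window

frame = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
print(window_average(frame, top=32, left=64, height=64, width=64, block=4).shape)  # (16, 16)
```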
Infants' Perception of Chasing
ERIC Educational Resources Information Center
Frankenhuis, Willem E.; House, Bailey; Barrett, H. Clark; Johnson, Scott P.
2013-01-01
Two significant questions in cognitive and developmental science are first, whether objects and events are selected for attention based on their features (featural processing) or the configuration of their features (configural processing), and second, how these modes of processing develop. These questions have been addressed in part with…
Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier
2015-03-01
Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and the inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional use of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure to correctly use configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Ng, Tak-kwong (Inventor); Herath, Jeffrey A. (Inventor)
2010-01-01
An integrated system mitigates the effects of a single event upset (SEU) on a reprogrammable field programmable gate array (RFPGA). The system includes (i) a RFPGA having an internal configuration memory, and (ii) a memory for storing a configuration associated with the RFPGA. Logic circuitry programmed into the RFPGA and coupled to the memory reloads a portion of the configuration from the memory into the RFPGA's internal configuration memory at predetermined times. Additional SEU mitigation can be provided by logic circuitry on the RFPGA that monitors and maintains synchronized operation of the RFPGA's digital clock managers.
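The reload-at-predetermined-times idea can be pictured as a periodic configuration "scrubbing" loop. The following Python sketch is a hedged illustration, not the patented logic; the frame size, pass count and frame writer are assumptions.

```python
# Periodically rewrite ("scrub") configuration memory from a stored golden copy
# so that bit flips caused by single event upsets do not persist. Illustrative only.
import time

FRAME_WORDS = 64   # assumed size of one configuration frame

def scrub(golden_config, write_frame, passes=1, interval_s=0.0):
    """Rewrite every configuration frame from the golden copy, `passes` times."""
    n_frames = len(golden_config) // FRAME_WORDS
    for _ in range(passes):
        for frame in range(n_frames):
            start = frame * FRAME_WORDS
            write_frame(frame, golden_config[start:start + FRAME_WORDS])
        time.sleep(interval_s)   # wait until the next predetermined scrub time

# Demo with a stand-in writer that only records which frames would be reloaded.
golden = list(range(4 * FRAME_WORDS))
log = []
scrub(golden, lambda idx, words: log.append(idx), passes=2)
print(log)   # [0, 1, 2, 3, 0, 1, 2, 3]
```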
Model Ambiguities in Configurational Comparative Research
ERIC Educational Resources Information Center
Baumgartner, Michael; Thiem, Alrik
2017-01-01
For many years, sociologists, political scientists, and management scholars have readily relied on Qualitative Comparative Analysis (QCA) for the purpose of configurational causal modeling. However, this article reveals that a severe problem in the application of QCA has gone unnoticed so far: model ambiguities. These arise when multiple causal…
SAMI Automated Plug Plate Configuration
NASA Astrophysics Data System (ADS)
Lorente, N. P. F.; Farrell, T.; Goodwin, M.
2013-10-01
The Sydney-AAO Multi-object Integral field spectrograph (SAMI) is a prototype wide-field system at the Anglo-Australian Telescope (AAT) which uses a plug-plate to mount its 13×61-core imaging fibre bundles (hexabundles) in the optical path at the telescope's prime focus. In this paper we describe the process of determining the positions of the plug-plate holes, where plates contain three or more stacked observation configurations. The process, which up until now has involved several separate processes and has required significant manual configuration and checking, is now being automated to increase efficiency and reduce error. This is carried out by means of a thin Java controller layer which drives the configuration cycle. This layer controls the user interface and the C++ algorithm layer where the plate configuration and optimisation is carried out. Additionally, through the Aladin display package, it provides visualisation and facilitates user verification of the resulting plates.
Creep life management system for a turbine engine and method of operating the same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tralshawala, Nilesh; Miller, Harold Edward; Badami, Vivek Venugopal
A creep life management system includes at least one sensor apparatus coupled to a first component. The at least one sensor apparatus is configured with a unique identifier. The creep life management system also includes at least one reader unit coupled to a second component. The at least one reader unit is configured to transmit an interrogation request signal to the at least one sensor apparatus and receive a measurement response signal transmitted from the at least one sensor apparatus. The creep life management system further includes at least one processor programmed to determine a real-time creep profile of the first component as a function of the measurement response signal transmitted from the at least one sensor apparatus.
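The interrogation pattern described here (a reader addressing a sensor by its unique identifier and accumulating measurement responses into a creep profile) might look roughly like the following Python sketch; all field names and the stand-in strain values are invented for illustration.

```python
# Hypothetical reader/sensor exchange feeding a running creep profile.
from dataclasses import dataclass, field

@dataclass
class SensorApparatus:
    uid: str
    strain_history: list = field(default_factory=list)

    def respond(self, request_uid):
        """Answer an interrogation request addressed to this sensor's unique identifier."""
        if request_uid != self.uid:
            return None
        reading = 1.0e-4 * (len(self.strain_history) + 1)   # stand-in strain value
        self.strain_history.append(reading)
        return {"uid": self.uid, "strain": reading}

def creep_profile(responses):
    """Accumulate strain readings into a simple real-time creep profile."""
    return [r["strain"] for r in responses if r is not None]

sensor = SensorApparatus(uid="BLADE-07")
reader_log = [sensor.respond("BLADE-07") for _ in range(3)]
print(creep_profile(reader_log))
```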
Introduction to the scientific application system of DAMPE (On behalf of DAMPE collaboration)
NASA Astrophysics Data System (ADS)
Zang, Jingjing
2016-07-01
The Dark Matter Particle Explorer (DAMPE) is a high energy particle physics experiment satellite, launched on 17 Dec 2015. The science data processing and payload operation maintenance for DAMPE will be provided by the DAMPE Scientific Application System (SAS) at the Purple Mountain Observatory (PMO) of the Chinese Academy of Sciences. SAS consists of three subsystems: the scientific operation subsystem, the science data and user management subsystem, and the science data processing subsystem. In cooperation with the Ground Support System (Beijing), the scientific operation subsystem is responsible for proposing observation plans, monitoring the health of the satellite, generating payload control commands and participating in all activities related to payload operation. Several databases developed by the science data and user management subsystem of DAMPE methodically manage all collected and reconstructed science data, downlinked housekeeping data, and payload configuration and calibration data. Under the leadership of the DAMPE Scientific Committee, this subsystem is also responsible for the publication of high-level science data and for supporting all science activities of the DAMPE collaboration. The science data processing subsystem of DAMPE has already developed a series of physics analysis software packages to reconstruct basic information about detected cosmic ray particles. This subsystem also maintains the high performance computing system of SAS to process all downlinked science data, and automatically monitors the quality of all produced data. In this talk, we describe all functionalities of the DAMPE SAS system and present the main performance of its data processing capability.
Multi-objective reverse logistics model for integrated computer waste management.
Ahluwalia, Poonam Khanijo; Nema, Arvind K
2006-12-01
This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
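A greatly simplified sketch of the cost-versus-risk selection problem described above: enumerate candidate facility configurations, estimate the waste quantity by Monte Carlo sampling, and minimise a weighted sum of cost and risk. This is not the paper's integer linear programming model; all facilities, capacities and weights are illustrative.

```python
# Brute-force illustration of a two-objective facility selection under uncertain demand.
import random
from itertools import product

FACILITIES = {                 # candidate sites: (fixed cost, risk score, capacity in tonnes)
    "storage-A":   (100.0, 2.0, 500),
    "recycle-B":   (250.0, 1.0, 400),
    "disposal-C":  (150.0, 4.0, 800),
}

def expected_demand(mean=600, sd=80, samples=1000):
    """Monte Carlo estimate of the waste quantity to be handled."""
    random.seed(0)
    return sum(max(0.0, random.gauss(mean, sd)) for _ in range(samples)) / samples

def best_configuration(weight_cost=0.5, weight_risk=0.5):
    demand = expected_demand()
    best = None
    for choice in product([0, 1], repeat=len(FACILITIES)):     # open/close each facility
        names = [n for n, open_ in zip(FACILITIES, choice) if open_]
        capacity = sum(FACILITIES[n][2] for n in names)
        if capacity < demand:                                  # must be able to take the waste
            continue
        cost = sum(FACILITIES[n][0] for n in names)
        risk = sum(FACILITIES[n][1] for n in names)
        score = weight_cost * cost + weight_risk * risk
        if best is None or score < best[0]:
            best = (score, names, cost, risk)
    return best

print(best_configuration())
```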
The Advanced Communication Technology Satellite and ISDN
NASA Technical Reports Server (NTRS)
Lowry, Peter A.
1996-01-01
This paper depicts the Advanced Communication Technology Satellite (ACTS) system as a global central office switch. The ground portion of the system is the collection of earth stations or T1-VSAT's (T1 very small aperture terminals). The control software for the T1-VSAT's resides in a single CPU. The software consists of two modules, the modem manager and the call manager. The modem manager (MM) controls the RF modem portion of the T1-VSAT. It processes the orderwires from the satellite or from signaling generated by the call manager (CM). The CM controls the Recom Laboratories MSPs by receiving signaling messages from the stacked MSP shelves or units and sending appropriate setup commands to them. There are two methods used to set up and process calls in the CM: first, by dialing up a circuit using a standard telephone handset; or second, by using an external processor connected to the CPU's second COM port, sending and receiving signaling orderwires. It is the use of the external processor which permits the ISDN (Integrated Services Digital Network) Signaling Processor to implement ISDN calls. In August 1993, the initial testing of the ISDN Signaling Processor was carried out at ACTS System Test at Lockheed Marietta, Princeton, NJ using the spacecraft in its test configuration on the ground.
Kepler Science Operations Center Pipeline Framework
NASA Technical Reports Server (NTRS)
Klaus, Todd C.; McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Middour, Christopher; Caldwell, Douglas A.; Jenkins, Jon M.
2010-01-01
The Kepler mission is designed to continuously monitor up to 170,000 stars at a 30 minute cadence for 3.5 years searching for Earth-size planets. The data are processed at the Science Operations Center (SOC) at NASA Ames Research Center. Because of the large volume of data and the memory and CPU-intensive nature of the analysis, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize the processing across a cluster of CPUs and to manage the resulting products. The framework is written in Java and is therefore platform-independent, and scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized control of the unit of work without the need to modify the framework itself. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services are provided to record the parameter values, software versions, and other meta-data used for each pipeline execution. A graphical console allows for the configuration, execution, and monitoring of pipelines. An alert and metrics subsystem is used to monitor the health and performance of the pipeline. The framework was developed for the Kepler project based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.
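The plug-in idea described above, in which the framework drives generic modules over units of work and records parameters for data accountability, can be sketched as follows. The class and method names are illustrative, not the Kepler SOC API.

```python
# Sketch of a plug-in pipeline framework: the framework never needs to know
# what a unit of work is; each module defines that itself. Illustrative only.
from abc import ABC, abstractmethod

class PipelineModule(ABC):
    """Plug-in contract for customized control of the unit of work."""
    @abstractmethod
    def units_of_work(self, task):
        ...
    @abstractmethod
    def process(self, unit, parameters):
        ...

class Framework:
    def __init__(self):
        self.accountability = []          # (module, unit, parameters) records

    def run(self, module: PipelineModule, task, parameters):
        results = []
        for unit in module.units_of_work(task):
            results.append(module.process(unit, parameters))
            self.accountability.append((type(module).__name__, unit, dict(parameters)))
        return results

class SplitByTarget(PipelineModule):
    def units_of_work(self, task):
        return task["targets"]
    def process(self, unit, parameters):
        return f"processed {unit} at cadence {parameters['cadence_min']}"

fw = Framework()
print(fw.run(SplitByTarget(), {"targets": ["kic-1", "kic-2"]}, {"cadence_min": 30}))
```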
Configuration Analysis Tool (CAT). System Description and users guide (revision 1)
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Mcgarry, F. E.; Merwarth, P.
1982-01-01
A system description of, and user's guide for, the Configuration Analysis Tool (CAT) are presented. As a configuration management tool, CAT enhances the control of large software systems by providing a repository for information describing the current status of a project. CAT provides an editing capability to update the information and a reporting capability to present the information. CAT is an interactive program available in versions for the PDP-11/70 and VAX-11/780 computers.
Context based configuration management system
NASA Technical Reports Server (NTRS)
Gurram, Mohana M. (Inventor); Maluf, David A. (Inventor); Mederos, Luis A. (Inventor); Gawdiak, Yuri O. (Inventor)
2010-01-01
A computer-based system for configuring and displaying information on changes in, and present status of, a collection of events associated with a project. Classes of icons for decision events, configurations and feedback mechanisms, and time lines (sequential and/or simultaneous) for related events are displayed. Metadata for each icon in each class is displayed by choosing and activating the corresponding icon. Access control (viewing, reading, writing, editing, deleting, etc.) is optionally imposed for metadata and other displayed information.
Management system for the SND experiments
NASA Astrophysics Data System (ADS)
Pugachev, K.; Korol, A.
2017-09-01
A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases. These databases contain experiment configuration, conditions and metadata. The new system is designed in a client-server architecture. It has several logical layers corresponding to the users' roles. A new template engine has been created. A web application is implemented using the Node.js framework. At present the application provides: viewing and editing of the configuration; viewing of experiment metadata and of the experiment conditions data index; and viewing of the SND log (prototype).
1982-10-01
EXECUTIVE SUMMARY: The O'Hare Runway Configuration Management System (CMS) is an interactive multi-user computer system ... at MITRE Washington's Computer Center. Currently, CMS is housed in an IBM 4341 computer with the VM/SP operating system. CMS employs IBM's Display ... At O'Hare, it will operate on a dedicated minicomputer which permits multi-tasking (that is, multiple users ...)
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
48 CFR 352.239-70 - Standard for security configurations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... configure its computers that contain HHS data with the applicable Federal Desktop Core Configuration (FDCC) (see http://nvd.nist.gov/fdcc/index.cfm) and ensure that its computers have and maintain the latest... technology (IT) that is used to process information on behalf of HHS. The following security configuration...
A novel BCI based on ERP components sensitive to configural processing of human faces
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
A novel BCI based on ERP components sensitive to configural processing of human faces.
Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2012-04-01
This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has not previously been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
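The reported classification step (linear discriminant analysis applied to ERP features without elaborate feature extraction) can be sketched as below. The synthetic data only makes the example runnable; it does not reproduce the study's signals or accuracy.

```python
# LDA on single-trial "ERP" feature vectors, e.g. N170/VPP/P300 amplitudes
# taken from several channels and latencies. Synthetic data for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 32            # e.g. amplitudes at a few electrodes/latencies
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # target vs non-target stimulus
X[y == 1, :4] += 0.8                      # targets get a stronger "ERP" on some features

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())   # single-trial detection accuracy
```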
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... Software Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION... Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses, with clarifications... Electrical and Electronic Engineers (IEEE) Standard 828-2005, ``IEEE Standard for Software Configuration...
Managing configuration software of ground software applications with glueware
NASA Technical Reports Server (NTRS)
Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.
2003-01-01
This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.
78 FR 23690 - Airworthiness Directives; The Boeing Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-22
... management system (CMS) configuration database; and installing new operational program software (OPS) for the CSCP, zone management unit (ZMU), passenger address controller, cabin interphone controller, cabin area... on the Internet at http://www.regulations.gov ; or in person at the Docket Management Facility...
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Applying NASA's explosive seam welding
NASA Technical Reports Server (NTRS)
Bement, Laurence J.
1991-01-01
The status of an explosive seam welding process, which was developed and evaluated for a wide range of metal joining opportunities, is summarized. The process employs very small quantities of explosive in a ribbon configuration to accelerate a long-length, narrow area of sheet stock into a high-velocity, angular impact against a second sheet. At impact, the oxide films of both surfaces are broken up and ejected by the closing angle to allow atoms to bond through the sharing of valence electrons. This cold-working process produces joints having parent metal properties, allowing a variety of joints to be fabricated that achieve the full strength of the metals employed. Successful joining was accomplished in all aluminum alloys, a wide variety of iron and steel alloys, copper, brass, titanium, tantalum, zirconium, niobium, tellurium, and columbium. Safety issues were addressed and are as manageable as those of many currently accepted joining processes.
National Cycle Program (NCP) Common Analysis Tool for Aeropropulsion
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; Evans, A.
1999-01-01
Through the NASA/Industry Cooperative Effort (NICE) agreement, NASA Lewis and industry partners are developing a new engine simulation, called the National Cycle Program (NCP), which is the initial framework of NPSS. NCP is the first phase toward achieving the goal of NPSS. This new software supports the aerothermodynamic system simulation process for the full life cycle of an engine. The National Cycle Program (NCP) was written following the Object Oriented Paradigm (C++, CORBA). The software development process used was also based on the Object Oriented paradigm. Software reviews, configuration management, test plans, requirements, and design were all a part of the process used in developing NCP. Due to the many contributors to NCP, the stated software process was mandatory for building a common tool intended for use by so many organizations. U.S. aircraft and airframe companies recognize NCP as the future industry standard for propulsion system modeling.
System and method of designing a load bearing layer of an inflatable vessel
NASA Technical Reports Server (NTRS)
Spexarth, Gary R. (Inventor)
2007-01-01
A computer-implemented method is provided for designing a restraint layer of an inflatable vessel. The restraint layer is inflatable from an initial uninflated configuration to an inflated configuration and is constructed from a plurality of interfacing longitudinal straps and hoop straps. The method involves providing computer processing means (e.g., to receive user inputs, perform calculations, and output results) and utilizing this computer processing means to implement a plurality of subsequent design steps. The computer processing means is utilized to input the load requirements of the inflated restraint layer and to specify an inflated configuration of the restraint layer. This includes specifying a desired design gap between pairs of adjacent longitudinal or hoop straps, whereby the adjacent straps interface with a plurality of transversely extending hoop or longitudinal straps at a plurality of intersections. Furthermore, an initial uninflated configuration of the restraint layer that is inflatable to achieve the specified inflated configuration is determined. This includes calculating a manufacturing gap between pairs of adjacent longitudinal or hoop straps that correspond to the specified desired gap in the inflated configuration of the restraint layer.
What carries a mediation process? Configural analysis of mediation.
von Eye, Alexander; Mun, Eun Young; Mair, Patrick
2009-09-01
Mediation is a process that links a predictor and a criterion via a mediator variable. Mediation can be full or partial. This well-established definition operates at the level of variables even if they are categorical. In this article, two new approaches to the analysis of mediation are proposed. Both of these approaches focus on the analysis of categorical variables. The first involves mediation analysis at the level of configurations instead of variables. Thus, mediation can be incorporated into the arsenal of methods of analysis for person-oriented research. Second, it is proposed that Configural Frequency Analysis (CFA) can be used for both exploration and confirmation of mediation relationships among categorical variables. The implications of using CFA are first that mediation hypotheses can be tested at the level of individual configurations instead of variables. Second, this approach leaves the door open for different types of mediation processes to exist within the same set. Using a data example, it is illustrated that aggregate-level analysis can overlook mediation processes that operate at the level of individual configurations.
Sánchez Cuervo, Marina; Muñoz García, María; Gómez de Salazar López de Silanes, María Esther; Bermejo Vicedo, Teresa
2015-03-01
To describe the features of a computer program for the management of drugs in special situations (off-label and compassionate use) in a Department of Hospital Pharmacy (PD); to describe the methodology followed for its implementation in the Medical Services; and to evaluate its use after 2 years of practice. The design was carried out by pharmacists of the PD. The stages of the process were: selection of a software development company, establishment of a working group, selection of a development platform, design of an interactive viewer, definition of functionality and data processing, creation of databases, connection, installation and configuration, application testing, and development of improvements. A directed sequential strategy was used for implementation in the Medical Services. The program's utility and experience of use were evaluated after 2 years. A multidisciplinary working group was formed and developed Pk_Usos®. The program works in a web environment with a common viewer for all users, enabling real-time checking of the status of request files, and adapts to the procedure for managing medications in special situations. Pk_Usos® was introduced first in the Oncology Department, with 15 oncologists as users of the program. For 343 patients, 384 treatment requests were managed, of which 363 were authorized over the two years. Pk_Usos® is the first software designed for the management of drugs in special situations in the PD. It is a dynamic and efficient tool for all professionals involved in the process, optimizing processing times. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
An exploratory study of organization design configurations in health care delivery organizations.
Sheppeck, Mick; Militello, Jack
2014-01-01
Organizations are configurations of variables that support each other to achieve customer satisfaction. Based on Treacy and Wiersema (1995), we predicted the emergence of two configurations, one supporting a product leadership stance and one supporting the customer-intimate approach, from a set of 73 for-profit health care clinics. In addition, we predicted the emergence of a configuration where the scores on most variables were near the mean for each variable. Using cluster analysis and discriminant function analysis, we identified three configurations: one a "master of two" strategy, one "stuck-in-the-middle," and one showing scores well below the mean on most variables. The implications for organization design and manager actions in the health care industry are discussed.
Design and implementation of fishery rescue data mart system
NASA Astrophysics Data System (ADS)
Pan, Jun; Huang, Haiguang; Liu, Yousong
A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process to handle original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistics interfaces automatically through a wizard-style process. The system provides various report formats, including Crystal Reports, Flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing query processes, making it possible for non-technical staff to access customized reports. Characterized by quick configuration, safety and flexibility, the system has been successfully applied in a city fishery rescue department.
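The "quick configuration" idea, where a small configuration of query dimensions and measures is turned into an OLAP-style aggregation, might look like the following Python sketch; the schema and field names are invented for the example.

```python
# Turn a dimension/measure configuration into a GROUP BY query and run it.
import sqlite3

def build_query(config):
    dims = ", ".join(config["dimensions"])
    measures = ", ".join(f"{fn}({col}) AS {fn}_{col}" for col, fn in config["measures"])
    return f"SELECT {dims}, {measures} FROM {config['table']} GROUP BY {dims}"

config = {
    "table": "rescue_events",
    "dimensions": ["region", "year"],
    "measures": [("vessels_assisted", "SUM"), ("response_minutes", "AVG")],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rescue_events (region TEXT, year INT, vessels_assisted INT, response_minutes REAL)")
conn.executemany("INSERT INTO rescue_events VALUES (?, ?, ?, ?)",
                 [("north", 2012, 3, 42.0), ("north", 2012, 1, 55.0), ("south", 2012, 2, 38.0)])
print(build_query(config))
for row in conn.execute(build_query(config)):
    print(row)
```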
An object-oriented approach to deploying highly configurable Web interfaces for the ATLAS experiment
NASA Astrophysics Data System (ADS)
Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor
2015-12-01
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to assure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server side but, following progressive enhancement guidelines, verification might also be performed on the client side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
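The double-bracket referencing described above can be illustrated with a short sketch: strings like {{1234}} in a JSON configuration are replaced by the record a data layer resolves for that identifier. This is a Python illustration of the idea, not the PHP implementation used by FENCE, and the lookup data is invented.

```python
# Walk a parsed JSON configuration and expand {{id}} references from a record store.
import json
import re

RECORDS = {"1234": {"name": "Radiation monitor DB", "owner": "tech-coord"}}   # invented lookup data

def resolve(node):
    """Recursively expand {{id}} references in a parsed JSON structure."""
    if isinstance(node, dict):
        return {k: resolve(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve(v) for v in node]
    if isinstance(node, str):
        m = re.fullmatch(r"\{\{(\w+)\}\}", node)
        if m:
            return RECORDS[m.group(1)]
    return node

config = json.loads('{"title": "Radiation levels", "source": "{{1234}}"}')
print(resolve(config))
```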
Ground control station software design for micro aerial vehicles
NASA Astrophysics Data System (ADS)
Walendziuk, Wojciech; Oldziej, Daniel; Binczyk, Dawid Przemyslaw; Slowik, Maciej
2017-08-01
This article describes the process of designing the hardware and the software of a ground control station used for configuring and operating micro unmanned aerial vehicles (UAV). All the work was conducted on a quadrocopter model, a commonly available commercial construction. The article characterizes the research object, covers the basics of operating micro aerial vehicles (MAV) and presents the components of the ground control station model. It also describes the communication standards used for building a model of the station. A further part of the work concerns the software of the product, the GIMSO application (Generally Interactive Station for Mobile Objects), which enables the user to manage the actions, communication and control processes of the UAV. The process of creating the software and the field tests of a station model are also presented in the article.
INcreasing Security and Protection through Infrastructure REsilience: The INSPIRE Project
NASA Astrophysics Data System (ADS)
D'Antonio, Salvatore; Romano, Luigi; Khelil, Abdelmajid; Suri, Neeraj
The INSPIRE project aims at enhancing the European potential in the field of security by ensuring the protection of critical information infrastructures through (a) the identification of their vulnerabilities and (b) the development of innovative techniques for securing networked process control systems. To increase the resilience of such systems INSPIRE will develop traffic engineering algorithms, diagnostic processes and self-reconfigurable architectures along with recovery techniques. Hence, the core idea of the INSPIRE project is to protect critical information infrastructures by appropriately configuring, managing, and securing the communication network which interconnects the distributed control systems. A working prototype will be implemented as a final demonstrator of selected scenarios. Controls/Communication Experts will support project partners in the validation and demonstration activities. INSPIRE will also contribute to standardization process in order to foster multi-operator interoperability and coordinated strategies for securing lifeline systems.
Calibration and simulation of two large wastewater treatment plants operated for nutrient removal.
Ferrer, J; Morenilla, J J; Bouzas, A; García-Usach, F
2004-01-01
Control and optimisation of plant processes has become a priority for WWTP managers. The calibration and verification of a mathematical model provides an important tool for the investigation of advanced control strategies that may assist in the design or optimization of WWTPs. This paper describes the calibration of the ASM2d model for two full scale biological nitrogen and phosphorus removal plants in order to characterize the biological process and to upgrade the plants' performance. Results from simulation showed a good correspondence with experimental data demonstrating that the model and the calibrated parameters were able to predict the behaviour of both WWTPs. Once the calibration and simulation process was finished, a study for each WWTP was done with the aim of improving its performance. Modifications focused on reactor configuration and operation strategies were proposed.
Hanford tanks initiative (HTI) configuration management desk instruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaus, P.S., Fluor Daniel Hanford
The purpose of the document is to provide working level directions for submitting requirements, making changes to the requirements database, and entering Project documentation into the HTI Project information and document management system.
Kiesel, Andrea; Kunde, Wilfried; Pohl, Carsten; Berner, Michael P; Hoffmann, Joachim
2009-01-01
Expertise in a certain stimulus domain enhances perceptual capabilities. In the present article, the authors investigate whether expertise improves perceptual processing to an extent that allows complex visual stimuli to bias behavior unconsciously. Expert chess players judged whether a target chess configuration entailed a checking configuration. These displays were preceded by masked prime configurations that either represented a checking or a nonchecking configuration. Chess experts, but not novice chess players, revealed a subliminal response priming effect, that is, faster responding when prime and target displays were congruent (both checking or both nonchecking) rather than incongruent. Priming generalized to displays that were not used as targets, ruling out simple repetition priming effects. Thus, chess experts were able to judge unconsciously presented chess configurations as checking or nonchecking. A 2nd experiment demonstrated that experts' priming does not occur for simpler but uncommon chess configurations. The authors conclude that long-term practice prompts the acquisition of visual memories of chess configurations with integrated form-location conjunctions. These perceptual chunks enable complex visual processing outside of conscious awareness.
Management approach recommendations. Earth Observatory Satellite system definition study (EOS)
NASA Technical Reports Server (NTRS)
1974-01-01
Management analyses and tradeoffs were performed to determine the most cost effective management approach for the Earth Observatory Satellite (EOS) Phase C/D. The basic objectives of the management approach are identified. Some of the subjects considered are as follows: (1) contract startup phase, (2) project management control system, (3) configuration management, (4) quality control and reliability engineering requirements, and (5) the parts procurement program.
GI-conf: A configuration tool for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.
2009-04-01
In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the caller of the status of the queries, implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. However, two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. These include international standards like the OGC Web Services, i.e. OGC CSW, WCS, WFS and WMS, as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. It is a GUI application that employs a simple visual approach to dynamically configure both the GI-cat publishing and distribution capabilities. The tool allows one or more GI-cat configurations to be set. Each configuration consists of: a) the catalog standards interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; in addition, she/he can add a new resource, or update or remove an already federated resource. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the currently active GI-cat configuration, including the resources that are being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.
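A minimal sketch of the configuration model GI-conf manages, assuming a configuration is simply a named set of published interfaces plus federated resources that can be edited and exported; the structure and field names below are illustrative only.

```python
# Illustrative catalog configuration: published interfaces plus federated resources.
import json

class CatalogConfiguration:
    def __init__(self, name):
        self.name = name
        self.interfaces = set()        # e.g. {"CSW ISO", "CSW Core"}
        self.resources = {}            # resource name -> endpoint URL

    def add_resource(self, name, endpoint):
        self.resources[name] = endpoint

    def remove_resource(self, name):
        self.resources.pop(name, None)

    def export(self):
        """Serialise the configuration, e.g. to hand over to the catalog service."""
        return json.dumps({"name": self.name,
                           "interfaces": sorted(self.interfaces),
                           "resources": self.resources}, indent=2)

cfg = CatalogConfiguration("demo")
cfg.interfaces.update({"CSW ISO", "CSW Core"})
cfg.add_resource("THREDDS server", "http://example.org/thredds/catalog.xml")
print(cfg.export())
```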
LHCb Online event processing and filtering
NASA Astrophysics Data System (ADS)
Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.
2008-07-01
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed.
Lightning Mapper Sensor Lens Assembly S.O. 5459: Project Management Plan
NASA Technical Reports Server (NTRS)
Zeidler, Janet
1999-01-01
Kaiser Electro-Optics, Inc. (KEO) has developed this Project Management Plan for the Lightning Mapper Sensor (LMS) program. KEO has integrated a team of experts in a structured program management organization to meet the needs of the LMS program. The project plan discusses KEO's approach to critical program elements including Program Management, Quality Assurance, Configuration Management, and Schedule.
Snap evaporation of droplets on smooth topographies.
Wells, Gary G; Ruiz-Gutiérrez, Élfego; Le Lirzin, Youen; Nourry, Anthony; Orme, Bethany V; Pradas, Marc; Ledesma-Aguilar, Rodrigo
2018-04-11
Droplet evaporation on solid surfaces is important in many applications including printing, micro-patterning and cooling. While seemingly simple, the configuration of evaporating droplets on solids is difficult to predict and control. This is because evaporation typically proceeds as a "stick-slip" sequence: a combination of pinning and de-pinning events dominated by static friction or "pinning", caused by microscopic surface roughness. Here we show how smooth, pinning-free, solid surfaces of non-planar topography promote a different process called snap evaporation. During snap evaporation a droplet follows a reproducible sequence of configurations, consisting of a quasi-static phase change controlled by mass diffusion interrupted by out-of-equilibrium snaps. Snaps are triggered by bifurcations of the equilibrium droplet shape mediated by the underlying non-planar solid. Because the evolution of droplets during snap evaporation is controlled by a smooth topography, and not by surface roughness, our ideas can inspire programmable surfaces that manage liquids in heat- and mass-transfer applications.
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows a suite of model components to be combined and coupled, as well as the tasks to be executed independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML form at run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and on platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
Standoff aircraft IR characterization with ABB dual-band hyper spectral imager
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc
2012-09-01
Remote sensing infrared characterization of rapidly evolving events generally involves the combination of a spectro-radiometer and infrared camera(s) as separate instruments. Time synchronization, spatial co-registration, consistent radiometric calibration and managing several systems are important challenges to overcome; they complicate the target infrared characterization data processing and increase the sources of error affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 x 256 pixel infrared cameras and an infrared spectro-radiometer into a single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR. It is designed to acquire the spectral signatures of rapidly evolving events. The design is modular. The spectrometer has two output ports configured with two simultaneously operated cameras to either widen the spectral coverage or increase the dynamic range of the measured amplitudes. Various telescope options are available for the input port. Recent platform developments and field trial measurement performance are presented for a system configuration dedicated to the characterization of airborne targets.
Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy
2011-10-01
Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support with many commercial, open source, and proprietary systems in use. New web-based tools are available which can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. To demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five intervention site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials. A major goal was to develop database that participants could access from computers in their home communities for direct data entry. Discussed is the selection process leading to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, template-based web portal creation application, and web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system has successfully been used with 125 participants in 5 communities, who have completed 536 sets of assessment questionnaires, 8 community therapists, and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools as each is loosely affiliated, creating some inefficiency. This system is best suited to investigations with a single data source e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2016-08-23
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
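As a rough illustration of the dispatch pattern the claim describes, the hedged Python sketch below registers reasoning modules against ontology classification types and hands each module only the abstractions of its own type; all class and identifier names are hypothetical and not drawn from the patent.

```python
# Minimal sketch (not the patented implementation): reasoning modules are
# registered against ontology classification types and are handed only the
# abstractions from working memory whose individuals match their type.
from dataclasses import dataclass, field


@dataclass
class Abstraction:
    individual: str          # identifier of the individual in the semantic graph
    classification: str      # ontology classification type of that individual
    properties: dict = field(default_factory=dict)


class ReasoningModule:
    def __init__(self, classification):
        self.classification = classification   # type this module knows how to process

    def process(self, abstraction):
        # Placeholder inference step; a real module would assert new graph edges.
        return f"{self.classification}-module processed {abstraction.individual}"


class ReasoningSystem:
    def __init__(self):
        self.modules = {}                        # classification type -> module

    def register(self, module):
        self.modules[module.classification] = module

    def run(self, working_memory):
        results = []
        for abstraction in working_memory:       # dispatch by classification type
            module = self.modules.get(abstraction.classification)
            if module is not None:
                results.append(module.process(abstraction))
        return results


if __name__ == "__main__":
    system = ReasoningSystem()
    system.register(ReasoningModule("Person"))   # first classification type
    system.register(ReasoningModule("Event"))    # second, different type
    memory = [Abstraction("alice", "Person"), Abstraction("login-42", "Event")]
    for line in system.run(memory):
        print(line)
```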
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2015-08-18
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E; Greitzer, Frank L; Hampton, Shawn D
2014-03-04
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R
2006-01-01
In this paper we describe the approach to build a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and directly used to dynamically render frontend forms and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing performance drawbacks of pure EAV implementations to a minimum. The system currently is in use in an electronic patient record with focus on flexibility and a quality management application (www.healthgate.at) with high performance requirements.
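As an illustration only, the following Python/SQLite sketch shows one plausible reading of a "mixed" EAV layout: stable fields in a conventional table, configurable attributes in an EAV table governed by a small data dictionary. The table and column names are assumptions, not the authors' schema.

```python
# Illustrative sketch of a "mixed" EAV layout (not the authors' schema):
# stable, rarely changing fields live in a conventional table, while
# configurable clinical attributes go into an entity-attribute-value table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (                 -- conventional table: fast, fixed columns
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    birth_year INTEGER
);
CREATE TABLE attribute_def (           -- data dictionary used for validation
    code TEXT PRIMARY KEY,
    datatype TEXT NOT NULL
);
CREATE TABLE patient_eav (             -- flexible attributes stored as rows
    patient_id INTEGER REFERENCES patient(id),
    code TEXT REFERENCES attribute_def(code),
    value TEXT
);
""")

conn.execute("INSERT INTO patient VALUES (1, 'Doe, J.', 1970)")
conn.executemany("INSERT INTO attribute_def VALUES (?, ?)",
                 [("hba1c", "float"), ("smoker", "bool")])
conn.executemany("INSERT INTO patient_eav VALUES (1, ?, ?)",
                 [("hba1c", "6.8"), ("smoker", "false")])

# Query combining the fixed columns with pivoted EAV attributes.
row = conn.execute("""
    SELECT p.name,
           MAX(CASE WHEN e.code = 'hba1c'  THEN e.value END) AS hba1c,
           MAX(CASE WHEN e.code = 'smoker' THEN e.value END) AS smoker
    FROM patient p LEFT JOIN patient_eav e ON e.patient_id = p.id
    GROUP BY p.id
""").fetchone()
print(row)   # ('Doe, J.', '6.8', 'false')
```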
50th Annual Fuze Conference. Session 1 and 2
2006-05-11
PROGRAM OFFICE AMSRD-AAR-AIJ J. Goldman X6060 STRATEGIC MGT OFFICE AMSRD-AAR-EMS D. Denery X6081 KNOWLEDGE MANAGEMENT OFFICE AMSRD-AAR-EMK G. Albinson...Tail-Mounted Configuration (MK-82 Demo) UHF to L-Band Pulse Doppler Radar Using Low Cost COTS Components Nose and Tail Mount Configurations Only
Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge
NASA Technical Reports Server (NTRS)
Yap, Keng C.
2010-01-01
This viewgraph presentation reviews Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge. The Wing Leading Edge Impact Detection System (WLE IDS) and the Impact Analysis Process are also described to monitor WLE debris threats. The contents include: 1) Risk Management via SHM; 2) Hardware Overview; 3) Instrumentation; 4) Sensor Configuration; 5) Debris Hazard Monitoring; 6) Ascent Response Summary; 7) Response Signal; 8) Distribution of Flight Indications; 9) Probabilistic Risk Analysis (PRA); 10) Model Correlation; 11) Impact Tests; 12) Wing Leading Edge Modeling; 13) Ascent Debris PRA Results; and 14) MM/OD PRA Results.
Application research of Ganglia in Hadoop monitoring and management
NASA Astrophysics Data System (ADS)
Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei
2017-03-01
Hadoop systems have many applications in the fields of big data and cloud computing. The storage and application test bench of the seismic network at the Earthquake Administration of Tianjin uses a Hadoop system, which is operated and monitored with the open source software Ganglia. This paper reviews the functions of Ganglia, its installation and configuration process, and its effectiveness for operating and monitoring the Hadoop system. It also briefly introduces the idea and effect of monitoring the Hadoop system with the Nagios software. This experience is valuable for the industry when building monitoring systems for cloud computing platforms.
Modal Identification Experiment accommodations review
NASA Technical Reports Server (NTRS)
Klich, Phillip J.; Stillwagen, Frederic H.; Mutton, Philip
1994-01-01
The Modal Identification Experiment (MIE) will monitor the structure of the Space Station Freedom (SSF), and measure its response to a sequence of induced disturbances. The MIE will determine the frequency, damping, and shape of the important modes during the SSF assembly sequence including the Permanently Manned Configuration. This paper describes the accommodations for the proposed instrumentation, the data processing hardware, and the communications data rates. An overview of the MIE operational modes for measuring SSF acceleration forces with accelerometers is presented. The SSF instrumentation channel allocations and the Data Management System (DMS) services required for MIE are also discussed.
Cryogenic fluid management program flight concept definition
NASA Technical Reports Server (NTRS)
Kroeger, Erich
1987-01-01
The Lewis Research Center's cryogenic fluid management program flight concept definition is presented in viewgraph form. Diagrams are given of the cryogenic fluid management subpallet and its configuration with the Delta launch vehicle. Information is given in outline form on feasibility studies, requirements definition, and flight experiments design.
NASA Astrophysics Data System (ADS)
Pruin, B.; Martini, A.; Shanmugam, P.; Lopes, C.
2015-04-01
The Swarm mission consists of 3 satellites, each carrying an identical set of instruments. The scientific algorithms for processing are organized in 11 separate processing steps including automated product quality control. In total, the mission data consists of data products of several hundred distinct types from raw to level 2 product types and auxiliary data. The systematic production for Swarm within the ESA Archiving and Payload Data Facility (APDF) is performed up to level 2. The production up to L2 (CAT2-mature algorithm) is performed completely within the APDF. A separate systematic production chain from L1B to L2 (CAT1-evolving algorithm) is performed by an external facility (L2PS) with output files archived within the APDF as well. The APDF also performs re-processing exercises. Re-processing may start directly from the acquired data or from any other intermediate level resulting in the need for a refined product version and baseline management. Storage, dissemination and circulation functionality is configurable in the ESA generic multi-mission elements and does not require any software coding. The control of the production is more involved. While the interface towards the algorithmic entities is standardized due to the introduction of a generic IPF interface by ESA, the orchestration of the individual IPFs into the overall workflows is distinctly mission-specific and not as amenable to standardization. The ESA MMFI production management system provides extension points to integrate additional logical elements for the build-up of complex orchestrated workflows. These extension points have been used to inject the Swarm-specific production logic into the system. A noteworthy fact about the APDF is that the dissemination elements are hosted in a high bandwidth infrastructure procured as a managed service, thus affording users a considerable access bandwidth. This paper gives an overview of the Swarm APDF data flows. It describes the elements of the solution with particular focus on how the available generic multi-mission functionality of the ESA MMFI was utilized and where there was a need to implement mission-specific extensions and plug-ins. The paper concludes with some statistics on the system output during commissioning and early operational phases as well as some general considerations on the utilization of a framework like the ESA MMFI, discussing benefits and pitfalls of the approach.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a satellite-mounted TDI-CCD contains four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation and re-sampling due to the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, a conventional serial processing method takes more than 30 hours for a simulation whose resulting image size is 1500 × 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF [1], that uses a client/server (C/S) layer and invokes free CPU resources in the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, yielding HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, this framework can provide essentially unlimited computation capacity, provided that the network and the task management server can support it, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
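The short Python sketch below illustrates the strategy-pattern idea only (interchangeable degradation stages behind one driver); the actual framework is WCF-based and distributed, so the communication layer is omitted and all names and numbers here are placeholders.

```python
# Conceptual sketch of the strategy-pattern idea described above (the real
# framework is WCF/C#-based and distributed; this only shows how degradation
# stages could be swapped without changing the driver).
import numpy as np


class Stage:
    """Interface for one degradation stage of the imaging chain."""
    def apply(self, image: np.ndarray) -> np.ndarray:
        raise NotImplementedError


class GaussianAtmosphere(Stage):
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def apply(self, image):
        # Cheap stand-in for an atmospheric MTF: low-pass filtering via FFT.
        f = np.fft.fft2(image)
        ny, nx = image.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        h = np.exp(-2 * (np.pi * self.sigma) ** 2 * (fx ** 2 + fy ** 2))
        return np.real(np.fft.ifft2(f * h))


class DetectorNoise(Stage):
    def __init__(self, read_noise=2.0, seed=0):
        self.read_noise = read_noise
        self.rng = np.random.default_rng(seed)

    def apply(self, image):
        return image + self.rng.normal(0.0, self.read_noise, image.shape)


def simulate(image, stages):
    """Driver: applies whichever stage strategies were configured."""
    for stage in stages:
        image = stage.apply(image)
    return image


if __name__ == "__main__":
    scene = np.zeros((64, 64))
    scene[30:34, 30:34] = 100.0
    result = simulate(scene, [GaussianAtmosphere(sigma=1.5), DetectorNoise()])
    print(result.shape, round(result.mean(), 3))
```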
A novel configurable VLSI architecture design of window-based image processing method
NASA Astrophysics Data System (ADS)
Zhao, Hui; Sang, Hongshi; Shen, Xubang
2018-03-01
Most window-based image processing architectures can only implement a specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume more logic resources. For the above problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and takes the image boundary into account. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces the overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in a pipeline, which supports a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is similar to some and superior to others; in particular, compared with the systolic array processor CWP, this structure achieves approximately a 12.9% speed increase at the same frequency. The proposed parallel VLSI architecture was implemented with SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the different window-based algorithms mapped to the structure.
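A software analogue may clarify what "configurable scalar and reduction functions" means: the hedged Python sketch below drives one sliding-window loop and plugs in different scalar/reduction pairs to obtain a box blur or a median filter. It is not the VLSI design, and the border handling shown (edge replication) merely stands in for the overlap-and-flush technique.

```python
# Software analogue (not the VLSI design) of a configurable window operator:
# the same sliding-window driver supports different scalar/reduction functions,
# so 2D convolution and median filtering share one structure.
import numpy as np


def window_op(image, kernel_size, scalar_fn, reduce_fn):
    """Apply scalar_fn to each window, then reduce_fn over the window."""
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")      # simple border handling
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + kernel_size, x:x + kernel_size]
            out[y, x] = reduce_fn(scalar_fn(win))
    return out


if __name__ == "__main__":
    img = np.arange(36, dtype=float).reshape(6, 6)

    # 3x3 box blur: scalar function weights the window, reduction sums it.
    box = np.full((3, 3), 1.0 / 9.0)
    blurred = window_op(img, 3, lambda w: w * box, np.sum)

    # 3x3 median filter: identity scalar function, median reduction.
    med = window_op(img, 3, lambda w: w, np.median)

    print(blurred[2, 2], med[2, 2])
```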
Throughput Benefit Assessment for Tactical Runway Configuration Management (TRCM)
NASA Technical Reports Server (NTRS)
Phojanamongkolkij, Nipa; Oseguera-Lohr, Rosa M.; Lohr, Gary W.; Fenbert, James W.
2014-01-01
The System-Oriented Runway Management (SORM) concept is a collection of needed capabilities focused on a more efficient use of runways while considering all of the factors that affect runway use. Tactical Runway Configuration Management (TRCM), one of the SORM capabilities, provides runway configuration and runway usage recommendations, monitoring the active runway configuration for suitability given existing factors, based on a 90 minute planning horizon. This study evaluates the throughput benefits using a representative sample of today's traffic volumes at three airports: Memphis International Airport (MEM), Dallas-Fort Worth International Airport (DFW), and John F. Kennedy International Airport (JFK). Based on this initial assessment, there are statistical throughput benefits for both arrivals and departures at MEM with an average of 4% for arrivals, and 6% for departures. For DFW, there is a statistical benefit for arrivals with an average of 3%. Although there is an average of 1% benefit observed for departures, it is not statistically significant. For JFK, there is a 12% benefit for arrivals, but a 2% penalty for departures. The results obtained are for current traffic volumes and should show greater benefit for increased future demand. This paper also proposes some potential TRCM algorithm improvements for future research. A continued research plan is being worked to implement these improvements and to re-assess the throughput benefit for today and future projected traffic volumes.
Adaptive momentum management for large space structures
NASA Technical Reports Server (NTRS)
Hahn, E.
1987-01-01
Momentum management is discussed for a Large Space Structure (LSS), with the selected structural configuration being the Initial Orbital Configuration (IOC) of the dual keel space station. The external forces considered were gravity gradient and aerodynamic torques. The goal of the momentum management scheme developed is to remove the bias components of the external torques and center the cyclic components of the stored angular momentum. The scheme investigated is adaptive to uncertainties of the inertia tensor and requires only approximate knowledge of the principal moments of inertia. Computational requirements are minimal and should present no implementation problem in a flight-type computer, and the method proposed is shown to be effective in the presence of attitude control bandwidths as low as 0.01 radian/sec.
Adaptive momentum management for the dual keel Space Station
NASA Technical Reports Server (NTRS)
Hopkins, M.; Hahn, E.
1987-01-01
The report discusses momentum management for a large space structure with the structure selected configuration being the Initial Orbital Configuration of the dual-keel Space Station. The external torques considered were gravity gradient and aerodynamic torques. The goal of the momentum management scheme developed is to remove the bias components of the external torques and center the cyclic components of the stored angular momentum. The scheme investigated is adaptive to uncertainties of the inertia tensor and requires only approximate knowledge of principal moments of inertia. Computational requirements are minimal and should present no implementation problem in a flight-type computer. The method proposed is shown to be effective in the presence of attitude control bandwidths as low as 0.01 radian/sec.
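A small numerical illustration of the underlying decomposition (not the adaptive controller itself) is sketched below in Python: an assumed external torque with a bias plus orbital-frequency cyclic content is split into its two parts, showing that the bias drives secular momentum growth while the cyclic part yields only a bounded swing that can be centered. All numbers are made up.

```python
# Illustrative arithmetic only (not the adaptive controller): split a
# gravity-gradient-like external torque into bias and cyclic parts and show
# that the bias term alone drives secular growth of stored momentum.
import numpy as np

orbit_rate = 0.0011          # rad/s, roughly a low-Earth-orbit rate (assumed)
dt = 10.0                    # s
t = np.arange(0.0, 2 * 2 * np.pi / orbit_rate, dt)   # two orbits

torque = 0.5 + 2.0 * np.sin(orbit_rate * t)          # N*m: bias + cyclic (made up)

bias = torque.mean()                                  # estimate of the bias component
cyclic = torque - bias

momentum_total = np.cumsum(torque) * dt               # stored momentum, no management
momentum_cyclic = np.cumsum(cyclic) * dt              # what remains once bias is removed

print(f"estimated bias torque      : {bias:7.3f} N*m")
print(f"momentum after two orbits  : {momentum_total[-1]:10.1f} N*m*s (secular growth)")
print(f"peak cyclic momentum swing : {np.ptp(momentum_cyclic):10.1f} N*m*s (bounded)")
```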
Holistic processing, contact, and the other-race effect in face recognition.
Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle
2014-12-01
Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Lorenzino, Martina; Caminati, Martina; Caudek, Corrado
2018-05-25
One of the most important questions in face perception research is to understand what information is extracted from a face in order to recognize its identity. Recognition of facial identity has been attributed to a special sensitivity to "configural" information. However, recent studies have challenged the configural account by showing that participants are poor in discriminating variations of metric distances among facial features, especially for familiar as opposed to unfamiliar faces, whereas a configural account predicts the opposite. We aimed to extend these previous results by examining classes of unfamiliar faces with which we have different levels of expertise. We hypothesized an inverse relation between sensitivity to configural information and expertise with a given class of faces, but only for neutral expressions. By first matching perceptual discriminability, we measured tolerance to subtle configural transformations with same-race (SR) versus other-race (OR) faces, and with upright versus upside-down faces. Consistently with our predictions, we found a lower sensitivity to at-threshold configural changes for SR compared to OR faces. We also found that, for our stimuli, the face inversion effect disappeared for neutral but not for emotional faces - a result that can also be attributed to a lower sensitivity to configural transformations for faces presented in a more familiar orientation. The present findings question a purely configural account of face processing and suggest that the role of spatial-relational information in face processing varies according to the functional demands of the task and to the characteristics of the stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Development and implementation of a PACS network and resource manager
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.
1992-07-01
Clinical acceptance of PACS is predicated upon maximum uptime. Upon component failure, detection, diagnosis, reconfiguration and repair must occur immediately. Our current PACS network is large, heterogeneous, complex and wide-spread geographically. The overwhelming number of network devices, computers and software processes involved in a departmental or inter-institutional PACS makes development of tools for network and resource management critical. The authors have developed and implemented a comprehensive solution (PACS Network-Resource Manager) using the OSI Network Management Framework with network element agents that respond to queries and commands for network management stations. Managed resources include: communication protocol layers for Ethernet, FDDI and UltraNet; network devices; computer and operating system resources; and application, database and network services. The Network-Resource Manager is currently being used for warning, fault, security violation and configuration modification event notification. Analysis, automation and control applications have been added so that PACS resources can be dynamically reconfigured and so that users are notified when active involvement is required. Custom data and error logging have been implemented that allow statistics for each PACS subsystem to be charted for performance data. The Network-Resource Manager allows our departmental PACS system to be monitored continuously and thoroughly, with a minimal amount of personal involvement and time.
Space Station Freedom power management and distribution system design
NASA Technical Reports Server (NTRS)
Teren, Fred
1989-01-01
The design is described of the Space Station Freedom Power Management and Distribution (PMAD) System. In addition, the significant trade studies which were conducted are described, which led to the current PMAD system configuration.
Integration of snow management practices into a detailed snow pack model
NASA Astrophysics Data System (ADS)
Spandre, Pierre; Morin, Samuel; Lafaysse, Matthieu; Lejeune, Yves; François, Hugues; George-Marcelpoil, Emmanuelle
2016-04-01
The management of snow on ski slopes is a key socio-economic and environmental issue in mountain regions. Indeed the winter sports industry has become a very competitive global market although this economy remains particularly sensitive to weather and snow conditions. The understanding and implementation of snow management in detailed snowpack models is a major step towards a more realistic assessment of the evolution of snow conditions in ski resorts concerning past, present and future climate conditions. Here we describe in a detailed manner the integration of snow management processes (grooming, snowmaking) into the snowpack model Crocus (Spandre et al., Cold Reg. Sci. Technol., in press). The effect of the tiller is explicitly taken into account and its effects on snow properties (density, snow microstructure) are simulated in addition to the compaction induced by the weight of the grooming machine. The production of snow in Crocus is carried out with respect to specific rules and current meteorological conditions. Model configurations and results are described in detail through sensitivity tests of the model of all parameters related to snow management processes. In-situ observations were carried out in four resorts in the French Alps during the 2014-2015 winter season considering for each resort natural, groomed only and groomed plus snowmaking conditions. The model provides realistic simulations of the snowpack properties with respect to these observations. The main uncertainty pertains to the efficiency of the snowmaking process. The observed ratio between the mass of machine-made snow on ski slopes and the water mass used for production was found to be lower than was expected from the literature, in every resort. The model now referred to as "Crocus-Resort" has been proven to provide realistic simulations of snow conditions on ski slopes and may be used for further investigations. Spandre, P., S. Morin, M. Lafaysse, Y. Lejeune, H. François and E. George-Marcelpoil, Integration of snow management processes into a detailed snowpack model, Cold Reg. Sci. Technol., in press.
Cure-in-place process for seals
Hirasuna, Alan R.
1981-01-01
A cure-in-place process which allows a rubber seal element to be deformed to its service configuration before it is cross-linked; because it is still plastic at that point, it does not build up internal stress as a result of the deformation. This provides maximum residual strength to resist the differential pressure. Furthermore, the process allows use of high modulus formulations of the rubber seal element which would otherwise crack if cured and then deformed to the service configuration, resulting in a seal which has better gap bridging capability. Basically, the process involves positioning an uncured seal element in place, deforming it to its service configuration, heating the seal element, curing it in place, and then fully seating the seal.
Dutta, Abhijit; Dowe, Nancy; Ibsen, Kelly N; Schell, Daniel J; Aden, Andy
2010-01-01
Numerous routes are being explored to lower the cost of cellulosic ethanol production and enable large-scale production. One critical area is the development of robust cofermentative organisms to convert the multiple, mixed sugars found in biomass feedstocks to ethanol at high yields and titers without the need for processing to remove inhibitors. Until such microorganisms are commercialized, the challenge is to design processes that exploit the current microorganisms' strengths. This study explored various process configurations tailored to take advantage of the specific capabilities of three microorganisms, Z. mobilis 8b, S. cerevisiae, and S. pastorianus. A technoeconomic study, based on bench-scale experimental data generated by integrated process testing, was completed to understand the resulting costs of the different process configurations. The configurations included whole slurry fermentation with a coculture, and separate cellulose simultaneous saccharification and fermentation (SSF) and xylose fermentations with none, some or all of the water to the SSF replaced with the fermented liquor from the xylose fermentation. The difference between the highest and lowest ethanol cost for the different experimental process configurations studied was $0.27 per gallon ethanol. Separate fermentation of solid and liquor streams with recycle of fermented liquor to dilute the solids gave the lowest ethanol cost, primarily because this option achieved the highest concentrations of ethanol after fermentation. Further studies, using methods similar to ones employed here, can help understand and improve the performance and hence the economics of integrated processes involving enzymes and fermentative microorganisms.
NASA Astrophysics Data System (ADS)
Chen, G. B.; Zhong, Y. K.; Zheng, X. L.; Li, Q. F.; Xie, X. M.; Gan, Z. H.; Huang, Y. H.; Tang, K.; Kong, B.; Qiu, L. M.
2003-12-01
A novel gas-phase inlet configuration in the natural circulation system, used instead of the liquid-phase inlet, is introduced to effectively cool down a cryogenic pump system from room temperature to cryogenic temperatures. The experimental apparatus is illustrated and the test process is described. Heat transfer and pressure drop data during the cool-down process are recorded and portrayed. Compared with the liquid-phase inlet configuration, experimental results demonstrate that natural circulation with the gas-phase inlet configuration is an easier and more controllable way to cool down the pump system and maintain it at cryogenic temperatures.
ERIC Educational Resources Information Center
Johnson, Marcus L.; Lowder, Matthew W.; Gordon, Peter C.
2011-01-01
In 2 experiments, the authors used an eye tracking while reading methodology to examine how different configurations of common noun phrases versus unusual noun phrases (NPs) influenced the difference in processing difficulty between sentences containing object- and subject-extracted relative clauses. Results showed that processing difficulty was…
Response of Seismometer with Symmetric Triaxial Sensor Configuration to Complex Ground Motion
NASA Astrophysics Data System (ADS)
Graizer, V.
2007-12-01
Most instruments used in seismological practice to record ground motion in all directions use three sensors oriented toward North, East and upward. In this standard configuration, horizontal and vertical sensors differ in their construction because gravity acceleration is always applied to a vertical sensor. An alternative, symmetric sensor configuration was first introduced by Galperin (1955) for petroleum exploration. In this arrangement three identical sensors are also positioned orthogonally to each other but are tilted at the same angle of 54.7 degrees to the vertical axis (a triaxial coordinate system balanced on its corner). Records obtained using the symmetric configuration must be rotated into an earth-referenced X, Y, Z coordinate system. A number of recent seismological instruments (e.g., the broadband seismometers Streckeisen STS-2, Trillium of Nanometrics and Cronos of Kinemetrics) use the symmetric sensor configuration. In most seismological studies it is assumed that rotational (rocking and torsion) components of earthquake ground motion are small enough to be neglected. However, examples have recently been shown in which rotational components are significant relative to translational components of motion. The response of pendulums installed in the standard configuration (vertical and two horizontals) to complex input motion that includes rotations has been studied in a number of publications. We consider the response of pendulums in a symmetric sensor configuration to complex input motions including rotations, and the resultant triaxial system response. Possible implications of using the symmetric sensor configuration in strong motion studies are discussed. Considering the benefits of identical design for all three sensors in the symmetric configuration, and as a result the potentially lower cost of a three-component accelerograph, it may be useful for strong motion measurements not requiring high-resolution post-signal processing. The disadvantage of this configuration is that if one of the sensors is not working properly, or there is a misalignment of sensors, all three components are degraded. The symmetric sensor configuration also requires identical processing of each channel, which puts a number of limitations on further processing of strong motion records.
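For readers unfamiliar with the rotation step, the hedged Python sketch below applies the standard Galperin transformation from symmetric U, V, W outputs to earth-referenced X, Y, Z, assuming axes tilted arccos(1/sqrt(3)) from vertical with azimuths 120 degrees apart; actual instruments may use a different channel ordering, sign convention, or per-channel gains.

```python
# Sketch of the standard rotation from symmetric-triaxial (Galperin) sensor
# outputs U, V, W to earth-referenced X, Y, Z components.  Assumes the three
# sensor axes are tilted arccos(1/sqrt(3)) ~= 54.7 deg from vertical with
# azimuths 0, 120 and 240 degrees; real instruments may differ in convention.
import numpy as np

SQ2, SQ3, SQ6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

# Rows express X, Y, Z as combinations of U, V, W (an orthonormal rotation).
GALPERIN_TO_XYZ = np.array([
    [2.0 / SQ6, -1.0 / SQ6, -1.0 / SQ6],
    [0.0,        1.0 / SQ2, -1.0 / SQ2],
    [1.0 / SQ3,  1.0 / SQ3,  1.0 / SQ3],
])


def uvw_to_xyz(uvw):
    """uvw: array of shape (3, n_samples) -> earth-referenced (3, n_samples)."""
    return GALPERIN_TO_XYZ @ np.asarray(uvw)


if __name__ == "__main__":
    # Purely vertical ground motion appears identically on all three sensors.
    uvw = np.array([[1.0], [1.0], [1.0]]) / SQ3
    print(uvw_to_xyz(uvw).ravel())   # ~ [0, 0, 1]
```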
FEDEF: A High Level Architecture Federate Development Framework
2010-09-01
require code changes for operability between HLA specifications. Configuration of federate requirements such as publications, subscriptions, time ... management , and management protocol should occur outside of federate source code, allowing for federate reusability without code modification and re
Rapid Propellant Loading Approach Exploration
2010-11-01
the impact upon ground operations of three configuration options. Ground operations management was addressed through a series of studies performed...and operations management system can enable safe rapid propellant loading operations with limited operator knowledge and involvement. A single
NASA Astrophysics Data System (ADS)
Rynge, M.; Juve, G.; Kinney, J.; Good, J.; Berriman, B.; Merrihew, A.; Deelman, E.
2014-05-01
In this paper, we describe how to leverage cloud resources to generate large-scale mosaics of the galactic plane in multiple wavelengths. Our goal is to generate a 16-wavelength infrared Atlas of the Galactic Plane at a common spatial sampling of 1 arcsec, processed so that the images appear to have been measured with a single instrument. This will be achieved by using the Montage image mosaic engine to process observations from the 2MASS, GLIMPSE, MIPSGAL, MSX and WISE datasets, over a wavelength range of 1 μm to 24 μm, and by using the Pegasus Workflow Management System for managing the workload. When complete, the Atlas will be made available to the community as a data product. We are generating images that cover ±180° in Galactic longitude and ±20° in Galactic latitude, to the extent permitted by the spatial coverage of each dataset. Each image will be 5°x5° in size (including an overlap of 1° with neighboring tiles), resulting in an atlas of 1,001 images. The final size will be about 50 TB. This paper will focus on the computational challenges, solutions, and lessons learned in producing the Atlas. To manage the computation we are using the Pegasus Workflow Management System, a mature, highly fault-tolerant system now in release 4.2.2 that has found wide applicability across many science disciplines. A scientific workflow describes the dependencies between the tasks, and in most cases the workflow is described as a directed acyclic graph, where the nodes are tasks and the edges denote the task dependencies. A defining property of a scientific workflow is that it manages data flow between tasks. Applied to the galactic plane project, each 5° by 5° mosaic is a Pegasus workflow. Pegasus is used to fetch the source images, execute the image mosaicking steps of Montage, and store the final outputs in a storage system. As these workflows are very I/O intensive, care has to be taken when choosing what infrastructure to execute the workflow on. In our setup, we chose to use dynamically provisioned compute clusters running on the Amazon Elastic Compute Cloud (EC2). All our instances use the same base image, which is configured to come up as a master node by default. The master node is a central instance from which the workflow can be managed. Additional worker instances are provisioned and configured to accept work assignments from the master node. The system allows for adding or removing workers in an ad hoc fashion and can be run in large configurations. To date we have performed 245,000 CPU hours of computing and generated 7,029 images totaling 30 TB. With the current setup our runtime would be 340,000 CPU hours for the whole project. Using spot m2.4xlarge instances, the cost would be approximately $5,950. Using faster AWS instances, such as cc2.8xlarge, could potentially decrease the total CPU hours and further reduce the compute costs. The paper will explore these tradeoffs.
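The cost figure quoted above can be reproduced with simple arithmetic; in the sketch below the per-instance spot price is back-solved to match the stated total and is an assumption, not an official AWS rate.

```python
# Back-of-envelope check of the cost figure quoted above; the per-instance
# spot price is an assumption chosen to reproduce the stated ~$5,950, not an
# official AWS rate.
cpu_hours_total = 340_000          # projected CPU-hours for the full Atlas
cores_per_instance = 8             # m2.4xlarge exposes 8 virtual cores
spot_price_per_hour = 0.14         # USD per instance-hour (assumed)

instance_hours = cpu_hours_total / cores_per_instance
estimated_cost = instance_hours * spot_price_per_hour
print(f"{instance_hours:,.0f} instance-hours -> ${estimated_cost:,.0f}")
# 42,500 instance-hours -> $5,950
```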
Engineering Changes in Product Design - A Review
NASA Astrophysics Data System (ADS)
Karthik, K.; Janardhan Reddy, K., Dr
2016-09-01
Changes are fundamental to product development. Engineering changes are unavoidable and can arise at any phase of the product life cycle. Turning market requirements, customer/user feedback, manufacturing constraints, design innovations, etc., into viable products can be accomplished only when product change is managed properly. In the early design cycle, informal changes are accepted; however, changes become formal as their complexity and cost increase and as the product matures. To maximize market share, manufacturers have to manage engineering changes effectively and efficiently by means of configuration control. The paper gives a broad overview of ‘Engineering Change Management’ (ECM) through configuration management and its implications for product design. The aim is to give new researchers an idea and understanding of engineering changes in the product design scenario. This paper elaborates the significant aspects of managing engineering changes and the importance of ECM in a product life cycle.
Are conservation organizations configured for effective adaptation to global change?
Armsworth, Paul R.; Larson, Eric R.; Jackson, Stephen T.; Sax, Dov F.; Simonin, Paul W.; Blossey, Bernd; Green, Nancy; Lester, Liza; Klein, Mary L.; Ricketts, Taylor H.; Runge, Michael C.; Shaw, M. Rebecca
2015-01-01
Conservation organizations must adapt to respond to the ecological impacts of global change. Numerous changes to conservation actions (eg facilitated ecological transitions, managed relocations, or increased corridor development) have been recommended, but some institutional restructuring within organizations may also be needed. Here we discuss the capacity of conservation organizations to adapt to changing environmental conditions, focusing primarily on public agencies and nonprofits active in land protection and management in the US. After first reviewing how these organizations anticipate and detect impacts affecting target species and ecosystems, we then discuss whether they are sufficiently flexible to prepare and respond by reallocating funding, staff, or other resources. We raise new hypotheses about how the configuration of different organizations enables them to protect particular conservation targets and manage for particular biophysical changes that require coordinated management actions over different spatial and temporal scales. Finally, we provide a discussion resource to help conservation organizations assess their capacity to adapt.
How Configuration Management (CM) Can Help Project Teams To Innovate and Communicate
NASA Technical Reports Server (NTRS)
Cioletti, Louis
2009-01-01
Traditionally, CM is relegated to a support role in project management activities. CM's traditional functions of identification, change control, status accounting, and audits/verification are still necessary and play a vital role. However, this presentation proposes CM's role in a new and innovative manner that will significantly improve communication throughout the organization and, in turn, augment the project's success. CM's new role is elevated to the project management level, above the engineering or sub-project level in the Work Breakdown Structure (WBS), where it can more effectively accommodate changes, reduce corrective actions, and ensure that requirements are clear, concise, and valid, and that results conform to the requirements. By elevating CM's role in project management and orchestrating new measures, a new communication will emerge that will improve information integrity, structured baselines, interchangeability/traceability, metrics, conformance to standards, and standardize the best practices in the organization. Overall project performance (schedule, quality, and cost) can be no better than the ability to communicate requirements which, in turn, is no better than the CM process to communicate project decisions and the correct requirements.
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
Spatial configuration and distribution of forest patches in Champaign County, Illinois: 1940 to 1993
J. Danilo Chinea
1997-01-01
Spatial configuration and distribution of landscape elements have implications for the dynamics of forest ecosystems, and, therefore, for the management of these resources. The forest cover of Champaign County, in east-central Illinois, was mapped from 1940 and 1993 aerial photography and entered in a geographical information system database. In 1940, 208 forest...
Hydrophilic structures for condensation management in appliances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuehl, Steven John; Vonderhaar, John J.; Wu, Guolian
2016-02-02
An appliance that includes a cabinet having an exterior surface; a refrigeration compartment located within the cabinet; and a hydrophilic structure disposed on the exterior surface. The hydrophilic structure is configured to spread condensation. The appliance further includes a wicking structure located in proximity to the hydrophilic structure, and the wicking structure is configured to receive the condensation.
ERIC Educational Resources Information Center
Conkright, Thomas D.; Joliat, Judy
1996-01-01
Discusses the challenges, solutions, and compromises involved in creating computer-delivered training courseware for Apollo Travel Services, a company whose 50,000 agents must access a mainframe from many different computing configurations. Initial difficulties came in trying to manage random access memory and quicken response time, but the future…
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
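As a conceptual illustration of why chaotic maps are useful here (and not as the patented method), the Python sketch below iterates the logistic map on several "nodes" from the same seed and flags the node whose trajectory diverges after a tiny injected error.

```python
# Conceptual sketch (not the patented method): identical nodes iterating the
# same chaotic map from the same seed must agree exactly; a trajectory that
# diverges flags a faulty compute or memory component, because chaotic maps
# amplify even a single-bit error very quickly.
def logistic_trajectory(x0, r=3.99, steps=50, fault_at=None):
    """Iterate x <- r*x*(1-x); optionally inject a tiny fault at one step."""
    x, traj = x0, []
    for i in range(steps):
        x = r * x * (1.0 - x)
        if fault_at is not None and i == fault_at:
            x += 1e-12                       # simulated bit-flip-sized error
        traj.append(x)
    return traj


def first_divergence(ref, other, tol=0.0):
    """Return the first index where two trajectories differ, or None."""
    for i, (a, b) in enumerate(zip(ref, other)):
        if abs(a - b) > tol:
            return i
    return None


if __name__ == "__main__":
    seed = 0.123456789
    reference = logistic_trajectory(seed)              # healthy node
    healthy = logistic_trajectory(seed)                # another healthy node
    faulty = logistic_trajectory(seed, fault_at=20)    # node with injected error

    print("healthy vs reference diverge at:", first_divergence(reference, healthy))
    print("faulty  vs reference diverge at:", first_divergence(reference, faulty))
```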
Intelligent Network-Centric Sensors Development Program
2012-07-31
[Sensor configuration listing (figure/table residue): 360-degree cone image and video sensor configurations in LWIR, MWIR and SWIR for the PFx sensor.] 2. Reasoning Process to Match Sensor Systems to Algorithms. The ontological...effects of coherent imaging because of aberrations. Another reason is the specular nature of active imaging. Both contribute to the nonuniformity
Piepers, Daniel W.; Robbins, Rachel A.
2012-01-01
It is widely agreed that the human face is processed differently from other objects. However there is a lack of consensus on what is meant by a wide array of terms used to describe this “special” face processing (e.g., holistic and configural) and the perceptually relevant information within a face (e.g., relational properties and configuration). This paper will review existing models of holistic/configural processing, discuss how they differ from one another conceptually, and review the wide variety of measures used to tap into these concepts. In general we favor a model where holistic processing of a face includes some or all of the interrelations between features and has separate coding for features. However, some aspects of the model remain unclear. We propose the use of moving faces as a way of clarifying what types of information are included in the holistic representation of a face. PMID:23413184
Morales-Asencio, Jose M; Kaknani-Uttumchandani, Shakira; Cuevas-Fernández-Gallego, Magdalena; Palacios-Gómez, Leopoldo; Gutiérrez-Sequera, José L; Silvano-Arranz, Agustina; Batres-Sicilia, Juan Pedro; Delgado-Romero, Ascensión; Cejudo-Lopez, Ángela; Trabado-Herrera, Manuel; García-Lara, Esteban L; Martin-Santos, Francisco J; Morilla-Herrera, Juan C
2015-10-01
Complex chronic diseases are a challenge for the current configuration of health services. Case management is a service frequently provided for people with chronic conditions, and despite its effectiveness in many outcomes, such as mortality or readmissions, uncertainty remains about the most effective form of team organization, structures and the nature of the interventions. Many processes and outcomes of case management for people with complex chronic conditions cannot be addressed with the information provided by electronic clinical records. Registries are frequently used to deal with this weakness. The aim of this study was to generate a registry-based information system of patients receiving case management to identify their clinical characteristics, their context of care, events identified during their follow-up, interventions developed by case managers and services used. The study was divided into three phases, covering the detection of information needs, the design and its implementation in the health care system, using literature review and expert consensus methods to select variables that would be included in the registry. A total of 102 variables representing structure, processes and outcomes of case management were selected for inclusion in the registry after the consensus phase. A web-based registry with a modular and layered architecture was designed. The framework follows a pattern based on the model-view-controller approach. In the first 6 months after implementation, 102 case managers have entered an average of 6.49 patients each. The registry permits a complete and in-depth analysis of the characteristics of the patients who receive case management, the interventions delivered and some major outcomes such as mortality, readmissions or adverse events. © 2015 John Wiley & Sons, Ltd.
Process control monitoring systems, industrial plants, and process control monitoring methods
Skorpik, James R [Kennewick, WA; Gosselin, Stephen R [Richland, WA; Harris, Joe C [Kennewick, WA
2010-09-07
A system comprises a valve; a plurality of RFID sensor assemblies coupled to the valve to monitor a plurality of parameters associated with the valve; a control tag configured to wirelessly communicate with the respective tags that are coupled to the valve, the control tag being further configured to communicate with an RF reader; and an RF reader configured to selectively communicate with the control tag, the reader including an RF receiver. Other systems and methods are also provided.
Online, On Demand Access to Coastal Digital Elevation Models
NASA Astrophysics Data System (ADS)
Long, J.; Bristol, S.; Long, D.; Thompson, S.
2014-12-01
Process-based numerical models for coastal waves, water levels, and sediment transport are initialized with digital elevation models (DEM) constructed by interpolating and merging bathymetric and topographic elevation data. These gridded surfaces must seamlessly span the land-water interface and may cover large regions where the individual raw data sources are collected at widely different spatial and temporal resolutions. In addition, the datasets are collected from different instrument platforms with varying accuracy and may or may not overlap in coverage. The lack of available tools and difficulties in constructing these DEMs lead scientists to 1) rely on previously merged, outdated, or over-smoothed DEMs; 2) discard more recent data that covers only a portion of the DEM domain; and 3) use inconsistent methodologies to generate DEMs. The objective of this work is to address the immediate need of integrating land and water-based elevation data sources and streamline the generation of a seamless data surface that spans the terrestrial-marine boundary. To achieve this, the U.S. Geological Survey (USGS) is developing a web processing service to format and initialize geoprocessing tasks designed to create coastal DEMs. The web processing service is maintained within the USGS ScienceBase data management system and has an associated user interface. Through the map-based interface, users define a geographic region that identifies the bounds of the desired DEM and a time period of interest. This initiates a query for elevation datasets within federal science agency data repositories. A geoprocessing service is then triggered to interpolate, merge, and smooth the data sources creating a DEM based on user-defined configuration parameters. Uncertainty and error estimates for the DEM are also returned by the geoprocessing service. Upon completion, the information management platform provides access to the final gridded data derivative and saves the configuration parameters for future reference. The resulting products and tools developed here could be adapted to future data sources and projects beyond the coastal environment.
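A minimal sketch of the interpolate-and-merge core of such a geoprocessing step is shown below in Python with synthetic data; the real service additionally handles datum and projection alignment, source weighting, smoothing, and uncertainty estimation, and all names here are hypothetical.

```python
# Minimal sketch of the interpolate-and-merge step only (the actual service
# also handles datum/projection alignment, source weighting, smoothing and
# uncertainty estimates); all data here are synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Synthetic topographic points (elevation > 0) and bathymetric points (< 0).
topo_xy = rng.uniform([0.0, 0.5], [1.0, 1.0], size=(200, 2))
topo_z = 5.0 * (topo_xy[:, 1] - 0.5)                 # gentle onshore slope
bathy_xy = rng.uniform([0.0, 0.0], [1.0, 0.5], size=(200, 2))
bathy_z = -8.0 * (0.5 - bathy_xy[:, 1])              # deepening offshore

# Merge the two surveys into one point cloud spanning the land-water interface.
xy = np.vstack([topo_xy, bathy_xy])
z = np.concatenate([topo_z, bathy_z])

# Interpolate onto a regular DEM grid defined by the user's bounding box.
gx, gy = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
dem = griddata(xy, z, (gx, gy), method="linear")

print("grid shape:", dem.shape, "| elevation range:",
      np.nanmin(dem).round(2), "to", np.nanmax(dem).round(2))
```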
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has risen recently for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and computational requirements for the further mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that an agreement among spectral and spatial resolutions is needed to optimise the flight mission according to every agronomical objective as affected by the size of the smaller object to be discriminated (weed plants or weed patches).
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has risen recently for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and computational requirements for the further mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that an agreement among spectral and spatial resolutions is needed to optimise the flight mission according to every agronomical objective as affected by the size of the smaller object to be discriminated (weed plants or weed patches). PMID:23483997
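The vegetation indices named in this abstract follow standard definitions; the hedged Python sketch below computes them per pixel on synthetic reflectance arrays as a stand-in for calibrated UAV imagery.

```python
# Per-pixel computation of the vegetation indices named above, using their
# standard definitions; the reflectance arrays here are synthetic stand-ins
# for calibrated UAV imagery.
import numpy as np


def excess_green(r, g, b):
    """ExG = 2g - r - b on chromatic (normalized) coordinates."""
    total = r + g + b + 1e-9
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn


def ngrdi(r, g):
    """Normalised Green-Red Difference Index."""
    return (g - r) / (g + r + 1e-9)


def ndvi(red, nir):
    """Normalised Difference Vegetation Index (needs a near-infrared band)."""
    return (nir - red) / (nir + red + 1e-9)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r, g, b = (rng.uniform(0.05, 0.3, (4, 4)) for _ in range(3))
    nir = rng.uniform(0.4, 0.8, (4, 4))      # vegetation is bright in NIR
    mask = ndvi(r, nir) > 0.4                # simple vegetation/soil threshold
    print("ExG mean:", excess_green(r, g, b).mean().round(3))
    print("NGRDI mean:", ngrdi(r, g).mean().round(3))
    print("vegetation pixels:", int(mask.sum()), "of", mask.size)
```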
Fähnrich, C; Denecke, K; Adeoye, O O; Benzler, J; Claus, H; Kirchner, G; Mall, S; Richter, R; Schapranow, M P; Schwarz, N; Tom-Aba, D; Uflacker, M; Poggensee, G; Krause, G
2015-03-26
In the context of controlling the current outbreak of Ebola virus disease (EVD), the World Health Organization claimed that 'critical determinant of epidemic size appears to be the speed of implementation of rigorous control measures', i.e. immediate follow-up of contact persons during 21 days after exposure, isolation and treatment of cases, decontamination, and safe burials. We developed the Surveillance and Outbreak Response Management System (SORMAS) to improve efficiency and timeliness of these measures. We used the Design Thinking methodology to systematically analyse experiences from field workers and the Ebola Emergency Operations Centre (EOC) after successful control of the EVD outbreak in Nigeria. We developed a process model with seven personas representing the procedures of EVD outbreak control. The SORMAS system architecture combines latest In-Memory Database (IMDB) technology via SAP HANA (in-memory, relational database management system), enabling interactive data analyses, and established SAP cloud tools, such as SAP Afaria (a mobile device management software). The user interface consists of specific front-ends for smartphones and tablet devices, which are independent from physical configurations. SORMAS allows real-time, bidirectional information exchange between field workers and the EOC, ensures supervision of contact follow-up, automated status reports, and GPS tracking. SORMAS may become a platform for outbreak management and improved routine surveillance of any infectious disease. Furthermore, the SORMAS process model may serve as framework for EVD outbreak modeling.
Coronado Mondragon, Adrian E; Coronado Mondragon, Christian E; Coronado, Etienne S
2015-01-01
Flexibility and innovation at creating shapes, adapting processes, and modifying materials characterize composites materials, a "high-tech" industry. However, the absence of standard manufacturing processes and the selection of materials with defined properties hinder the configuration of the composites materials supply chain. An interesting alternative for a "high-tech" industry such as composite materials would be to review supply chain lessons and practices in "low-tech" industries such as food. The main motivation of this study is to identify lessons and practices that comprise innovations in the supply chain of a firm in a perceived "low-tech" industry that can be used to provide guidelines in the design of the supply chain of a "high-tech" industry, in this case composite materials. This work uses the case study/site visit with analogy methodology to collect data from a Spanish leading producer of fresh fruit juice which is sold in major European markets and makes use of a cold chain. The study highlights supply base management and visibility/traceability as two elements of the supply chain in a "low-tech" industry that can provide guidelines that can be used in the configuration of the supply chain of the composite materials industry.
The Mutable Nature of Risk and Acceptability: A Hybrid Risk Governance Framework.
Wong, Catherine Mei Ling
2015-11-01
This article focuses on the fluid nature of risk problems and the challenges it presents to establishing acceptability in risk governance. It introduces an actor-network theory (ANT) perspective as a way to deal with the mutable nature of risk controversies and the configuration of stakeholders. To translate this into a practicable framework, the article proposes a hybrid risk governance framework that combines ANT with integrative risk governance, deliberative democracy, and responsive regulation. This addresses a number of the limitations in existing risk governance models, including: (1) the lack of more substantive public participation throughout the lifecycle of a project; (2) hijacking of deliberative forums by particular groups; and (3) the treatment of risk problems and their associated stakeholders as immutable entities. The framework constitutes a five-stage process of co-selection, co-design, co-planning, and co-regulation to facilitate the co-production of collective interests and knowledge, build capacities, and strengthen accountability in the process. The aims of this article are twofold: conceptually, it introduces a framework of risk governance that accounts for the mutable nature of risk problems and configuration of stakeholders. In practice, this article offers risk managers and practitioners of risk governance a set of procedures with which to operationalize this conceptual approach to risk and stakeholder engagement. © 2015 Society for Risk Analysis.
Unified Digital Image Display And Processing System
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.
1981-11-01
Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g. digital radiography, computerized tomography (CT), nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.
2014-11-07
Operations are underway to weigh NASA's Soil Moisture Active Passive, or SMAP, spacecraft in the clean room of the Astrotech payload processing facility on Vandenberg Air Force Base in California. The weighing of a spacecraft is standard procedure during prelaunch processing. SMAP will launch on a Delta II 7320 configuration vehicle featuring a United Launch Alliance first stage booster powered by an Aerojet Rocketdyne RS-27A main engine and three Alliant Techsystems, or ATK, strap-on solid rocket motors. Once on station in Earth orbit, SMAP will provide global measurements of soil moisture and its freeze/thaw state. NASA's Jet Propulsion Laboratory that built the observatory and its radar instrument also is responsible for SMAP project management and mission operations. Launch from Space Launch Complex 2 is targeted for Jan. 29, 2015.
2014-11-07
Preparations are underway to weigh NASA's Soil Moisture Active Passive, or SMAP, spacecraft in the clean room of the Astrotech payload processing facility on Vandenberg Air Force Base in California. The weighing of a spacecraft is standard procedure during prelaunch processing. SMAP will launch on a Delta II 7320 configuration vehicle featuring a United Launch Alliance first stage booster powered by an Aerojet Rocketdyne RS-27A main engine and three Alliant Techsystems, or ATK, strap-on solid rocket motors. Once on station in Earth orbit, SMAP will provide global measurements of soil moisture and its freeze/thaw state. NASA's Jet Propulsion Laboratory that built the observatory and its radar instrument also is responsible for SMAP project management and mission operations. Launch from Space Launch Complex 2 is targeted for Jan. 29, 2015.
2014-11-07
NASA's Soil Moisture Active Passive, or SMAP, spacecraft is lifted from its workstand in the clean room of the Astrotech payload processing facility on Vandenberg Air Force Base in California during operations to determine its weight. The weighing of a spacecraft is standard procedure during prelaunch processing. SMAP will launch on a Delta II 7320 configuration vehicle featuring a United Launch Alliance first stage booster powered by an Aerojet Rocketdyne RS-27A main engine and three Alliant Techsystems, or ATK, strap-on solid rocket motors. Once on station in Earth orbit, SMAP will provide global measurements of soil moisture and its freeze/thaw state. NASA's Jet Propulsion Laboratory that built the observatory and its radar instrument also is responsible for SMAP project management and mission operations. Launch from Space Launch Complex 2 is targeted for Jan. 29, 2015.
Tags, wireless communication systems, tag communication methods, and wireless communications methods
Scott, Jeff W.; Pratt, Richard M. [Richland, WA]
2006-09-12
Tags, wireless communication systems, tag communication methods, and wireless communications methods are described. In one aspect, a tag includes a plurality of antennas configured to receive a plurality of first wireless communication signals comprising data from a reader, a plurality of rectifying circuits coupled with respective individual ones of the antennas and configured to provide rectified signals corresponding to the first wireless communication signals, wherein the rectified signals are combined to produce a composite signal, an adaptive reference circuit configured to vary a reference signal responsive to the composite signal, a comparator coupled with the adaptive reference circuit and the rectifying circuits and configured to compare the composite signal with respect to the reference signal and to output the data responsive to the comparison, and processing circuitry configured to receive the data from the comparator and to process the data.
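The sketch below is an illustrative software analogue of the receive chain described in the claim, not code from the patent: per-antenna rectified signals are combined into a composite, an adaptive reference tracks the composite, and a comparator recovers the data by comparing the two. The smoothing rule and its parameter are assumptions.

```python
# Illustrative sketch only (not the patented circuit): combine rectified
# signals, adapt a reference toward the composite, compare to recover data.
import numpy as np

def demodulate(rectified_signals, alpha=0.05):
    """rectified_signals: array of shape (n_antennas, n_samples)."""
    composite = rectified_signals.sum(axis=0)       # composite of per-antenna rectified signals
    reference = np.zeros(composite.shape)
    ref = float(composite[0])
    for i, sample in enumerate(composite):
        ref = (1.0 - alpha) * ref + alpha * sample  # adaptive reference follows the composite
        reference[i] = ref
    return (composite > reference).astype(int)      # comparator output = recovered data bits
```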
Space construction system analysis study: Project systems and missions descriptions
NASA Technical Reports Server (NTRS)
1979-01-01
Three project systems are defined and summarized. The systems are: (1) a Solar Power Satellite (SPS) Development Flight Test Vehicle configured for fabrication and compatible with solar electric propulsion orbit transfer; (2) an Advanced Communications Platform configured for space fabrication and compatible with low thrust chemical orbit transfer propulsion; and (3) the same Platform, configured to be space erectable but still compatible with low thrust chemical orbit transfer propulsion. These project systems are intended to serve as configuration models for use in detailed analyses of space construction techniques and processes. They represent feasible concepts for real projects; real in the sense that they are realistic contenders on the list of candidate missions currently projected for the national space program. Thus, they represent reasonable configurations upon which to base early studies of alternative space construction processes.
Building configuration and seismic design: The architecture of earthquake resistance
NASA Astrophysics Data System (ADS)
Arnold, C.; Reitherman, R.; Whitaker, D.
1981-05-01
The relationship between a building's architecture and its ability to withstand earthquakes is examined. Aspects of ground motion which are significant to building behavior are discussed. Results of a survey of configuration decisions that affect the performance of buildings are provided, with a focus on the architectural aspects of configuration design. Configuration derivation, building type as it relates to seismic design, and seismic issues in the design process are examined. Case studies of the Veterans' Administration Hospital in Loma Linda, California, and the Imperial Hotel in Tokyo, Japan, are presented. The seismic design process is described, paying special attention to the configuration issues. The need is stressed for guidelines, codes, and regulations to ensure design solutions that respect and balance the full range of architectural, engineering, and material influences on seismic hazards.
NASA Astrophysics Data System (ADS)
Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo
2012-12-01
The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols - namely the additional work required to provide multi-stream transfers for high-performance wide area access, support for third-party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.
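As an illustration of the "different filesystem weights" feature mentioned above, the following sketch shows weighted selection of a destination filesystem for a new replica. The record layout and selection policy are assumptions for illustration, not DPM's actual algorithm.

```python
# Hypothetical sketch: weighted random choice of a filesystem for a new file
# replica; weights and free-space handling are assumptions, not DPM internals.
import random

def choose_filesystem(filesystems):
    """filesystems: list of dicts like {'name': ..., 'weight': ..., 'free_bytes': ...}."""
    candidates = [fs for fs in filesystems if fs['free_bytes'] > 0]
    if not candidates:
        raise ValueError("no filesystem with free space")
    total = sum(fs['weight'] for fs in candidates)
    pick = random.uniform(0, total)
    acc = 0.0
    for fs in candidates:
        acc += fs['weight']
        if pick <= acc:
            return fs['name']
    return candidates[-1]['name']
```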
NASA Astrophysics Data System (ADS)
Kumar, M.; Seyednasrollah, B.; Link, T. E.
2013-12-01
In upland snow-fed forested watersheds, where the majority of melt recharge occurs, there is growing interest among water and forest managers in striking a balance between maximizing forest productivity and minimizing impacts on water resources. Implementation of forest management strategies that involve reduction of forest cover generally results in increased water yield and peak flows from forests, which has potentially detrimental consequences including increased erosion, stream destabilization, water shortages in the late melt season, and degradation of water quality and ecosystem health. These ill effects can be partially negated by implementing, through forest management, optimal gap patterns and vegetation densities that minimize net radiation at the snow-covered forest floor (NRSF). A small NRSF can moderate peak flows and increase water availability late in the melt season. Since forest canopies reduce direct solar (0.28-3.5 μm) radiation but increase longwave (3.5-100 μm) radiation at the snow surface, we identify the optimal vegetation configurations by performing a detailed quantification of the individual radiation components for a range of vegetation densities and gap configurations. We also evaluate the role of site location, topographic setting, local meteorological conditions and vegetation morphological characteristics on the optimal configurations. The results can be used to assist forest managers in quantifying the radiative regime alteration for various thinning and gap-creation scenarios, as a function of latitudinal, topographic, climatic and vegetation characteristics.
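To make the radiation balance concrete, the sketch below is a minimal simplified model, assuming a single canopy-cover fraction acts as a sky-view factor that shades shortwave and adds canopy longwave emission; the parameter values and the two-component treatment are assumptions, not the authors' model.

```python
# Minimal sketch (assumed simplification, not the study's model): net radiation
# at a snow-covered forest floor for canopy cover fraction f in [0, 1].
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def nrsf(sw_in, lw_sky, f, albedo_snow=0.8, t_canopy=270.0, t_snow=268.0,
         emis_canopy=0.97, emis_snow=0.98):
    """Fluxes in W m^-2, temperatures in K; returns net radiation at the snow surface."""
    sw_net = (1.0 - f) * sw_in * (1.0 - albedo_snow)                       # transmitted, absorbed shortwave
    lw_down = f * emis_canopy * SIGMA * t_canopy**4 + (1.0 - f) * lw_sky   # canopy plus sky longwave
    lw_up = emis_snow * SIGMA * t_snow**4                                  # snow surface emission
    return sw_net + lw_down - lw_up
```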
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-12
... Organization and provided application support and information technology services supporting the subject firm..., including on-site leased workers from Kelly Services and Cognizant Technology Solutions, Shelton... Processing Group and Systems Configuration Organization, Including On-Site Leased Workers From Kelly Services...
Landscape ecology and forest management
Thomas R. Crow
1999-01-01
Almost all forest management activities affect landscape pattern to some extent. Among the most obvious impacts are those associated with forest harvesting and road building. These activities profoundly affect the size, shape, and configuration of patches in the landscape matrix. Even-age management such as clearcutting has been applied in blocks of uniform size, shape...
NASA Astrophysics Data System (ADS)
1992-05-01
The function of the Space Station Furnace Facility (SSFF) is to support materials research into the crystal growth and solidification processes of electronic and photonic materials, metals and alloys, and glasses and ceramics. To support this broad base of research requirements, the SSFF will employ a variety of furnace modules which will be operated, regulated, and supported by a core of common subsystems. Furnace modules may be reconfigured or specifically developed to provide unique solidification conditions for each set of experiments. The SSFF modular approach permits the addition of new or scaled-up furnace modules to support the evolution of the facility as new science requirements are identified. The SSFF Core is of modular design to permit augmentation for enhanced capabilities. The fully integrated configuration of the SSFF will consist of three racks with the capability of supporting up to two furnace modules per rack. The initial configuration of the SSFF will consist of two of the three racks and one furnace module. This Experiment/Facility Requirements Document (E/FRD) describes the integrated facility requirements for the Space Station Freedom (SSF) Integrated Configuration-1 (IC1) mission. The IC1 SSFF will consist of two racks: the Core Rack, with the centralized subsystem equipment; and the Experiment Rack-1, with Furnace Module-1 and the distributed subsystem equipment to support the furnace. The SSFF support functions are provided by the following Core subsystems: power conditioning and distribution subsystem (SSFF PCDS); data management subsystem (SSFF DMS); thermal control Subsystem (SSFF TCS); gas distribution subsystem (SSFF GDS); and mechanical structures subsystem (SSFF MSS).
NASA Technical Reports Server (NTRS)
1992-01-01
The function of the Space Station Furnace Facility (SSFF) is to support materials research into the crystal growth and solidification processes of electronic and photonic materials, metals and alloys, and glasses and ceramics. To support this broad base of research requirements, the SSFF will employ a variety of furnace modules which will be operated, regulated, and supported by a core of common subsystems. Furnace modules may be reconfigured or specifically developed to provide unique solidification conditions for each set of experiments. The SSFF modular approach permits the addition of new or scaled-up furnace modules to support the evolution of the facility as new science requirements are identified. The SSFF Core is of modular design to permit augmentation for enhanced capabilities. The fully integrated configuration of the SSFF will consist of three racks with the capability of supporting up to two furnace modules per rack. The initial configuration of the SSFF will consist of two of the three racks and one furnace module. This Experiment/Facility Requirements Document (E/FRD) describes the integrated facility requirements for the Space Station Freedom (SSF) Integrated Configuration-1 (IC1) mission. The IC1 SSFF will consist of two racks: the Core Rack, with the centralized subsystem equipment; and the Experiment Rack-1, with Furnace Module-1 and the distributed subsystem equipment to support the furnace. The SSFF support functions are provided by the following Core subsystems: power conditioning and distribution subsystem (SSFF PCDS); data management subsystem (SSFF DMS); thermal control Subsystem (SSFF TCS); gas distribution subsystem (SSFF GDS); and mechanical structures subsystem (SSFF MSS).
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Site systems engineering fiscal year 1999 multi-year work plan (MYWP) update for WBS 1.8.2.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
GRYGIEL, M.L.
1998-10-08
Manage the Site Systems Engineering process to provide a traceable, integrated, requirements-driven, and technically defensible baseline. Through the Site Integration Group (SIG), Systems Engineering ensures integration of technical activities across all site projects. Systems Engineering's primary interfaces are with the RL Project Managers, the Project Direction Office and the Project Major Subcontractors, as well as with the Site Planning organization. Systems Implementation: (1) Develops, maintains, and controls the site integrated technical baseline, ensures the Systems Engineering interfaces between projects are documented, and maintains the Site Environmental Management Specification. (2) Develops and uses dynamic simulation models for verification of the baseline and analysis of alternatives. (3) Performs and documents functional and requirements analyses. (4) Works with projects, technology management, and the SIG to identify and resolve technical issues. (5) Supports technical baseline information for the planning and budgeting of the Accelerated Cleanup Plan, Multi-Year Work Plans, and Project Baseline Summaries, as well as performance measure reporting. (6) Works with projects to ensure the quality of data in the technical baseline. (7) Develops, maintains and implements the site configuration management system.
Increasing Usability in Ocean Observing Systems
NASA Astrophysics Data System (ADS)
Chase, A. C.; Gomes, K.; O'Reilly, T.
2005-12-01
As observatory systems move to more advanced techniques for instrument configuration and data management, standardized frameworks are being developed to benefit from economies of scale. ACE (A Configuror and Editor) is a tool that was developed for SIAM (Software Infrastructure and Application for MOOS), a framework for the seamless integration of self-describing plug-and-work instruments into the Monterey Ocean Observing System. As a comprehensive solution, the SIAM infrastructure requires a number of processes to be run to configure an instrument for use within its framework. As solutions move from the lab to the field, the steps needed to implement the solution must be made bulletproof so that they may be used in the field with confidence. Loosely defined command line interfaces don't always provide enough user feedback, and business logic can be difficult to maintain over a series of scripts. ACE is a tool developed for guiding the user through a number of complicated steps, removing the reliance on command-line utilities and reducing the difficulty of completing the necessary steps, while also preventing operator error and enforcing system constraints. Utilizing the cross-platform nature of the Java programming language, ACE provides a complete solution for deploying an instrument within the SIAM infrastructure without depending on special software being installed on the user's computer. Requirements such as the installation of a Unix emulator for users running Windows machines, and the installation of, and ability to use, a CVS client, have all been removed by providing the equivalent functionality from within ACE. In order to achieve a "one stop shop" for configuring instruments, ACE had to be written to handle a wide variety of functionality, including: compiling Java code, interacting with a CVS server and maintaining client-side CVS information, editing XML, interacting with a server-side database, and negotiating serial port communications through Java. This paper addresses the relative tradeoffs of including all the aforementioned functionality in a single tool and its effects on user adoption of the framework (SIAM) it provides access to, and further discusses some of the functionality generally pertinent to data management (XML editing, source code management and compilation, etc.).
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
Inverse Analysis to Formability Design in a Deep Drawing Process
NASA Astrophysics Data System (ADS)
Buranathiti, Thaweepat; Cao, Jian
The deep drawing process is an important process that adds value to flat sheet metal in many industries. An important concern in the design of a deep drawing process is generally formability. This paper aims to present the connection between formability and inverse analysis (IA), which is a systematic means for determining an optimal blank configuration for a deep drawing process. In this paper, IA is presented and explored by using a commercial finite element software package. A number of numerical studies on the effect of blank configurations on the quality of a part produced by a deep drawing process were conducted and analyzed. The quality of the drawing processes is numerically analyzed by using an explicit incremental nonlinear finite element code. The minimum distance between the elemental principal strains and the strain-based forming limit curve (FLC) is defined as the tearing margin and used as the key performance index (KPI) indicating the quality of the part. The initial blank configuration was shown to play a highly important role in the quality of the product from the deep drawing process. In addition, it is observed that if a blank configuration does not deviate greatly from the one obtained from IA, the blank can still yield a good product. The strain history around the bottom fillet of the part is also observed. The paper concludes that IA is an important part of the design methodology for deep drawing processes.
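A minimal sketch of how such a tearing-margin KPI might be evaluated is given below, assuming the FLC is supplied as discrete points in (minor, major) strain space; the strain and FLC values are placeholders, not data from the paper.

```python
# Hypothetical sketch: tearing margin as the minimum distance in strain space
# between element principal-strain states and a discretised FLC.
import numpy as np

def tearing_margin(element_strains, flc_points):
    """element_strains, flc_points: arrays of shape (n, 2) = (minor, major) strain."""
    diffs = element_strains[:, None, :] - flc_points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # distance of every element to every FLC point
    return dists.min()                       # smallest margin over all elements

# Example with made-up values:
flc = np.array([[-0.2, 0.35], [0.0, 0.30], [0.2, 0.40]])
strains = np.array([[0.05, 0.12], [-0.10, 0.20]])
print(tearing_margin(strains, flc))
```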
Measured energy savings and performance of power-managed personal computers and monitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, B.; Piette, M.A.; Kinney, K.
1996-08-01
Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30W maximum demand for the computer and for the monitor when in a 'sleep' or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the 'As-operated', 'Standardized', and 'Maximum' savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled and about two thirds of monitors successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
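The analysis method described (hours per operating mode combined with measured power per mode, compared against a baseline with power management disabled) reduces to a simple calculation; the sketch below uses placeholder numbers, not measurements from the study.

```python
# Minimal sketch of the mode-hours x mode-power analysis; values are placeholders.
def annual_energy_kwh(hours_per_mode, watts_per_mode):
    """hours_per_mode, watts_per_mode: dicts keyed by 'off', 'low', 'full'."""
    return sum(hours_per_mode[m] * watts_per_mode[m] for m in hours_per_mode) / 1000.0

watts = {'off': 2, 'low': 25, 'full': 120}
managed = {'off': 4000, 'low': 2500, 'full': 2260}   # hours/year in each mode, power management on
baseline = {'off': 4000, 'low': 0, 'full': 4760}     # same schedule with power management disabled
savings = annual_energy_kwh(baseline, watts) - annual_energy_kwh(managed, watts)
print(f"Estimated savings: {savings:.0f} kWh/year")
```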
Camacho, Carlos; Palacios, Sebastián; Sáez, Pedro; Sánchez, Sonia; Potti, Jaime
2014-01-01
Landscape conversion by humans may have detrimental effects on animal populations inhabiting managed ecosystems, but human-altered areas may also provide suitable environments for tolerant species. We investigated the spatial ecology of a highly mobile nocturnal avian species, the red-necked nightjar (Caprimulgus ruficollis), in two contrastingly managed areas in Southwestern Spain to provide management recommendations for species having multiple habitat requirements. Based on habitat use by radiotagged nightjars, we created maps of functional heterogeneity in both areas so that the movements of breeding individuals could be modeled using least-cost path analyses. In both the natural and the managed area, nightjars used remnants of native shrublands as nesting sites, while pinewood patches (either newly planted or natural mature) and roads were selected as roosting and foraging habitats, respectively. Although the fraction of functional habitat was held relatively constant (60.9% vs. 74.1% in the natural and the managed area, respectively), landscape configuration changed noticeably. As a result, least-cost routes (summed linear distances) from nest locations to the nearest roost and foraging sites were three times longer in the natural than in the managed area (mean ± SE: 1356±76 m vs. 439±32 m). It seems likely that the increased proximity of functional habitats in the managed area relative to the natural one underlies the significantly higher abundances of nightjars observed therein, where breeders should travel shorter distances to link together essential resources, thus likely reducing their energy expenditure and mortality risks. Our results suggest that landscape configuration, but not habitat availability, is responsible for the observed differences between the natural and the managed area in the abundance and movements of breeding nightjars, although no effect on body condition was detected. Agricultural landscapes could be moderately managed to preserve small native remnants and to favor the juxtaposition of functional habitats to benefit those farm species relying on patchy resources.
Correlation as a Determinant of Configurational Entropy in Supramolecular and Protein Systems
2015-01-01
For biomolecules in solution, changes in configurational entropy are thought to contribute substantially to the free energies of processes like binding and conformational change. In principle, the configurational entropy can be strongly affected by pairwise and higher-order correlations among conformational degrees of freedom. However, the literature offers mixed perspectives regarding the contributions that changes in correlations make to changes in configurational entropy for such processes. Here we take advantage of powerful techniques for simulation and entropy analysis to carry out rigorous in silico studies of correlation in binding and conformational changes. In particular, we apply information-theoretic expansions of the configurational entropy to well-sampled molecular dynamics simulations of a model host–guest system and the protein bovine pancreatic trypsin inhibitor. The results bear on the interpretation of NMR data, as they indicate that changes in correlation are important determinants of entropy changes for biologically relevant processes and that changes in correlation may either balance or reinforce changes in first-order entropy. The results also highlight the importance of main-chain torsions as contributors to changes in protein configurational entropy. As simulation techniques grow in power, the mathematical techniques used here will offer new opportunities to answer challenging questions about complex molecular systems. PMID:24702693
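For context, the second-order form of the information-theoretic expansion referred to above is commonly written as below; this is the standard textbook expression rather than an equation quoted from the paper, and higher-order correction terms follow the same pattern.

```latex
% Second-order mutual-information expansion of the configurational entropy
% over conformational degrees of freedom x_1, ..., x_n (standard form).
S_{\mathrm{config}} \approx \sum_{i=1}^{n} S(x_i) - \sum_{i<j} I(x_i; x_j),
\qquad I(x_i; x_j) = S(x_i) + S(x_j) - S(x_i, x_j)
```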
Dimitriou, D; Leonard, H C; Karmiloff-Smith, A; Johnson, M H; Thomas, M S C
2015-05-01
Configural processing in face recognition is a sensitivity to the spacing between facial features. It has been argued both that its presence represents a high level of expertise in face recognition, and also that it is a developmentally vulnerable process. We report a cross-syndrome investigation of the development of configural face recognition in school-aged children with autism, Down syndrome and Williams syndrome compared with a typically developing comparison group. Cross-sectional trajectory analyses were used to compare configural and featural face recognition utilising the 'Jane faces' task. Trajectories were constructed linking featural and configural performance either to chronological age or to different measures of mental age (receptive vocabulary, visuospatial construction), as well as the Benton face recognition task. An emergent inversion effect across age for detecting configural but not featural changes in faces was established as the marker of typical development. Children from clinical groups displayed atypical profiles that differed across all groups. We discuss the implications for the nature of face processing within the respective developmental disorders, and how the cross-sectional syndrome comparison informs the constraints that shape the typical development of face recognition. © 2014 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Effect of the Machining Processes on Low Cycle Fatigue Behavior of a Powder Metallurgy Disk
NASA Technical Reports Server (NTRS)
Telesman, J.; Kantzos, P.; Gabb, T. P.; Ghosn, L. J.
2010-01-01
A study has been performed to investigate the effect of various machining processes on the fatigue life of configured low cycle fatigue specimens machined from a NASA-developed LSHR P/M nickel-based disk alloy. Two types of configured specimen geometries were employed in the study. To evaluate the broach machining process, a double-notch geometry was used with both notches machined using broach tooling. EDM-machined notched specimens of the same configuration were tested for comparison purposes. The honing finishing process was evaluated using a center-hole specimen geometry. Comparison testing was again done using EDM-machined specimens of the same geometry. The effects of these machining processes on the resulting surface roughness, residual stress distribution and microstructural damage were characterized and used in an attempt to explain the low cycle fatigue results.
Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform
NASA Astrophysics Data System (ADS)
Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian
2017-04-01
The constellation of observational satellites orbiting the Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge. Sentinel-2 satellites, part of the Copernicus Earth Observation program, are intended for use in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud-based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task can rely on a different software technology (such as GRASS GIS and ESA's SNAP) to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnecting and integrating, throughout the same flow of processing, various well-known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic configuration the software platform runs as a standalone application inside a virtual machine. In this case the computational resources are obviously limited, but it gives an overview of the functionality of the software platform, together with the possibility to define the flow of processing and later execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp. 444-454, (2015).
Making automated computer program documentation a feature of total system design
NASA Technical Reports Server (NTRS)
Wolf, A. W.
1970-01-01
It is pointed out that in large-scale computer software systems, program documents are too often fraught with errors, out of date, poorly written, and sometimes nonexistent in whole or in part. The means are described by which many of these typical system documentation problems were overcome in a large and dynamic software project. A systems approach was employed which encompassed such items as: (1) configuration management; (2) standards and conventions; (3) collection of program information into central data banks; (4) interaction among executive, compiler, central data banks, and configuration management; and (5) automatic documentation. A complete description of the overall system is given.
Integrated cluster management at Manchester
NASA Astrophysics Data System (ADS)
McNab, Andrew; Forti, Alessandra
2012-12-01
We describe an integrated management system, built from third-party, open-source components, used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to the records of what should be running; and cross-references tickets with asset records and per-asset monitoring pages. In addition, scripts which detect problems and automatically remove hosts record these new states in the database, which are available to operators immediately through the same interface as tickets and monitoring.
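As an illustration of deriving network configuration from the asset database, the sketch below generates ISC-dhcpd-style host stanzas and DNS A records from a simple record list; the field names, domain and host data are assumptions, not the site's actual schema.

```python
# Illustrative sketch: derive DHCP host entries and DNS A records from an
# asset database (record layout and values are assumptions).
assets = [
    {"hostname": "node001", "mac": "aa:bb:cc:dd:ee:01", "ip": "10.0.1.1"},
    {"hostname": "node002", "mac": "aa:bb:cc:dd:ee:02", "ip": "10.0.1.2"},
]

def dhcp_entries(assets):
    return "\n".join(
        f"host {a['hostname']} {{ hardware ethernet {a['mac']}; fixed-address {a['ip']}; }}"
        for a in assets
    )

def dns_a_records(assets, domain="example.org"):
    return "\n".join(f"{a['hostname']}.{domain}. IN A {a['ip']}" for a in assets)

print(dhcp_entries(assets))
print(dns_a_records(assets))
```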
DOE Office of Scientific and Technical Information (OSTI.GOV)
RIECK, C.A.
1999-02-23
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.
Agarwal, Vivek; Buttles, John W.; Beaty, Lawrence H.; ...
2016-10-05
In the current competitive energy market, the nuclear industry is committed to lowering operations and maintenance costs and increasing productivity and efficiency while maintaining safe and reliable operation. The present operating model of nuclear power plants depends on large technical staffs, which puts the nuclear industry at a long-term economic disadvantage. Technology can play a key role in nuclear power plant configuration management by offsetting labor costs through automation of manually performed plant activities. The technology being developed, tested, and demonstrated in this paper will enable the continued safe operation of today's fleet of light water reactors by providing the technical means to monitor components that today are only routinely monitored through manual activities. The wireless-enabled valve position indicators that are the subject of this paper provide a valid position indication continuously, rather than only periodically. As a result, real-time (online) availability of valve positions using affordable technologies is vital to plant configuration when compared with long-term labor rates, and provides information that can be used for a variety of plant engineering, maintenance, and management applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Vivek; Buttles, John W.; Beaty, Lawrence H.
In the current competitive energy market, the nuclear industry is committed to lowering operations and maintenance costs and increasing productivity and efficiency while maintaining safe and reliable operation. The present operating model of nuclear power plants depends on large technical staffs, which puts the nuclear industry at a long-term economic disadvantage. Technology can play a key role in nuclear power plant configuration management by offsetting labor costs through automation of manually performed plant activities. The technology being developed, tested, and demonstrated in this paper will enable the continued safe operation of today's fleet of light water reactors by providing the technical means to monitor components that today are only routinely monitored through manual activities. The wireless-enabled valve position indicators that are the subject of this paper provide a valid position indication continuously, rather than only periodically. As a result, real-time (online) availability of valve positions using affordable technologies is vital to plant configuration when compared with long-term labor rates, and provides information that can be used for a variety of plant engineering, maintenance, and management applications.
Graphical User Interface Development and Design to Support Airport Runway Configuration Management
NASA Technical Reports Server (NTRS)
Jones, Debra G.; Lenox, Michelle; Onal, Emrah; Latorella, Kara A.; Lohr, Gary W.; Le Vie, Lisa
2015-01-01
The objective of this effort was to develop a graphical user interface (GUI) for the National Aeronautics and Space Administration's (NASA) System Oriented Runway Management (SORM) decision support tool to support runway management. This tool is expected to be used by traffic flow managers and supervisors in the Airport Traffic Control Tower (ATCT) and Terminal Radar Approach Control (TRACON) facilities.
Synergy with HST and JWST Data Management Systems
NASA Astrophysics Data System (ADS)
Greene, Gretchen; Space Telescope Data Management Team
2014-01-01
The data processing and archive systems for JWST will contain a petabyte of science data, and users will have fast access to the latest calibrations through a variety of new services. With a synergistic approach currently underway in STScI science operations between the Hubble Space Telescope and James Webb Space Telescope data management subsystems (DMS), operational verification is right around the corner. Next year the HST archive will provide scientists with on-demand, fully calibrated data products via the Mikulski Archive for Space Telescopes (MAST), which takes advantage of an upgraded DMS. This enhanced system, developed jointly with the JWST DMS, is based on a new CONDOR distributed processing system capable of reprocessing data using a prioritization queue that runs in the background. A Calibration Reference Data System manages the latest optimal configuration for each scientific instrument pipeline. Science users will be able to search and discover the growing MAST archive of calibrated datasets from these missions, along with the other multiple-mission holdings both local to MAST and available through the Virtual Observatory. JWST data systems will build upon the successes and lessons learned from the HST legacy and move us forward into the next generation of multi-wavelength archive research.
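The background prioritization queue mentioned above can be pictured with a small sketch; the class, dataset names and priority scheme below are assumptions for illustration, not the actual HST/JWST implementation.

```python
# Hypothetical sketch: a background reprocessing queue that hands datasets to
# workers in priority order (lower number = processed first).
import heapq

class ReprocessingQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0                      # tie-breaker preserves insertion order

    def submit(self, dataset_id, priority):
        heapq.heappush(self._heap, (priority, self._counter, dataset_id))
        self._counter += 1

    def next_dataset(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = ReprocessingQueue()
q.submit("hst_obs_001", priority=5)            # routine background reprocessing
q.submit("jwst_obs_042", priority=1)           # on-demand request jumps the queue
print(q.next_dataset())                        # -> jwst_obs_042
```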
Virtual Design of a 4-Bed Molecular Sieve for Exploration
NASA Technical Reports Server (NTRS)
Giesy, Timothy J.; Coker, Robert F.; O'Connor, Brian F.; Knox, James C.
2017-01-01
Simulations of six new 4-Bed Molecular Sieve configurations have been performed using a COMSOL (COMSOL Multiphysics - commercial software) model. The preliminary results show that reductions in desiccant bed size and sorbent bed size when compared to the International Space Station configuration are feasible while still yielding a process that handles at least 4.0 kg of CO2 per day. The results also show that changes to the CO2 sorbent are likewise feasible. Decreasing the bed sizes was found to have very little negative effect on the adsorption process; breakthrough of CO2 in the sorbent bed was observed for two of the configurations, but a small degree of CO2 breakthrough is acceptable, and water breakthrough in the desiccant beds was not observed. Both configurations for which CO2 breakthrough was observed still yield relatively high CO2 efficiency, and future investigations will focus on bed size in order to find the optimum configuration.
Virtual Design of a 4-Bed Molecular Sieve for Exploration
NASA Technical Reports Server (NTRS)
Giesy, Timothy J.; Coker, Robert F.; O'Connor, Brian F.; Knox, James C.
2017-01-01
Simulations of six new 4-Bed Molecular Sieve configurations have been performed using a COMSOL model. The preliminary results show that reductions in desiccant bed size and sorbent bed size when compared to the International Space Station configuration are feasible while still yielding a process that handles at least 4.0 kg/day CO2. The results also show that changes to the CO2 sorbent are likewise feasible. Decreasing the bed sizes was found to have very little negative effect on the adsorption process; breakthrough of CO2 in the sorbent bed was observed for two of the configurations, but water breakthrough in the desiccant beds was not observed. Nevertheless, both configurations for which CO2 breakthrough was observed still yield relatively high CO2 efficiency, and future investigations will focus on bed size in order to find the optimum configuration.
NASA Astrophysics Data System (ADS)
Fales, B. Scott; Shu, Yinan; Levine, Benjamin G.; Hohenstein, Edward G.
2017-09-01
A new complete active space configuration interaction (CASCI) method was recently introduced that uses state-averaged natural orbitals from the configuration interaction singles method (configuration interaction singles natural orbital CASCI, CISNO-CASCI). This method has been shown to perform as well or better than state-averaged complete active space self-consistent field for a variety of systems. However, further development and testing of this method have been limited by the lack of available analytic first derivatives of the CISNO-CASCI energy as well as the derivative coupling between electronic states. In the present work, we present a Lagrangian-based formulation of these derivatives as well as a highly efficient implementation of the resulting equations accelerated with graphical processing units. We demonstrate that the CISNO-CASCI method is practical for dynamical simulations of photochemical processes in molecular systems containing hundreds of atoms.
Fales, B Scott; Shu, Yinan; Levine, Benjamin G; Hohenstein, Edward G
2017-09-07
A new complete active space configuration interaction (CASCI) method was recently introduced that uses state-averaged natural orbitals from the configuration interaction singles method (configuration interaction singles natural orbital CASCI, CISNO-CASCI). This method has been shown to perform as well or better than state-averaged complete active space self-consistent field for a variety of systems. However, further development and testing of this method have been limited by the lack of available analytic first derivatives of the CISNO-CASCI energy as well as the derivative coupling between electronic states. In the present work, we present a Lagrangian-based formulation of these derivatives as well as a highly efficient implementation of the resulting equations accelerated with graphical processing units. We demonstrate that the CISNO-CASCI method is practical for dynamical simulations of photochemical processes in molecular systems containing hundreds of atoms.
Determining Window Placement and Configuration for the Small Pressurized Rover (SPR)
NASA Technical Reports Server (NTRS)
Thompson, Shelby; Litaker, Harry; Howard, Robert
2009-01-01
This slide presentation reviews the process of the evaluation of window placement and configuration for the cockpit of the Lunar Electric Rover (LER). The purpose of the evaluation was to obtain human-in-the-loop data on window placement and configuration for the cockpit of the LER.
NASA Astrophysics Data System (ADS)
Ma, Guolong; Li, Liqun; Chen, Yanbin
2017-06-01
Butt joints of 2 mm thick stainless steel with a 0.5 mm gap were fabricated by dual beam laser welding with the filler wire technique. The wire melting and transfer behaviors with different beam configurations were investigated in detail in a stable liquid bridge mode and an unstable droplet mode. A high-speed video system, assisted by a high-pulse diode laser as an illumination source, was utilized to record the process in real time. The difference in welding stability between single and dual beam laser welding with filler wire was also comparatively studied. In liquid bridge transfer mode, the results indicated that the transfer process and welding stability were disturbed by a "broken-reformed" liquid bridge in the tandem configuration, while they were improved in the side-by-side configuration, which stabilized the molten pool dynamics with a proper fluid pattern, compared to single beam laser welding with filler wire. The droplet transfer period and critical radius were studied in droplet transfer mode. The transfer stability of the side-by-side configuration, with the minimum transfer period and critical droplet size, was better than that of the other two configurations. This was attributed to the action direction and good stability of the resultant force, which were beneficial to the transfer process in this case. The side-by-side configuration showed obvious superiority in improving welding stability in both transfer modes. An acceptable weld bead was successfully generated even in the undesirable droplet transfer mode under the present conditions.
Referent control and motor equivalence of reaching from standing
Tomita, Yosuke; Feldman, Anatol G.
2016-01-01
Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive, condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, within the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802
NASA Technical Reports Server (NTRS)
Wright, Michael R.
1999-01-01
With over two dozen missions since the first in 1986, the Hitchhiker project has a reputation for providing quick-reaction, low-cost flight services for Shuttle Small Payloads Project (SSPP) customers. Despite the successes, several potential improvements in customer payload integration and test (I&T) deserve consideration. This paper presents suggestions to Hitchhiker customers on how to help make the I&T process run smoother. Included are: customer requirements and interface definition, pre-integration test and evaluation, configuration management, I&T overview and planning, problem mitigation, and organizational communication. In this era of limited flight opportunities and new ISO-based requirements, issues such as these have become more important than ever.
Addressing hypertext design and conversion issues
NASA Technical Reports Server (NTRS)
Glusko, Robert J.
1990-01-01
Hypertext is a network of information units connected by relational links. A hypertext system is a configuration of hardware and software that presents a hypertext to users and allows them to manage and access the information that it contains. Hypertext is also a user interface concept that closely supports the ways that people use printed information. Hypertext concepts encourage modularity and the elimination of redundancy in data bases because information can be stored only once but viewed in any appropriate context. Hypertext is such a hot idea because it is an enabling technology in that workstations and personal computers finally provide enough local processing power for hypertext user interfaces.
Low cost fuel cell diffusion layer configured for optimized anode water management
Owejan, Jon P; Nicotera, Paul D; Mench, Matthew M; Evans, Robert E
2013-08-27
A fuel cell comprises a cathode gas diffusion layer, a cathode catalyst layer, an anode gas diffusion layer, an anode catalyst layer and an electrolyte. The diffusion resistance of the anode gas diffusion layer when operated with anode fuel is higher than the diffusion resistance of the cathode gas diffusion layer. The anode gas diffusion layer may comprise filler particles having in-plane platelet geometries and be made of lower cost materials and manufacturing processes than currently available commercial carbon fiber substrates. The diffusion resistance difference between the anode gas diffusion layer and the cathode gas diffusion layer may allow for passive water balance control.
Identity configurations: a new perspective on identity formation in contemporary society.
Schachter, Elli P
2004-02-01
This paper deals with the theoretical construct of "identity configuration." It portrays the different possible ways in which individuals configure the relationship among potentially conflicting identifications in the process of identity formation. In order to explicate these configurations, I analyzed narratives of identity development retold by individuals describing personal identity conflicts that arise within a larger context of sociocultural conflict. Thirty Jewish modern orthodox young adults were interviewed regarding a potentially conflictual identity issue (i.e. their religious and sexual development). Their deliberations, as described in the interviews, were examined, and four different configurations were identified: a configuration based on choice and suppression; an assimilative and synthesizing configuration; a confederacy of identifications; and a configuration based on the thrill of dissonance. The different configurations are illustrated through exemplars, and the possible implications of the concept of "configuration" for identity theory are discussed.