Sample records for software development lifecycle

  1. Idea Paper: The Lifecycle of Software for Scientific Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Anshu; McInnes, Lois C.

The software lifecycle is a well-researched topic that has produced many models to meet the needs of different types of software projects. However, one class of projects, software development for scientific computing, has received relatively little attention from lifecycle researchers. In particular, software for end-to-end computations for obtaining scientific results has received few lifecycle proposals and no formalization of a development model. An examination of development approaches employed by the teams implementing large multicomponent codes reveals a great deal of similarity in their strategies. This idea paper formalizes these related approaches into a lifecycle model for end-to-end scientific application software, featuring loose coupling between submodels for development of infrastructure and scientific capability. We also invite input from stakeholders to converge on a model that captures the complexity of these development processes and provides needed lifecycle guidance to the scientific software community.

  2. Knowledge based system verification and validation as related to automation of space station subsystems: Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.

  3. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. Development of a global approach began in 1982 with the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts emphasize natural language interfaces, expert-system software development associates, and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  4. Information system life-cycle and documentation standards, volume 1

    NASA Technical Reports Server (NTRS)

    Callender, E. David; Steinbacher, Jody

    1989-01-01

    The Software Management and Assurance Program (SMAP) Information System Life-Cycle and Documentation Standards Document describes the Version 4 standard information system life-cycle in terms of processes, products, and reviews. The description of the products includes detailed documentation standards. The standards in this document set can be applied to the life-cycle, i.e., to each phase in the system's development, and to the documentation of all NASA information systems. This provides consistency across the agency as well as visibility into the completeness of the information recorded. An information system is software-intensive, but consists of any combination of software, hardware, and operational procedures required to process, store, or transmit data. This document defines a standard life-cycle model and content for associated documentation.

  5. Computer-aided software development process design

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as a modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
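
    The abstract gives no equations, but the flavor of a system-dynamics lifecycle simulator is easy to sketch: stocks (remaining tasks, latent defects) change under interacting flows (development, error injection, defect detection). The sketch below is a minimal illustration; all stocks, rates, and constants are invented, not taken from SLICS.

      # Minimal system-dynamics sketch of a software life-cycle process.
      # Stocks: remaining tasks and latent defects; flows: development,
      # error injection, and defect detection. Constants are illustrative.
      def simulate(months=24.0, dt=0.25):
          tasks_remaining = 400.0   # tasks left to implement
          latent_defects = 0.0      # injected but undetected defects
          staff = 10.0              # developers on the project
          productivity = 2.0        # tasks per developer-month
          error_rate = 0.5          # defects injected per completed task
          detect_frac = 0.3         # fraction of latent defects found per month
          t = 0.0
          while t < months and tasks_remaining > 0.0:
              dev_flow = min(staff * productivity, tasks_remaining / dt)
              tasks_remaining -= dev_flow * dt
              latent_defects += dev_flow * error_rate * dt
              latent_defects -= latent_defects * detect_frac * dt
              t += dt
          return t, tasks_remaining, latent_defects

      t, rem, defects = simulate()
      print(f"month {t:.1f}: {rem:.0f} tasks left, {defects:.0f} latent defects")

    Coupling the phase parameters to one another (for example, feeding detected defects back into the task stock as rework) is what makes such models "dynamic" in the sense the abstract describes.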

  6. Telescience Resource Kit Software Lifecycle

    NASA Technical Reports Server (NTRS)

    Griner, Carolyn S.; Schneider, Michelle

    1998-01-01

    The challenge of a global operations capability led to the Telescience Resource Kit (TReK) project, an in-house software development project of the Mission Operations Laboratory (MOL) at NASA's Marshall Space Flight Center (MSFC). The TReK system is being developed as an inexpensive comprehensive personal computer- (PC-) based ground support system that can be used by payload users from their home sites to interact with their payloads on board the International Space Station (ISS). The TReK project is currently using a combination of the spiral lifecycle model and the incremental lifecycle model. As with any software development project, there are four activities that can be very time consuming: Software design and development, project documentation, testing, and umbrella activities, such as quality assurance and configuration management. In order to produce a quality product, it is critical that each of these activities receive the appropriate amount of attention. For TReK, the challenge was to lay out a lifecycle and project plan that provides full support for these activities, is flexible, provides a way to deal with changing risks, can accommodate unknowns, and can respond to changes in the environment quickly. This paper will provide an overview of the TReK lifecycle, a description of the project's environment, and a general overview of project activities.

  7. Model of the Product Development Lifecycle.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Sunny L.; Roe, Natalie H.; Wood, Evan

    2015-10-01

While the increased use of Commercial Off-The-Shelf information technology equipment has presented opportunities for improved cost effectiveness and flexibility, the corresponding loss of control over the product's development creates unique vulnerabilities and security concerns. Of particular interest is the possibility of a supply chain attack. A comprehensive model for the lifecycle of hardware and software products is proposed based on a survey of existing literature from academic, government, and industry sources. Seven major lifecycle stages are identified and defined: (1) Requirements, (2) Design, (3) Manufacturing for hardware and Development for software, (4) Testing, (5) Distribution, (6) Use and Maintenance, and (7) Disposal. The model is then applied to examine the risk of attacks at various stages of the lifecycle.
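
    A minimal sketch of the seven-stage model as a data structure, with a likelihood-times-impact ranking of where a supply chain attack would hurt most. The stage names follow the abstract; the numeric scores are invented for illustration, not values from the report.

      from enum import Enum

      class Stage(Enum):
          REQUIREMENTS = 1
          DESIGN = 2
          MANUFACTURING_OR_DEVELOPMENT = 3   # manufacturing (HW) / development (SW)
          TESTING = 4
          DISTRIBUTION = 5
          USE_AND_MAINTENANCE = 6
          DISPOSAL = 7

      # Hypothetical attack likelihood (0-1) and impact (1-10) per stage.
      risk_profile = {
          Stage.REQUIREMENTS: (0.05, 3),
          Stage.DESIGN: (0.10, 7),
          Stage.MANUFACTURING_OR_DEVELOPMENT: (0.30, 9),
          Stage.TESTING: (0.10, 6),
          Stage.DISTRIBUTION: (0.25, 8),
          Stage.USE_AND_MAINTENANCE: (0.15, 8),
          Stage.DISPOSAL: (0.05, 4),
      }

      def riskiest(profile):
          # Rank stages by likelihood x impact.
          return sorted(profile, key=lambda s: profile[s][0] * profile[s][1],
                        reverse=True)

      print([s.name for s in riskiest(risk_profile)[:3]])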

  8. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  9. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

    2012-01-01

Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.
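
    The phase names below are the ones published for the TriBITS lifecycle model; the practice sets gating promotion between levels are an illustrative reading of the model, not a quotation from it.

      from enum import IntEnum

      class Maturity(IntEnum):
          EXPLORATORY = 0
          RESEARCH_STABLE = 1
          PRODUCTION_GROWTH = 2
          PRODUCTION_MAINTENANCE = 3

      # Illustrative practice sets a package must satisfy at each level.
      REQUIRED = {
          Maturity.EXPLORATORY: {"version control"},
          Maturity.RESEARCH_STABLE: {"version control", "regression tests"},
          Maturity.PRODUCTION_GROWTH: {"version control", "regression tests",
                                       "user documentation",
                                       "backward compatibility"},
          Maturity.PRODUCTION_MAINTENANCE: {"version control", "regression tests",
                                            "user documentation",
                                            "backward compatibility",
                                            "portability testing"},
      }

      def may_promote(level: Maturity, practices: set) -> bool:
          # A package may advance one level once it already satisfies the
          # practice set required at the next level.
          if level == Maturity.PRODUCTION_MAINTENANCE:
              return False
          return REQUIRED[Maturity(level + 1)] <= practices

      print(may_promote(Maturity.RESEARCH_STABLE,
                        {"version control", "regression tests",
                         "user documentation", "backward compatibility"}))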

  10. Overview of the TriBITS Lifecycle Model: Lean/Agile Software Lifecycle Model for Research-based Computational Science and Engineering Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A; Heroux, Dr. Michael A; Willenbring, James

    2012-01-01

Software lifecycles are becoming an increasingly important issue for computational science & engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process--respecting the competing needs of research vs. production--cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  11. Evaluation of a Game to Teach Requirements Collection and Analysis in Software Engineering at Tertiary Education Level

    ERIC Educational Resources Information Center

    Hainey, Thomas; Connolly, Thomas M.; Stansfield, Mark; Boyle, Elizabeth A.

    2011-01-01

A highly important part of software engineering education is requirements collection and analysis, which is one of the initial stages of the Database Application Lifecycle and arguably the most important stage of the Software Development Lifecycle. No other conceptual work is as difficult to rectify at a later stage or as damaging to the overall…

  12. A Recommended Framework for the Network-Centric Acquisition Process

    DTIC Science & Technology

    2009-09-01

ISO/IEC 12207, Systems and Software Engineering-Software Life-Cycle Processes; ANSI/EIA 632, Processes for Engineering a System. There are...engineering [46]. Some of the process models presented in the DAG are: ISO/IEC 15288, Systems and Software Engineering-System Life-Cycle Processes...(e.g., ISO, IA, Security, etc.). Vetting developers helps ensure that they are using industry best practices and maximize IA compliance

  13. Ontology for Life-Cycle Modeling of Water Distribution Systems: Model View Definition

    DTIC Science & Technology

    2013-06-01

Research and Development Center, Construction Engineering Research Laboratory (ERDC-CERL) to develop a life-cycle building model have resulted in the definition of a "core" building information model that contains...developed experimental BIM models using commercial off-the-shelf (COTS) software. Those models represent three types of typical low-rise Army

  14. 77 FR 50724 - Developing Software Life Cycle Processes for Digital Computer Software Used in Safety Systems of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes for Digital... Software Life Cycle Processes for Digital Computer Software used in Safety Systems of Nuclear Power Plants... clarifications, the enhanced consensus practices for developing software life-cycle processes for digital...

  15. The integration of the risk management process with the lifecycle of medical device software.

    PubMed

    Pecoraro, F; Luzi, D

    2014-01-01

The application of software in the Medical Device (MD) domain has become central to the improvement of diagnoses and treatments. The new European regulations that specifically address software as an important component of MD require complex procedures to make software compliant with safety requirements, thereby introducing new challenges in the qualification and classification of MD software as well as in the performance of risk management activities. Under this perspective, the aim of this paper is to propose an integrated framework that combines the activities to be carried out by the manufacturer to develop safe software within the development lifecycle, based on the regulatory requirements reported in US and European regulations as well as in the relevant standards and guidelines. A comparative analysis was carried out to identify the main issues related to the application of the new regulations. In addition, standards and guidelines recently released to harmonise procedures for the validation of MD software have been used to define the risk management activities to be carried out by the manufacturer during the software development process. This paper highlights the main issues related to the qualification and classification of MD software, providing an analysis of the different regulations applied in Europe and the US. A model that integrates the risk management process within the software development lifecycle is also proposed. It is based on regulatory requirements and treats software risk analysis as a central input to be managed by the manufacturer from the initial stages of software design, in order to prevent MD failures. Relevant changes in the process of MD development have been introduced with the recognition of software as an important component of MDs, as stated in regulations and standards. This implies highly iterative processes that integrate risk management into the framework of software development. It also makes it necessary to involve both medical and software engineering competences to safeguard patient and user safety.
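
    In outline, the integration the paper argues for amounts to carrying each hazard record through the development phases with its evolving risk estimate and mitigations. The sketch below is loosely in the spirit of ISO 14971-style risk analysis; the fields, scales, and acceptability rule are illustrative assumptions, not the paper's model.

      from dataclasses import dataclass, field

      @dataclass
      class Hazard:
          description: str
          severity: int      # 1 (negligible) .. 5 (catastrophic)
          probability: int   # 1 (improbable) .. 5 (frequent)
          # Entries: (lifecycle phase, measure, re-estimated probability).
          mitigations: list = field(default_factory=list)

          def residual_probability(self) -> int:
              # Each mitigation re-estimates probability; use the latest.
              return self.mitigations[-1][2] if self.mitigations else self.probability

          def acceptable(self, threshold: int = 6) -> bool:
              # Illustrative rule: severity x residual probability < threshold.
              return self.severity * self.residual_probability() < threshold

      h = Hazard("incorrect dose displayed", severity=5, probability=3)
      h.mitigations.append(("design", "independent dose cross-check", 1))
      print(h.acceptable())  # True: 5 * 1 = 5 < 6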

  16. Modeling defect trends for iterative development

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Spanguolo, J. N.

    2003-01-01

    The Employment of Defects (EoD) approach to measuring and analyzing defects seeks to identify and capture trends and phenomena that are critical to managing software quality in the iterative software development lifecycle at JPL.

  17. The application of virtual reality systems as a support of digital manufacturing and logistics

    NASA Astrophysics Data System (ADS)

    Golda, G.; Kampa, A.; Paprocka, I.

    2016-08-01

Modern trends in the development of computer-aided techniques are heading toward integrating the design of competitive products with so-called "digital manufacturing and logistics", supported by computer simulation software. All phases of the product lifecycle, from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers, and after-sale service, up to recycling or disposal, should be aided and managed by advanced product lifecycle management software packages. This paper describes important problems in providing an efficient flow of materials in supply chain management across the whole product lifecycle using computer simulation. The authors pay particular attention to the processes of acquiring the relevant information and correct data necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes possible applications of virtual reality software for modeling and simulating production and logistics processes in an enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, present modeled and programmed examples and solutions, and discuss development trends and application options that extend beyond the enterprise.
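
    As a hedged illustration of the material-flow simulations the paper discusses, a two-stage production line (machining, then assembly) can be modeled with a plain discrete-event queue; the processing times and part count below are invented.

      import heapq
      import random

      # Minimal discrete-event simulation of a two-stage line. Each part is
      # machined, then assembled; times are invented, in minutes.
      random.seed(1)
      N_PARTS = 50
      events = [(0.0, i, "machine") for i in range(N_PARTS)]  # raw parts at t=0
      heapq.heapify(events)
      machine_free = assembly_free = 0.0
      finished = []

      while events:
          t, part, station = heapq.heappop(events)
          if station == "machine":
              start = max(t, machine_free)
              machine_free = start + random.uniform(3, 5)
              heapq.heappush(events, (machine_free, part, "assemble"))
          else:
              start = max(t, assembly_free)
              assembly_free = start + random.uniform(4, 6)
              finished.append(assembly_free)

      print(f"makespan {finished[-1]:.0f} min for {N_PARTS} parts")

    A virtual-reality front end of the kind the authors describe would sit on top of exactly this sort of event stream, animating part movements instead of printing a summary.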

  18. Applications of an OO Methodology and CASE to a DAQ System

    NASA Astrophysics Data System (ADS)

    Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.

The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation, and application deployment. This paper gives an overview of the method, the CASE tool, and the DAQ components that have been developed, and relates our experiences with the method and tool, its integration into our development environment, and the spiral lifecycle it supports.

  19. Support for life-cycle product reuse in NASA's SSE

    NASA Technical Reports Server (NTRS)

    Shotton, Charles

    1989-01-01

    The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.

  20. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product to be developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
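
    COSTMODL's inputs and outputs resemble those of classic parametric estimators. As a hedged illustration (not COSTMODL's actual equations), the basic COCOMO form with Boehm's published organic-mode coefficients shows the kind of computation such a tool performs:

      def basic_cocomo(kloc: float, a=2.4, b=1.05, c=2.5, d=0.38):
          # Basic COCOMO, organic-mode coefficients (Boehm, 1981).
          # Returns (effort in person-months, schedule in months, avg staff).
          effort = a * kloc ** b        # person-months
          schedule = c * effort ** d    # calendar months
          return effort, schedule, effort / schedule

      effort, months, staff = basic_cocomo(32.0)  # hypothetical 32 KLOC product
      print(f"{effort:.0f} PM over {months:.1f} months, ~{staff:.1f} staff")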

  1. Cost Estimation of Software Development and the Implications for the Program Manager

    DTIC Science & Technology

    1992-06-01

Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  2. Full Life-Cycle Defect Management Assessment: Initial Inspection Data Collection Results and Research Questions for Further Study

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Feldmann, Raimund; Haingaertner, Ralf; Regardie, Myrna; Seaman, Carolyn

    2007-01-01

It is often the case in software projects that when schedule and budget resources are limited, the Verification and Validation (V&V) activities suffer. Fewer V&V activities can be afforded and moreover, short-term challenges can result in V&V activities being scaled back or dropped altogether. As a result, too often the default solution is to save activities for improving software quality until too late in the life-cycle, relying on late-term code inspections followed by thorough testing activities to reduce defect counts to acceptable levels. As many project managers realize, however, this is a resource-intensive way of achieving the required quality for software. The Full Life-cycle Defect Management Assessment Initiative, funded by NASA's Office of Safety and Mission Assurance under the Software Assurance Research Program, aims to address these problems by: Improving the effectiveness of early life-cycle V&V activities to make their benefits more attractive to team leads. Specifically, we focus on software inspection, a proven method that can be applied to any software work product, long before executable code has been developed; Better communicating this effectiveness to software development teams, along with suggestions for parameters to improve in the future to increase effectiveness; Analyzing the impact of early life-cycle V&V on the effectiveness and cost required for late life-cycle V&V activities, such as testing, in order to make the tradeoffs more apparent. This white paper reports on an initial milestone in this work, the development of a preliminary model of inspection effectiveness across multiple NASA Centers. This model contributes toward reaching our project goals by: Allowing an examination of inspection parameters, across different types of projects and different work products, for an analysis of factors that impact defect detection effectiveness. Allowing a comparison of this NASA-specific model to existing recommendations in the literature regarding how to plan effective inspections. Forming a baseline model which can be extended to incorporate factors describing: the numbers and types of defects that are missed by inspections; how such defects flow downstream through software development phases; how effectively they can be caught by testing activities in the late stages of development. The model has been implemented in a prototype web-enabled decision-support tool which allows developers to enter their inspection data and receive feedback based on a comparison against the model. The tool also allows users to access reusable materials (such as checklists) from projects included in the baseline. Both the tool itself and the model underlying it will continue to be extended throughout the remainder of this initiative. As results of analyzing inspection effectiveness for defect containment are determined, they can be shared via the tool and also via updates to existing training courses on metrics and software inspections. Moreover, the tool will help satisfy key CMMI requirements for the NASA Centers, as it will enable NASA to take a global view across peer review results for various types of projects to identify systemic problems. This analysis can result in continuous improvements to the approach to verification.
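
    The core comparison such a decision-support tool makes is simple to sketch: compute a team's inspection defect-containment effectiveness and compare it against a baseline model. The numbers below are invented for illustration, not NASA data.

      def effectiveness(found_in_inspection: int, escaped_downstream: int) -> float:
          # Fraction of the defects present at inspection time that the
          # inspection actually caught.
          total = found_in_inspection + escaped_downstream
          return found_in_inspection / total if total else 0.0

      # Invented example: 18 defects found at inspection, 6 escaped to test.
      eff = effectiveness(18, 6)
      baseline = 0.60  # hypothetical cross-project baseline effectiveness
      print(f"effectiveness {eff:.0%} vs baseline {baseline:.0%}")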

  3. ISEES: an institute for sustainable software to accelerate environmental science

    NASA Astrophysics Data System (ADS)

    Jones, M. B.; Schildhauer, M.; Fox, P. A.

    2013-12-01

Software is essential to the full science lifecycle, spanning data acquisition, processing, quality assessment, data integration, analysis, modeling, and visualization. Software runs our meteorological sensor systems, our data loggers, and our ocean gliders. Every aspect of science is impacted by, and improved by, software. Scientific advances ranging from modeling climate change to the sequencing of the human genome have been rendered possible in the last few decades due to the massive improvements in the capabilities of computers to process data through software. This pivotal role of software in science is broadly acknowledged, while simultaneously being systematically undervalued through minimal investments in maintenance and innovation. As a community, we need to embrace the creation, use, and maintenance of software within science, and address problems such as code complexity, openness, reproducibility, and accessibility. We also need to fully develop new skills and practices in software engineering as a core competency in our earth science disciplines, starting with undergraduate and graduate education and extending into university and agency professional positions. The Institute for Sustainable Earth and Environmental Software (ISEES) is being envisioned as a community-driven activity that can facilitate and galvanize activities around scientific software in an analogous way to synthesis centers such as NCEAS and NESCent that have stimulated massive advances in ecology and evolution. We will describe the results of six workshops (Science Drivers, Software Lifecycles, Software Components, Workforce Development and Training, Sustainability and Governance, and Community Engagement) that have been held in 2013 to envision such an institute. We will present community recommendations from these workshops and our strategic vision for how ISEES will address the technical issues in the software lifecycle, sustainability of the whole software ecosystem, and the critical issue of computational training for the scientific community.

  4. Software Engineering Guidebook

    NASA Technical Reports Server (NTRS)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  5. A conceptual model for megaprogramming

    NASA Technical Reports Server (NTRS)

    Tracz, Will

    1990-01-01

Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.

  6. An approach to developing user interfaces for space systems

    NASA Astrophysics Data System (ADS)

    Shackelford, Keith; McKinney, Karen

    1993-08-01

Inherent weaknesses in the traditional waterfall model of software development have led to the definition of the spiral model. The spiral model, however, has not been applied to NASA projects. This paper describes its use in developing real-time user interface software for an Environmental Control and Life Support System (ECLSS) Process Control Prototype at NASA's Marshall Space Flight Center.

  7. Workflow-Based Software Development Environment

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    2013-01-01

The Software Developer's Assistant (SDA) helps software teams more efficiently and accurately conduct or execute software processes associated with NASA mission-critical software. SDA is a process enactment platform that guides software teams through project-specific standards, processes, and procedures. Software projects are decomposed into all of their required process steps or tasks, and each task is assigned to project personnel. SDA orchestrates the performance of work required to complete all process tasks in the correct sequence. The software then notifies team members when they may begin work on their assigned tasks and provides the tools, instructions, reference materials, and supportive artifacts that allow users to compliantly perform the work. A combination of technology components captures and enacts any software process used to support the software lifecycle. It creates an adaptive workflow environment that can be modified as needed. SDA achieves software process automation through a Business Process Management (BPM) approach to managing the software lifecycle for mission-critical projects. It contains five main parts: TieFlow (workflow engine), Business Rules (rules to alter process flow), Common Repository (storage for project artifacts, versions, history, schedules, etc.), SOA (interface to allow internal, GFE, or COTS tools integration), and the Web Portal Interface (a collaborative web environment).

  8. Integrating a flexible modeling framework (FMF) with the network security assessment instrument to reduce software security risk

    NASA Technical Reports Server (NTRS)

    Gilliam, D. P.; Powell, J. D.

    2002-01-01

    This paper presents a portion of an overall research project on the generation of the network security assessment instrument to aid developers in assessing and assuring the security of software in the development and maintenance lifecycles.

  9. Product specification documentation standard and Data Item Descriptions (DID). Volume of the information system life-cycle and documentation standards, volume 3

    NASA Technical Reports Server (NTRS)

    Callender, E. David; Steinbacher, Jody

    1989-01-01

    This is the third of five volumes on Information System Life-Cycle and Documentation Standards which present a well organized, easily used standard for providing technical information needed for developing information systems, components, and related processes. This volume states the Software Management and Assurance Program documentation standard for a product specification document and for data item descriptions. The framework can be applied to any NASA information system, software, hardware, operational procedures components, and related processes.

  10. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  11. Management plan documentation standard and Data Item Descriptions (DID). Volume of the information system life-cycle and documentation standards, volume 2

    NASA Technical Reports Server (NTRS)

    Callender, E. David; Steinbacher, Jody

    1989-01-01

    This is the second of five volumes of the Information System Life-Cycle and Documentation Standards. This volume provides a well-organized, easily used standard for management plans used in acquiring, assuring, and developing information systems and software, hardware, and operational procedures components, and related processes.

  12. DDP - a tool for life-cycle risk management

    NASA Technical Reports Server (NTRS)

    Cornford, S. L.; Feather, M. S.; Hicks, K. A.

    2001-01-01

At JPL we have developed and implemented a process for achieving life-cycle risk management. This process has been embodied in a software tool called Defect Detection and Prevention (DDP). The DDP process can be succinctly stated as: determine where we want to be, what could get in the way, and how we will get there.
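
    In hedged outline, DDP-style reasoning reduces to a small computation: weight the objectives ("where we want to be"), estimate how much each risk detracts from them ("what could get in the way"), and credit each mitigation with the fraction of risk it removes ("how we will get there"). The numbers and the multiplicative risk-reduction rule below are illustrative, not DDP's actual calculus.

      # Objectives with weights; risks with the fraction of each objective
      # they would cost; mitigations with the fraction of each risk removed.
      objectives = {"science return": 0.7, "schedule": 0.3}
      risk_impact = {
          "sensor defect escapes test": {"science return": 0.4},
          "late software delivery": {"schedule": 0.5, "science return": 0.1},
      }
      mitigation_effect = {
          "add unit-level inspections": {"sensor defect escapes test": 0.6},
          "incremental delivery plan": {"late software delivery": 0.5},
      }

      def residual_loss(selected):
          loss = 0.0
          for risk, impacts in risk_impact.items():
              remaining = 1.0
              for m in selected:
                  remaining *= 1.0 - mitigation_effect.get(m, {}).get(risk, 0.0)
              for objective, fraction in impacts.items():
                  loss += objectives[objective] * fraction * remaining
          return loss

      print(round(residual_loss([]), 3))  # 0.5 weighted loss with no action
      print(round(residual_loss(["add unit-level inspections",
                                 "incremental delivery plan"]), 3))  # 0.222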

  13. Evaluating Games-Based Learning

    ERIC Educational Resources Information Center

    Hainey, Thomas; Connolly, Thomas

    2010-01-01

    A highly important part of software engineering education is requirements collection and analysis, one of the initial stages of the Software Development Lifecycle. No other conceptual work is as difficult to rectify at a later stage or as damaging to the overall system if performed incorrectly. As software engineering is a field with a reputation…

  14. Imprinting Community College Computer Science Education with Software Engineering Principles

    ERIC Educational Resources Information Center

    Hundley, Jacqueline Holliday

    2012-01-01

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…

  15. CHIME: A Metadata-Based Distributed Software Development Environment

    DTIC Science & Technology

    2005-01-01

structures by using typography, graphics, and animation. The Software Immersion in our conceptual model for CHIME can be seen as a form of Software...Even small- to medium-sized development efforts may involve hundreds of artifacts -- design documents, change requests, test cases and results, code...for managing and organizing information from all phases of the software lifecycle. CHIME is designed around an XML-based metadata architecture, in

  16. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

The project to automate the management of software production systems is described. The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system have been completed in prototype form. The construction methods are described.

  17. Solar Asset Management Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Aaron; Zviagin, George

Ra Power Management (RPM) has developed a cloud-based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  18. Reuse-Driven Software Processes Guidebook. Version 02.00.03

    DTIC Science & Technology

    1993-11-01

a required system without unduly constraining the details of the solution. The Naval Research Laboratory Software Cost Reduction project developed...conventional manner. The emphasis is still on the development of "one-of-a-kind" systems and the phased completion and review of corresponding...Application Engineering to improve the life-cycle productivity of the total software development enterprise. The

  19. Software Assurance Best Practices for Air Force Weapon and Information Technology Systems - Are We Bleeding

    DTIC Science & Technology

    2008-03-01

in applications is software assurance. There are many subtle variations to the software assurance definition (Goertzel et al., 2007), but the DoD...Gary McGraw (2006), and Thorsten Schneider (2006). Goertzel et al. (2007) lists and compares several security-enhanced software development...detailed by Goertzel et al. is the Microsoft Trustworthy Computing Security Development Lifecycle (SDL)

  1. Core Logistics Capability Policy Applied to USAF Combat Aircraft Avionics Software: A Systems Engineering Analysis

    DTIC Science & Technology

    2010-06-01

cannot make a distinction between software maintenance and development" (Sharma, 2004). ISO/IEC 12207 Software Lifecycle Processes offers a guide to...synopsis of ISO/IEC 12207, Raghu Singh of the Federal Aviation Administration states "Whenever a software product needs modifications, the development...Corporation. Singh, R. (1998). International Standard ISO/IEC 12207 Software Life Cycle Processes. Washington: Federal Aviation Administration. The Joint

  2. CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 1

    DTIC Science & Technology

    2008-01-01

project management and the individual components of the software life-cycle model; it will be awarded for...software professionals that had been formally educated in software project management. The study indicated that our industry is lacking in program managers...software developments get bigger, more complicated, and more dependent on senior software professionals to get the project on the right path

  3. Toward a Formal Model of the Design and Evolution of Software

    DTIC Science & Technology

    1988-12-20

the future. It should have the flexibility to support a variety of design methodologies, be comprehensive enough to encompass the gamut of software lifecycle activities, and be precise enough to provide the

  4. ICW eHealth Framework.

    PubMed

    Klein, Karsten; Wolff, Astrid C; Ziebold, Oliver; Liebscher, Thomas

    2008-01-01

The ICW eHealth Framework (eHF) is a powerful infrastructure and platform for the development of service-oriented solutions in the health care business. It is the culmination of many years of ICW experience in the development and use of in-house health care solutions and represents the foundation of ICW product developments based on the Java Enterprise Edition (Java EE). The ICW eHealth Framework has been leveraged to allow development by external partners, giving adopters a straightforward integration path into ICW solutions. The ICW eHealth Framework consists of reusable software components, development tools, architectural guidelines and conventions defining a full software-development and product lifecycle. From the perspective of a partner, the framework provides services and infrastructure capabilities for integrating applications within an eHF-based solution. This article introduces the ICW eHealth Framework's basic architectural concepts and technologies. It provides an overview of its module and component model, describes the development platform that supports the complete software development lifecycle of health care applications, and outlines technological aspects, mainly focusing on application development frameworks and open standards.

  5. Software Development: A Product Life-Cycle Perspective

    DTIC Science & Technology

    1990-05-01

management came from these magazines and journals: Journal of Advertising Research, Business Marketing, Journal of Systems Management, Journal of Marketing...Johanna. "Price is More Sensitive." Software Magazine, March 1988, 44. Andrews, Kirby. "Communications Imperatives for New Products." Journal of Advertising Research, October

  6. Secure it now or secure it later: the benefits of addressing cyber-security from the outset

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Nutaro, James

    2013-05-01

The majority of funding for research and development (R&D) in cyber-security is focused on the end of the software lifecycle where systems have been deployed or are nearing deployment. Recruiting of cyber-security personnel is similarly focused on end-of-life expertise. By emphasizing cyber-security at these late stages, security problems are found and corrected when it is most expensive to do so, thus increasing the cost of owning and operating complex software systems. Worse, expenditures on expensive security measures often mean less money for innovative developments. These unwanted increases in cost and potential slowing of innovation are unavoidable consequences of an approach to security that finds and remediates faults after software has been implemented. We argue that software security can be improved and the total cost of a software system can be substantially reduced by an appropriate allocation of resources to the early stages of a software project. By adopting a similar allocation of R&D funds to the early stages of the software lifecycle, we propose that the costs of cyber-security can be better controlled and, consequently, the positive effects of this R&D on industry will be much more pronounced.
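
    The economic argument is easy to make concrete with the commonly cited rule of thumb that the cost to fix a fault grows by roughly an order of magnitude between requirements time and post-deployment. The multipliers and fault counts below are invented for illustration:

      # Illustrative cost-to-fix multipliers by phase (relative units).
      cost_multiplier = {"requirements": 1, "design": 3, "implementation": 7,
                         "testing": 15, "deployment": 60}

      # Scenario A: security addressed late; most faults surface after release.
      late = {"testing": 10, "deployment": 30}
      # Scenario B: early-lifecycle security work catches most faults up front.
      early = {"requirements": 15, "design": 15, "testing": 8, "deployment": 2}

      cost = lambda found: sum(cost_multiplier[p] * n for p, n in found.items())
      print("late :", cost(late))    # 10*15 + 30*60 = 1950
      print("early:", cost(early))   # 15*1 + 15*3 + 8*15 + 2*60 = 300

    Even with the same total number of faults (40 in each invented scenario), the late strategy costs over six times as much in these relative units, which is the paper's point about allocating R&D to the early lifecycle stages.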

  7. Success Factors for Using Case Method in Teaching and Learning Software Engineering

    ERIC Educational Resources Information Center

    Razali, Rozilawati; Zainal, Dzulaiha Aryanee Putri

    2013-01-01

    The Case Method (CM) has long been used effectively in Social Science education. Its potential use in Applied Science such as Software Engineering (SE) however has yet to be further explored. SE is an engineering discipline that concerns the principles, methods and tools used throughout the software development lifecycle. In CM, subjects are…

  8. Real-time software failure characterization

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Finelli, George B.

    1990-01-01

    A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on the modeling of a real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.

  9. General object-oriented software development

    NASA Technical Reports Server (NTRS)

    Seidewitz, Edwin V.; Stark, Mike

    1986-01-01

Object-oriented design techniques are gaining increasing popularity for use with the Ada programming language. A general approach to object-oriented design is presented which synthesizes the principles of previous object-oriented methods into the overall software life-cycle, providing transitions from specification to design and from design to code. It therefore provides the basis for a general object-oriented development methodology.

  10. Techniques for development of safety-related software for surgical robots.

    PubMed

    Varley, P

    1999-12-01

    Regulatory bodies require evidence that software controlling potentially hazardous devices is developed to good manufacturing practices. Effective techniques used in other industries assume long timescales and high staffing levels and can be unsuitable for use without adaptation in developing electronic healthcare devices. This paper discusses a set of techniques used in practice to develop software for a particular innovative medical product, an endoscopic camera manipulator. These techniques include identification of potential hazards and tracing their mitigating factors through the project lifecycle.

  11. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman, Carol S.; Benzinger, Leonora; Beshers, George; Hammerslag, David; Kimball, John; Kirslis, Peter A.; Render, Hal; Richards, Paul; Terwilliger, Robert

    1985-01-01

The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. The SAGA system consists of a small number of software components that are adapted by the meta-tools into specific tools for use in the software development application. The modules are designed so that the meta-tools can construct an environment which is both integrated and flexible. The SAGA project is documented in several papers, which are presented.

  12. Distilling Design Patterns From Agile Curation Case Studies

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Lenhardt, W. C.; Young, J. W.

    2016-12-01

In previous work the authors have argued that there is a need to take a new look at the data management lifecycle. Our core argument is that the data management lifecycle needs to be, in essence, deconstructed and rebuilt. As part of this process we also argue that much can be gained from applying ideas, concepts, and principles from agile software development methods. To be sure, we are not arguing for a rote application of these agile software approaches; however, given various trends related to data and technology, it is imperative to update our thinking about how to approach the data management lifecycle, recognize differing project scales, corresponding variations in structure, and alternative models for solving the problems of scientific data curation. In this paper we describe what we term agile curation design patterns, borrowing the concept of design patterns from the software world, and present some initial thoughts on these patterns as informed by a sample of data curation case studies solicited from participants in agile data curation meeting sessions conducted in 2015-16.

  13. Integrated testing and verification system for research flight software

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.

    1979-01-01

The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  14. Software Carpentry In The Hydrological Sciences

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that the scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributors, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice. In 2014, Software Carpentry workshops targeting earth/environmental sciences and hydrological modeling have been organized and run at the Massachusetts Institute of Technology, the US Army Corps of Engineers, the Community Surface Dynamics Modeling System Annual Meeting, and the Earth Science Information Partners Summer Meeting. In this presentation, we will share some of the successes in teaching this material, as well as discuss and present instructional material specific to hydrological modeling.

  15. Assurance specification documentation standard and Data Item Descriptions (DID). Volume of the information system life-cycle and documentation standards, volume 4

    NASA Technical Reports Server (NTRS)

    Callender, E. David; Steinbacher, Jody

    1989-01-01

    This is the fourth of five volumes on Information System Life-Cycle and Documentation Standards. This volume provides a well organized, easily used standard for assurance documentation for information systems and software, hardware, and operational procedures components, and related processes. The specifications are developed in conjunction with the corresponding management plans specifying the assurance activities to be performed.

  16. Management control and status reports documentation standard and Data Item Descriptions (DID). Volume of the information system life-cycle and documentation standards, volume 5

    NASA Technical Reports Server (NTRS)

    Callender, E. David; Steinbacher, Jody

    1989-01-01

This is the fifth of five volumes on Information System Life-Cycle and Documentation Standards. This volume provides a well organized, easily used standard for management control and status reports used in monitoring and controlling the management, development, and assurance of information systems and software, hardware, and operational procedures components, and related processes.

  17. Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1981-01-01

A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
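
    The abstract's note that parameters are "adjusted to fit JPL software life-cycle statistics" suggests an ordinary calibration step. One standard way to calibrate a power-law effort model E = a * S^b is a log-linear least-squares fit over historical (size, effort) pairs; the data points below are invented stand-ins for such statistics.

      import math

      # Invented historical (KLOC, person-month) pairs.
      history = [(5, 14), (12, 38), (20, 70), (45, 180), (80, 340)]

      # Fit log E = log a + b log S by ordinary least squares.
      xs = [math.log(s) for s, _ in history]
      ys = [math.log(e) for _, e in history]
      n = len(history)
      xbar, ybar = sum(xs) / n, sum(ys) / n
      b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
          sum((x - xbar) ** 2 for x in xs)
      a = math.exp(ybar - b * xbar)
      print(f"E = {a:.2f} * S^{b:.2f}")  # calibrated coefficients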

  18. Employing Service Oriented Architecture Technologies to Bind a Thousand Ship Navy

    DTIC Science & Technology

    2008-06-01

critical of the software lifecycle (Pressman, 272). This remains true with SOA technologies. Theoretically, SOA provides a rapid development and...Pressman, R. S., "Software Engineering: A Practitioner's Approach," Fifth Edition, McGraw-Hill, New York, 2001. 4. Space and Naval Warfare Systems Center

  19. Final Report Ra Power Management 1255 10-15-16 FINAL_Public

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Aaron

Ra Power Management (RPM) has developed a cloud-based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  20. Mobile Inquiry Learning in Sweden: Development Insights on Interoperability, Extensibility and Sustainability of the LETS GO Software System

    ERIC Educational Resources Information Center

    Vogel, Bahtijar; Kurti, Arianit; Milrad, Marcelo; Johansson, Emil; Müller, Maximilian

    2014-01-01

    This paper presents the overall lifecycle and evolution of a software system we have developed in relation to the "Learning Ecology through Science with Global Outcomes" (LETS GO) research project. One of the aims of the project is to support "open inquiry learning" using mobile science collaboratories that provide open…

  1. Software Carpentry and the Hydrological Sciences

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.; Farthing, M. W.

    2013-12-01

Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that the scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributors, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice based on the following principles: 1. Write programs for people, not computers. 2. Automate repetitive tasks. 3. Use the computer to record history. 4. Make incremental changes. 5. Use version control. 6. Don't repeat yourself (or others). 7. Plan for mistakes. 8. Optimize software only after it works. 9. Document design and purpose, not mechanics. 10. Collaborate. We discuss how these best practices, arising from solid foundations in research and experience, have been shown to help improve scientists' productivity and the reliability of their software.
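
    Principles 2 and 6 ("automate repetitive tasks", "don't repeat yourself") are the easiest to show in miniature: one parameterized function driven by a loop replaces a copy-pasted per-file script. A hypothetical sketch; the gauges/ directory and the two-column CSV layout are invented for illustration:

      from pathlib import Path

      def total_discharge(path: Path) -> float:
          # Sum the discharge column of one gauge file; stands in for
          # whatever per-file analysis was being copy-pasted. Assumes two
          # comma-separated numeric columns and no header row.
          total = 0.0
          for line in path.read_text().splitlines():
              total += float(line.split(",")[1])
          return total

      # One loop instead of one hand-edited script per gauge file.
      for gauge_file in sorted(Path("gauges").glob("*.csv")):
          print(gauge_file.name, total_discharge(gauge_file))

    Kept under version control (principle 5), a script like this also becomes the recorded history of how the numbers were produced (principle 3).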

  2. Carbon footprint estimator, phase II : volume II - technical appendices.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...

  3. Carbon footprint estimator, phase II : volume I - GASCAP model.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...

  4. System Evaluation and Life-Cycle Cost Analysis of a Commercial-Scale High-Temperature Electrolysis Hydrogen Production Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwin A. Harvego; James E. O'Brien; Michael G. McKellar

    2012-11-01

    Results of a system evaluation and lifecycle cost analysis are presented for a commercial-scale high-temperature electrolysis (HTE) central hydrogen production plant. The plant design relies on grid electricity to power the electrolysis process and system components, and industrial natural gas to provide process heat. The HYSYS process analysis software was used to evaluate the reference central plant design capable of producing 50,000 kg/day of hydrogen. The HYSYS software performs mass and energy balances across all components to allow optimization of the design using a detailed process flow sheet and realistic operating conditions specified by the analyst. The lifecycle cost analysis was performed using the H2A analysis methodology developed by the Department of Energy (DOE) Hydrogen Program. This methodology utilizes Microsoft Excel spreadsheet analysis tools that require detailed plant performance information (obtained from HYSYS), along with financial and cost information, to calculate lifecycle costs. The results of the lifecycle analyses indicate that for a 10% internal rate of return, a large central commercial-scale hydrogen production plant can produce 50,000 kg/day of hydrogen at an average cost of $2.68/kg. When the cost of carbon sequestration is taken into account, the average cost of hydrogen production increases by $0.40/kg to $3.08/kg.
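
    A quick arithmetic check of the figures quoted above; the 90% capacity factor is an assumption chosen purely for illustration, as the abstract does not state one.

        # Sequestration adds $0.40/kg to the $2.68/kg base cost.
        design_rate_kg_day = 50_000
        base_cost, adder = 2.68, 0.40
        print(base_cost + adder)                      # 3.08 $/kg, as quoted
        annual_kg = design_rate_kg_day * 365 * 0.90   # ~16.4 million kg/yr
        print(round(annual_kg * adder))               # ~$6.6M/yr added cost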

  5. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.

  6. Software Security Practices: Integrating Security into the SDLC

    DTIC Science & Technology

    2011-05-01

    Software Security Practices: Integrating Security into the SDLC. Robert A. Martin. HS SEDI is a trademark of the U.S. Department of Homeland Security; the HS SEDI FFRDC is managed and operated by The MITRE Corporation for DHS. …integrating security into a typical software development lifecycle…

  7. Real cost : user manual.

    DOT National Transportation Integrated Search

    2004-05-01

    This manual provides basic instruction for using RealCost, software that was developed by the Federal Highway Administration (FHWA) to support the application of life-cycle cost analysis (LCCA) in the pavement project-level decisionmaking process. Th...

  8. Agile Methods: Selected DoD Management and Acquisition Concerns

    DTIC Science & Technology

    2011-10-01

    Acronym list excerpts: SIDRE, Software Intensive Innovative Development and Reengineering/Evolution; SLIM, Software Lifecycle Management-Estimate; SLOC, source lines of code… Reference excerpts: ISBN 0321502752; Coaching Agile Teams, Lyssa Adkins, ISBN 0321637704; Agile Project Management: Creating Innovative Products, Second Edition, Jim… Accessed July 13, 2011. [Highsmith 2009] Highsmith, J., Agile Project Management: Creating Innovative Products, 2nd ed., Addison-Wesley, 2009.

  9. Using Modified Fagan Inspections to Control Rapid System Development

    NASA Technical Reports Server (NTRS)

    Griesel, M. A.; Welz, L. L.

    1994-01-01

    The Jet Propulsion Laboratory (JPL) has been developing new approaches to software and system development to shorten life cycle time and reduce total life-cycle cost, while maintaining product quality. One such approach has been taken by the Just-In-Time (JIT) Materiel Acquisition System Development Project.

  10. Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle

    NASA Astrophysics Data System (ADS)

    Vinay, S.; Downs, R. R.

    2012-12-01

    Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence have been identified that exist during various times of the data lifecycle. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and should improve understanding of software dependency risks for scientific data and of how they can be reduced during the data life cycle.

  11. Modernization of software quality assurance

    NASA Technical Reports Server (NTRS)

    Bhaumik, Gokul

    1988-01-01

    The customer's satisfaction depends not only on functional performance but also on the quality characteristics of the software products. An examination of this quality aspect of software products will provide a clear, well-defined framework for quality assurance functions, which improve the life-cycle activities of software development. Software developers must be aware of the following aspects, which have been expressed by many quality experts: quality cannot be added on; the level of quality built into a program is a function of the quality attributes employed during the development process; and finally, quality must be managed. These concepts have guided our development of the following definition for a Software Quality Assurance function: Software Quality Assurance is a formal, planned approach of actions designed to evaluate the degree of an identifiable set of quality attributes present in all software systems and their products. This paper is an explanation of how this definition was developed and how it is used.

  12. Gate-to-gate Life-Cycle Inventory of Hardboard Production in North America

    Treesearch

    Richard Bergman

    2014-01-01

    Whole-building life-cycle assessments (LCAs) populated by life-cycle inventory (LCI) data are incorporated into environmental footprint software tools for establishing green building certification by building professionals and code. However, LCI data on some wood building products are still needed to help fill gaps in the data and thus provide a more complete picture...

  13. Planning level assessment of greenhouse gas emissions for alternative transportation construction projects : carbon footprint estimator, phase II, volume I - GASCAP model.

    DOT National Transportation Integrated Search

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...

  14. A Software Safety Risk Taxonomy for Use in Retrospective Safety Cases

    NASA Technical Reports Server (NTRS)

    Hill, Janice L.

    2007-01-01

    Safety standards contain technical and process-oriented safety requirements. The best time to include these requirements is early in the development lifecycle of the system. When software safety requirements are levied on a legacy system after the fact, a retrospective safety case will need to be constructed for the software in the system. This can be a difficult task because there may be few to no artifacts available to show compliance with the software safety requirements. The risks associated with not meeting safety requirements in a legacy safety-critical computer system must be addressed to give confidence for reuse. This paper introduces a proposal for a software safety risk taxonomy for legacy safety-critical computer systems, by specializing the Software Engineering Institute's 'Software Development Risk Taxonomy' with safety elements and attributes.

  15. Software safety - A user's practical perspective

    NASA Technical Reports Server (NTRS)

    Dunn, William R.; Corliss, Lloyd D.

    1990-01-01

    Software safety assurance philosophy and practices at NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software-safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.

  16. Intelligent Tools for Planning Knowledge base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  17. Static and Completion Analysis for Planning Knowledge Base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  18. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  19. Automated Software Development Workstation (ASDW)

    NASA Technical Reports Server (NTRS)

    Fridge, Ernie

    1990-01-01

    Software development is a serious bottleneck in the construction of complex automated systems. An increase of the reuse of software designs and components has been viewed as a way to relieve this bottleneck. One approach to achieving software reusability is through the development and use of software parts composition systems. A software parts composition system is a software development environment comprised of a parts description language for modeling parts and their interfaces, a catalog of existing parts, a composition editor that aids a user in the specification of a new application from existing parts, and a code generator that takes a specification and generates an implementation of a new application in a target language. The Automated Software Development Workstation (ASDW) is an expert system shell that provides the capabilities required to develop and manipulate these software parts composition systems. The ASDW is now in Beta testing at the Johnson Space Center. Future work centers on responding to user feedback for capability and usability enhancement, expanding the scope of the software lifecycle that is covered, and in providing solutions to handling very large libraries of reusable components.

  20. Computing in high-energy physics

    DOE PAGES

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  1. Computing in high-energy physics

    NASA Astrophysics Data System (ADS)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  2. Computing in high-energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mount, Richard P.

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  3. 49 CFR 229.305 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...

  4. 49 CFR 229.305 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...

  5. 49 CFR 229.305 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...

  6. APPLICATION OF THE US DECISION SUPPORT TOOL FOR MATERIALS AND WASTE MANAGEMENT

    EPA Science Inventory

    EPA's National Risk Management Research Laboratory has led the development of a municipal solid waste decision support tool (MSW-DST). The computer software can be used to calculate life-cycle environmental tradeoffs and full costs of different waste management plans or recycling...

  7. Software Development Standard Processes (SDSP)

    NASA Technical Reports Server (NTRS)

    Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.; hide

    2011-01-01

    A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.

  8. Information System Life-Cycle And Documentation Standards (SMAP DIDS)

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Although not a computer program, SMAP DIDS was written to provide a systematic, NASA-wide structure for documenting information system development projects. Each DID (data item description) outlines a document required for top-quality software development. When combined with the management, assurance, and life cycle standards, the Standards protect all parties who participate in the design and operation of a new information system.

  9. Evaluating Managerial Styles for System Development Life Cycle Stages to Ensure Software Project Success

    ERIC Educational Resources Information Center

    Kocherla, Showry

    2012-01-01

    Information technology (IT) projects are considered successful if they are completed on time, within budget, and within scope. Even though the required tools and methodologies are in place, IT projects continue to fail at a high rate. Current literature lacks explanation for success within the stages of the system development life-cycle (SDLC) such…

  10. Software cost/resource modeling: Deep space network software cost estimation model

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. J.

    1980-01-01

    A parametric software cost estimation model prepared for JPL deep space network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Development Center, University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software lifecycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
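
    As a sketch of one ingredient of such parametric models, the staffing curve underlying the Rayleigh-Norden-Putnam approach cited above can be written in a few lines; the K and td values below are purely illustrative, not JPL calibration data.

        import math

        def rayleigh_staffing(t, K, td):
            """Rayleigh-Norden staffing rate at time t, for total lifecycle
            effort K (person-months) and time of peak staffing td (months)."""
            a = 1.0 / (2.0 * td ** 2)
            return 2.0 * K * a * t * math.exp(-a * t ** 2)

        K, td = 200.0, 12.0   # hypothetical project: 200 PM, peak at month 12
        for month in (3, 6, 12, 24):
            print(month, round(rayleigh_staffing(month, K, td), 1))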

  11. Agile development approach for the observatory control software of the DAG 4m telescope

    NASA Astrophysics Data System (ADS)

    Güçsav, B. Bülent; Çoker, Deniz; Yeşilyaprak, Cahit; Keskin, Onur; Zago, Lorenzo; Yerli, Sinan K.

    2016-08-01

    The Observatory Control Software for the upcoming 4m infrared telescope of DAG (Eastern Anatolian Observatory, in Turkish) is at the beginning of its lifecycle. After eliciting and validating the initial requirements, we have focused on preparing a rapid conceptual design, not only to see the big picture of the system but also to clarify the further development methodology. The existing preliminary designs for both software (including the TCS and the active optics control system) and hardware are presented here in brief to highlight the challenges the DAG software team has been facing. The potential benefits of an agile approach to the development are discussed in light of the published experience of the community and the resources available to us.

  12. Making Use of a Decade of Widely Varying Historical Data: SARP Project - "Full Life-Cycle Defect Management"

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Godfrey, Sally; Bechtel, Andre; Feldmann, Raimund L.; Regardie, Myrna; Seaman, Carolyn

    2008-01-01

    A viewgraph presentation describing the NASA Software Assurance Research Program (SARP) project, with a focus on full life-cycle defect management, is provided. The topics include: defect classification, data set and algorithm mapping, inspection guidelines, and tool support.

  13. Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.; Semmel, Glenn S.

    2008-01-01

    The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
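
    A hypothetical sketch of the translation idea, emitting an instruction-list-style rung from a boolean specification; the spec format and mnemonics here are invented for illustration and are not the KSC tool's actual notation.

        def emit_rung(output, and_terms):
            """One rung: OUTPUT is energized when all AND terms are true."""
            lines = [f"LD  {and_terms[0]}"]
            lines += [f"AND {t}" for t in and_terms[1:]]
            lines.append(f"OUT {output}")
            return "\n".join(lines)

        # Toy spec: open the valve only when all interlocks are satisfied.
        spec = {"VALVE_OPEN": ["PRESSURE_OK", "NOT_ESTOP", "CMD_OPEN"]}
        for out, terms in spec.items():
            print(emit_rung(out, terms))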

  14. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
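
    A hypothetical sketch of what an identification tree might look like; the node names, OR semantics, and matching rule are invented for illustration and are not AutSEC's actual data structures.

        from dataclasses import dataclass, field

        @dataclass
        class ThreatNode:
            name: str                     # threat or precondition label
            children: list = field(default_factory=list)

            def matches(self, design_elements):
                """Leaf: present in the design. Inner node: any child
                matches (OR semantics, chosen here for simplicity)."""
                if not self.children:
                    return self.name in design_elements
                return any(c.matches(design_elements) for c in self.children)

        tampering = ThreatNode("tampering", [
            ThreatNode("unauthenticated data flow"),
            ThreatNode("writable shared data store"),
        ])
        print(tampering.matches({"unauthenticated data flow"}))  # True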

  15. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  16. RICIS Symposium 1988

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Integrated Environments for Large, Complex Systems is the theme for the RICIS symposium of 1988. Distinguished professionals from industry, government, and academia have been invited to participate and present their views and experiences regarding research, education, and future directions related to this topic. Within RICIS, more than half of the research being conducted is in the area of Computer Systems and Software Engineering. The focus of this research is on the software development life-cycle for large, complex, distributed systems. Within the education and training component of RICIS, the primary emphasis has been to provide education and training for software professionals.

  17. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  18. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  19. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  20. 49 CFR 236.903 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...

  1. Aspect-Oriented Model-Driven Software Product Line Engineering

    NASA Astrophysics Data System (ADS)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed with aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  2. Questioning the Role of Requirements Engineering in the Causes of Safety-Critical Software Failures

    NASA Technical Reports Server (NTRS)

    Johnson, C. W.; Holloway, C. M.

    2006-01-01

    Many software failures stem from inadequate requirements engineering. This view has been supported both by detailed accident investigations and by a number of empirical studies; however, such investigations can be misleading. It is often difficult to distinguish between failures in requirements engineering and problems elsewhere in the software development lifecycle. Further pitfalls arise from the assumption that inadequate requirements engineering is a cause of all software related accidents for which the system fails to meet its requirements. This paper identifies some of the problems that have arisen from an undue focus on the role of requirements engineering in the causes of major accidents. The intention is to provoke further debate within the emerging field of forensic software engineering.

  3. Quality Attribute Techniques Framework

    NASA Astrophysics Data System (ADS)

    Chiam, Yin Kia; Zhu, Liming; Staples, Mark

    The quality of software is achieved during its development. Development teams use various techniques to investigate, evaluate and control potential quality problems in their systems. These “Quality Attribute Techniques” target specific product qualities such as safety or security. This paper proposes a framework to capture important characteristics of these techniques. The framework is intended to support process tailoring, by facilitating the selection of techniques for inclusion into process models that target specific product qualities. We use risk management as a theory to accommodate techniques for many product qualities and lifecycle phases. Safety techniques have motivated the framework, and safety and performance techniques have been used to evaluate the framework. The evaluation demonstrates the ability of quality risk management to cover the development lifecycle and to accommodate two different product qualities. We identify advantages and limitations of the framework, and discuss future research on the framework.

  4. Cradle-to-Gate Life-Cycle Inventory of Hardboard and Engineered Wood Siding and Trim Produced in North America

    Treesearch

    Richard D. Bergman

    2015-01-01

    Developing wood product LCI data helps construct product LCAs that are then incorporated into whole-building LCAs developed in environmental footprint software such as the Athena Impact Estimator for Buildings (ASMI 2015). Conducting whole-building LCAs provides points toward green building certification in rating systems such as LEED v4, Green Globes, and...

  5. Fly-by-light technology development plan

    NASA Technical Reports Server (NTRS)

    Todd, J. R.; Williams, T.; Goldthorpe, S.; Hay, J.; Brennan, M.; Sherman, B.; Chen, J.; Yount, Larry J.; Hess, Richard F.; Kravetz, J.

    1990-01-01

    The driving factors and developments which make fly-by-light (FBL) technology viable are discussed. Documentation, analyses, and recommendations are provided on the major issues pertinent to facilitating the U.S. implementation of commercial FBL aircraft before the turn of the century. Areas of particular concern include ultra-reliable computing (hardware/software); electromagnetic environment (EME); verification and validation; optical techniques; life-cycle maintenance; and basis and procedures for certification.

  6. Software And Systems Engineering Risk Management

    DTIC Science & Technology

    2010-04-01

    Standards timeline excerpts: RSKM; 2004, COSO Enterprise RSKM Framework; 2006, ISO/IEC 16085 Risk Management Process; 2008, ISO/IEC 12207 Software Lifecycle Processes; 2009, ISO/IEC… Presented by John Walz, VP Technical and Conferences Activities, IEEE Computer Society; Vice-Chair Planning, Software & Systems Engineering Standards Committee, IEEE Computer Society; US TAG to ISO TMB Risk Management Working Group, Systems and Software…

  7. RT-Syn: A real-time software system generator

    NASA Technical Reports Server (NTRS)

    Setliff, Dorothy E.

    1992-01-01

    This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs not only by reducing development time, but more importantly facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which are currently dominating life cycle costs.

  8. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral field spectroscopy over a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper will describe the analysis, design, and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts. At ESO VLT a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, test, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model will be presented and some implementation, integration, and test issues will be discussed.

  9. Software Intensive Systems

    DTIC Science & Technology

    2006-07-01

    Presentation excerpts: Architect, Developer and Platform Evangelism; Microsoft Dynamic Systems Initiative, John Wilson, Architect, Windows Management; Windows Lifecycle…; Aegis, Reuben Pitts & CDR John Ailes, Program Executive Office, Integrated Warfare Systems; Long Term Mine Reconnaissance (LMRS), CAPT Paul Imes; Joint Tactical Radio System (JTRS), Richard North, JPEO JTRS & Leonard Schiavone, MITRE; Single Integrated Air Picture (SIAP), CAPT…

  10. Integrated testing and verification system for research flight software design document

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.

    1979-01-01

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
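
    As an illustration of the kind of static data-flow check described above, the sketch below flags use-before-definition in a toy straight-line program; the program representation is invented for illustration and is unrelated to HAL/S or MUST internals.

        # Each statement: (operation, variable defined, variables used).
        program = [
            ("assign", "a", []),        # a := const
            ("assign", "b", ["a"]),     # b := f(a)
            ("assign", "c", ["d"]),     # anomaly: d used before definition
        ]
        defined = set()
        for op, target, uses in program:
            for var in uses:
                if var not in defined:
                    print(f"anomaly: '{var}' used before being defined")
            defined.add(target)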

  11. Applying Real-Time UML: Real-World Experiences

    NASA Astrophysics Data System (ADS)

    Cooling, Niall; Pachschwoell, Stefan

    2004-06-01

    This paper presents Austrian Aerospace's experiences of applying UML for the design of an embedded real-time avionics system based on Feabhas' "Pragma Process". It describes the complete lifecycle from adoption of UML, through training, CASE-tool selection, system analysis, and software design and development of the project itself. It concludes by reflecting on the experiences obtained and some lessons learnt.

  12. LogiKit - assisting complex logic specification and implementation for embedded control systems

    NASA Astrophysics Data System (ADS)

    Diglio, A.; Nicolodi, B.

    2002-07-01

    LogiKit provides an overall lifecycle solution: a powerful software engineering CASE toolkit for requirements specification, simulation, and documentation. LogiKit also provides an automatic Ada software design, code, and unit test generator.

  13. The Environmental Control and Life Support System (ECLSS) advanced automation project

    NASA Technical Reports Server (NTRS)

    Dewberry, Brandon S.; Carnes, Ray

    1990-01-01

    The objective of the environmental control and life support system (ECLSS) Advanced Automation Project is to influence the design of the initial and evolutionary Space Station Freedom Program (SSFP) ECLSS toward a man-made closed environment in which minimal flight and ground manpower is needed. Another objective is capturing ECLSS design and development knowledge for future missions. Our approach has been to (1) analyze the SSFP ECLSS, (2) envision as our goal a fully automated evolutionary environmental control system - an augmentation of the baseline, and (3) document the advanced software systems, hooks, and scars which will be necessary to achieve this goal. From this analysis, prototype software is being developed and will be tested using air and water recovery simulations and hardware subsystems. In addition, the advanced software is being designed, developed, and tested using an automation software management plan and lifecycle tools. Automated knowledge acquisition, engineering, verification, and testing tools are being used to develop the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the knowledge-based system tool community, and ensure proper visibility of our efforts.

  14. The Need for V&V in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for these activities.

  15. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  16. The Development of a Graphical Notation for the Formal Specification of Software

    DTIC Science & Technology

    1990-12-01

    …the language. A detailed user survey should be performed after the language implementation is complete to determine the effectiveness of the graphical… productivity. There is no better way to improve programmer productivity than to help the programmer avoid performing the work in the first place. This is… an optional prototyping phase is performed) to develop a computer program (2:40). In 1985, Robert Balzer proposed the program transformation lifecycle…

  17. Integrated Modeling Environment

    NASA Technical Reports Server (NTRS)

    Mosier, Gary; Stone, Paul; Holtery, Christopher

    2006-01-01

    The Integrated Modeling Environment (IME) is a software system that establishes a centralized Web-based interface for integrating people (who may be geographically dispersed), processes, and data involved in a common engineering project. The IME includes software tools for life-cycle management, configuration management, visualization, and collaboration.

  18. Implications of Responsive Space on the Flight Software Architecture

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but within the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight quality software. Very rapid response times go even further, and imply little or no new software development, requiring instead using only predeveloped and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually with significant benefits, but it is when they are combined that they can have the greatest impact on Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure, and process elements needed for rapid integration with the Core Flight software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper will discuss the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.

  19. Pragmatic quality metrics for evolutionary software development models

    NASA Technical Reports Server (NTRS)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.
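
    One way to operationalize a rework-oriented metric, as an illustration in the spirit of the paper rather than its exact definition: track the fraction of effort spent modifying existing artifacts per build, and watch the trend across builds.

        # Hypothetical per-build effort data; a rising rework ratio is the
        # kind of "bad trend" the paper's metrics are meant to expose.
        builds = [
            {"build": 1, "total_hours": 400, "rework_hours": 60},
            {"build": 2, "total_hours": 380, "rework_hours": 95},
            {"build": 3, "total_hours": 350, "rework_hours": 70},
        ]
        for b in builds:
            ratio = b["rework_hours"] / b["total_hours"]
            print(f"build {b['build']}: rework ratio = {ratio:.2f}")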

  20. The Production Data Approach for Full Lifecycle Management

    NASA Astrophysics Data System (ADS)

    Schopf, J.

    2012-04-01

    The amount of data generated by scientists is growing exponentially, and studies have shown [Koe04] that un-archived data sets have a resource half-life that is only a fraction of those resources that are electronically archived. Most groups still lack standard approaches and procedures for data management. Arguably, however, scientists know something about building software. A recent article in Nature [Mer10] stated that 45% of research scientists spend more time now developing software than they did 5 years ago, and 38% spent at least 1/5th of their time developing software. Fox argues [Fox10] that a simple release of data is not the correct approach to data curation. In addition, just as software is used in a wide variety of ways never initially envisioned by its developers, we're seeing this even to a greater extent with data sets. In order to address the need for better data preservation and access, we propose that data sets should be managed in a similar fashion to building production quality software. These production data sets are not simply published once, but go through a cyclical process, including phases such as design, development, verification, deployment, support, analysis, and then development again, thereby supporting the full lifecycle of a data set. The process involved in academically-produced software changes over time with respect to issues such as how much it is used outside the development group, but factors in aspects such as knowing who is using the code, enabling multiple developers to contribute to code development with common procedures, formal testing and release processes, developing documentation, and licensing. When we work with data, either as a collection source, as someone tagging data, or someone re-using it, many of the lessons learned in building production software are applicable. Table 1 shows a comparison of production software elements to production data elements.

    Table 1: Comparison of production software and production data.

    Production Software                                 Production Data
    End-user considerations                             End-user considerations
    Multiple coders: repository with check-in           Multiple producers/collectors: local archive
      procedures; coding standards                        with check-in procedure; metadata standards
    Formal testing                                      Formal testing
    Bug tracking and fixes                              Bug tracking and fixes, QA/QC
    Documentation                                       Documentation
    Formal release process                              Formal release process to external archive
    License                                             Citation/usage statement

    The full presentation of this abstract will include a detailed discussion of these issues so that researchers can produce usable and accessible data sets as a first step toward reproducible science. By creating production-quality data sets, we extend the potential of our data, both in terms of usability and usefulness to ourselves and other researchers. The more we treat data with formal processes and release cycles, the more relevant and useful it can be to the scientific community.
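
    A minimal sketch of one "formal release" step from Table 1: fixing a checksum and citation metadata for a versioned data file. The file name and metadata fields are illustrative, not a prescribed schema.

        import hashlib, json, datetime

        def release_record(path, version):
            """Build a release record with an integrity checksum."""
            digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
            return {"file": path, "version": version, "sha256": digest,
                    "released": datetime.date.today().isoformat(),
                    "citation": f"Dataset {path}, v{version}"}

        with open("observations.csv", "w") as f:   # toy data file for the demo
            f.write("site,value\nA,1.2\n")
        print(json.dumps(release_record("observations.csv", "1.0.0"), indent=2))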

  1. Large project experiences with object-oriented methods and reuse

    NASA Technical Reports Server (NTRS)

    Wessale, William; Reifer, Donald J.; Weller, David

    1992-01-01

    The SSVTF (Space Station Verification and Training Facility) project is completing the Preliminary Design Review of a large software development using object-oriented methods and systematic reuse. An incremental development lifecycle was tailored to provide early feedback and guidance on methods and products, with repeated attention to reuse. Object-oriented methods were formally taught and supported by realistic examples. Reuse was readily accepted and planned by the developers. Schedule and budget issues were handled by agreements and work sharing arranged by the developers.

  2. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    NASA Technical Reports Server (NTRS)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Assimilating COSTMODL to any organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
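
    For reference, the Basic COCOMO relationships that COSTMODL incorporates (among its other models) reduce to two formulas; the coefficients below are Boehm's published values, and this sketch is a textbook illustration, not COSTMODL itself.

        COEFFS = {                     # (a, b, c) per project mode
            "organic":       (2.4, 1.05, 0.38),
            "semi-detached": (3.0, 1.12, 0.35),
            "embedded":      (3.6, 1.20, 0.32),
        }

        def basic_cocomo(kloc, mode="organic"):
            a, b, c = COEFFS[mode]
            effort = a * kloc ** b        # person-months
            schedule = 2.5 * effort ** c  # calendar months
            return effort, schedule

        e, m = basic_cocomo(32, "semi-detached")
        print(f"{e:.0f} person-months over {m:.1f} months")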

  3. The Lifecycle of Bayesian Network Models Developed for Multi-Source Signature Assessment of Nuclear Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastelum, Zoe N.; White, Amanda M.; Whitney, Paul D.

    2013-06-04

    The Multi-Source Signatures for Nuclear Programs project, part of Pacific Northwest National Laboratory's (PNNL) Signature Discovery Initiative, seeks to computationally capture expert assessment of multi-type information such as text, sensor output, imagery, or audio/video files, to assess nuclear activities through a series of Bayesian network (BN) models. These models incorporate knowledge from a diverse range of information sources in order to help assess a country's nuclear activities. The models span engineering topic areas, state-level indicators, and facility-specific characteristics. To illustrate the development, calibration, and use of BN models for multi-source assessment, we present a model that predicts a country's likelihood to participate in the international nuclear nonproliferation regime. We validate this model by examining the extent to which it assists non-experts in arriving at conclusions similar to those provided by nuclear proliferation experts. We also describe the PNNL-developed software used throughout the lifecycle of the Bayesian network model development.
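
    As a toy illustration of the modeling approach (not the PNNL models themselves), the sketch below combines two observable indicators into a posterior belief about an unobserved state, assuming conditional independence given that state. The node names and probabilities are entirely invented.

    ```python
    # Two-indicator Bayesian update: P(S | observations), naive-Bayes structure.
    P_S = 0.7  # invented prior: P(state = "participates in the regime")

    P_OBS_GIVEN_S     = {"treaty_signed": 0.9, "open_inspections": 0.8}
    P_OBS_GIVEN_NOT_S = {"treaty_signed": 0.3, "open_inspections": 0.2}

    def posterior(observed: list[str]) -> float:
        """Posterior probability of the state given the observed indicators."""
        like_s, like_not = P_S, 1.0 - P_S
        for o in observed:
            like_s *= P_OBS_GIVEN_S[o]
            like_not *= P_OBS_GIVEN_NOT_S[o]
        return like_s / (like_s + like_not)

    print(f"{posterior(['treaty_signed', 'open_inspections']):.2f}")  # ≈ 0.97
    ```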

  4. Insider Threats in the Software Development Lifecycle

    DTIC Science & Technology

    2014-11-05

    employee, contractor, or other business partner who has or had authorized access to an organization's network, system, or data and intentionally... organization's network, system, or data and who, through their action/inaction without malicious intent, cause harm or substantially increase... Targets: network, systems, or data; PII or customer information; IP (trade secrets) or customer information; access used.

  5. Rapid Building Assessment Project

    DTIC Science & Technology

    2014-05-01

    ongoing management of commercial energy efficiency. No other company offers all of these proven services on a seamless, integrated Software-as-a-Service... FirstFuel has added a suite of additional Software-as-a-Service analytics capabilities to support the entire energy efficiency lifecycle, including... the client side. In this document, we refer to the service-side software as "BUILDER" and the client software as "BuilderRED," following the Army

  6. Software metrics: The key to quality software on the NCC project

    NASA Technical Reports Server (NTRS)

    Burns, Patricia J.

    1993-01-01

    Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
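
    As a rough sketch of the kind of weekly roll-up described above, the snippet below aggregates raw metric records into per-week totals; the record layout and metric names are invented, not the NCC's actual data definitions.

    ```python
    from collections import defaultdict
    from datetime import date

    # (collection date, metric name, value) -- illustrative records only
    reports = [
        (date(1993, 3, 1), "defects_opened", 14),
        (date(1993, 3, 3), "defects_closed", 9),
        (date(1993, 3, 9), "defects_opened", 6),
    ]

    weekly = defaultdict(lambda: defaultdict(int))
    for day, metric, value in reports:
        weekly[day.isocalendar()[1]][metric] += value   # key by ISO week number

    for week in sorted(weekly):
        opened, closed = weekly[week]["defects_opened"], weekly[week]["defects_closed"]
        print(f"week {week}: opened={opened} closed={closed} net open={opened - closed}")
    ```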

  7. An exchange format for use-cases of hospital information systems.

    PubMed

    Masuda, G; Sakamoto, N; Sakai, R; Yamamoto, R

    2001-01-01

    Object-oriented software development is a powerful methodology for the development of large hospital information systems. We think the use-case-driven approach is particularly useful for such development. In the use-case-driven approach, use-cases are documented at the first stage of the software development process and are then used throughout all subsequent steps in a variety of ways. It is therefore important to exchange and share use-cases and make effective use of them over the whole lifecycle of a development process. In this paper, we propose a method for sharing and exchanging use-case models between applications, developers, and projects. We design an XML-based exchange format for use-cases. We then discuss an application of the exchange format to support several software development activities. We implemented a preliminary support system for object-oriented analysis based on the exchange format. The results show that the structural and semantic information in the exchange format enables the support system to assist object-oriented analysis successfully.
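
    A minimal sketch of what serializing a use-case to such an exchange format could look like, using Python's standard XML library; the element and attribute names are invented here, since the paper's actual schema is not reproduced in this abstract.

    ```python
    import xml.etree.ElementTree as ET

    # Build a use-case document with hypothetical element names
    uc = ET.Element("useCase", id="UC-007", name="Register blood transfusion order")
    ET.SubElement(uc, "actor").text = "Ward nurse"
    flow = ET.SubElement(uc, "basicFlow")
    for i, step in enumerate(
        ["Enter patient ID", "Verify blood group", "Record transfusion order"], start=1
    ):
        ET.SubElement(flow, "step", number=str(i)).text = step

    print(ET.tostring(uc, encoding="unicode"))
    ```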

  8. Software project management tools in global software development: a systematic mapping study.

    PubMed

    Chadli, Saad Yasser; Idri, Ali; Ros, Joaquín Nicolás; Fernández-Alemán, José Luis; de Gea, Juan M Carrillo; Toval, Ambrosio

    2016-01-01

    Global software development (GSD), a growing trend in the software industry, is characterized by a highly distributed environment. Performing software project management (SPM) in such conditions implies the need to overcome new limitations resulting from cultural, temporal, and geographic separation. The aim of this research is to discover and classify the various tools mentioned in the literature that provide GSD project managers with support, and to identify in what way they support group interaction. A systematic mapping study has been performed by means of automatic searches in five sources. We then synthesized the data extracted and present the results of this study. A total of 102 tools were identified as being used in SPM activities in GSD. We classified these tools according to the software life-cycle process on which they focus and how they support the 3C collaboration model (communication, coordination, and cooperation). The majority of the tools found are standalone tools (77%). A small number of platforms (8%) also offer a set of interacting tools that cover the software development lifecycle. Results also indicate that SPM areas in GSD are not adequately supported by corresponding tools and deserve more attention from tool builders.

  9. Software Development in the Water Sciences: a view from the divide (Invited)

    NASA Astrophysics Data System (ADS)

    Miles, B.; Band, L. E.

    2013-12-01

    While training in statistical methods is an important part of many earth scientists' education, these scientists often learn the bulk of their software development skills in an ad hoc, just-in-time manner. Yet to carry out contemporary research, scientists are spending more and more time developing software. Here I present perspectives, as an earth sciences graduate student with professional software engineering experience, on the challenges scientists face in adopting software engineering practices, with an emphasis on the areas of the science software development lifecycle that could benefit most from improved engineering. This work builds on experience gained as part of the NSF-funded Water Science Software Institute (WSSI) conceptualization award (NSF Award # 1216817). Throughout 2013, the WSSI team held a series of software scoping and development sprints with the goals of: (1) adding features to better model green infrastructure within the Regional Hydro-Ecological Simulation System (RHESSys); and (2) infusing test-driven agile software development practices into the processes employed by the RHESSys team. The goal of efforts such as the WSSI is to ensure that investments by current and future scientists in software engineering training will enable transformative science by improving both scientific reproducibility and researcher productivity. Experience with the WSSI indicates: (1) the potential for achieving this goal; and (2) that while scientists are willing to adopt some software engineering practices, transformative science will require continued collaboration between domain scientists and cyberinfrastructure experts for the foreseeable future.
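
    In the spirit of the test-driven practices mentioned above, here is a small test-first sketch; the runoff_ratio function and its contract are invented for illustration and are not taken from RHESSys.

    ```python
    import pytest

    def runoff_ratio(runoff_mm: float, precip_mm: float) -> float:
        """Fraction of precipitation leaving a catchment as runoff (hypothetical helper)."""
        if precip_mm <= 0:
            raise ValueError("precipitation must be positive")
        return runoff_mm / precip_mm

    def test_typical_event():
        assert runoff_ratio(12.0, 48.0) == pytest.approx(0.25)

    def test_zero_precipitation_rejected():
        with pytest.raises(ValueError):
            runoff_ratio(5.0, 0.0)
    ```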

  10. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  11. Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)

    NASA Technical Reports Server (NTRS)

    Benson, Markland

    2008-01-01

    The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission-critical ground software that is in the operations and sustainment portion of the product lifecycle.

  12. An Incremental Life-cycle Assurance Strategy for Critical System Certification

    DTIC Science & Technology

    2014-11-04

    for Safe Aircraft Operation... Embedded software systems introduce a new class of problems not addressed by traditional system modeling and analysis... Latency jitter affects control behavior... Why do system-level failures still occur despite fault-tolerance techniques being deployed in systems? Embedded software system as a major source of...

  13. Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test

    NASA Technical Reports Server (NTRS)

    Cuseo, John

    2012-01-01

    The spacecraft system that plays the greatest role throughout the program lifecycle is the Command and Data Handling System (C&DH), along with the associated algorithms and software. The C&DH takes on this role as cost driver because it is the brains of the spacecraft and is the element of the system that is primarily responsible for the integration and interoperability of all spacecraft subsystems. During design and development, many activities associated with mission design, system engineering, and subsystem development result in products that are directly supported by the C&DH, such as interfaces, algorithms, flight software (FSW), and parameter sets. A modular system architecture has been developed that provides a means for rapid spacecraft assembly, test, and integration. This modular C&DH software architecture, which can be targeted and adapted to a wide variety of spacecraft architectures, payloads, and mission requirements, eliminates the current practice of rewriting the spacecraft software and test environment for every mission. This software allows mission-specific software and algorithms to be rapidly integrated and tested, significantly decreasing the time involved in the software development cycle. Additionally, the FSW includes an Onboard Dynamic Simulation System (ODySSy) that allows the C&DH software to support rapid integration and test. With this solution, the C&DH software capabilities encompass all phases of the spacecraft lifecycle. ODySSy is an on-board simulation capability built directly into the FSW that provides dynamic built-in test capabilities as soon as the FSW image is loaded onto the processor. It includes a six-degrees-of-freedom, high-fidelity simulation that allows complete closed-loop and hardware-in-the-loop testing of a spacecraft in a ground processing environment without any additional external stimuli. ODySSy can intercept and modify sensor inputs using mathematical sensor models, and can intercept and respond to actuator commands. ODySSy integration is unique in that it allows testing of actual mission sequences on the flight vehicle while the spacecraft is in various stages of assembly, test, and launch operations, all without any external support equipment or simulators. The ODySSy component of the FSW significantly decreases the time required for integration and test by providing an automated, standardized, and modular approach to integrated avionics and component interface and functional verification. ODySSy further provides the capability for on-orbit support in the form of autonomous mission planning and fault protection.
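
    The intercept pattern described above can be sketched as a thin shim between the flight software and a sensor driver; this is a conceptual illustration only, with invented class and method names, not the actual ODySSy design.

    ```python
    class GyroInterceptor:
        """Substitutes modeled sensor output for hardware reads when simulating."""

        def __init__(self, hardware_driver, sensor_model, simulate: bool = False):
            self._hw = hardware_driver      # real device interface
            self._model = sensor_model      # mathematical sensor model fed by the 6-DOF sim
            self.simulate = simulate

        def read_rate(self) -> float:
            """Angular rate from the model when simulating, else from hardware."""
            if self.simulate:
                return self._model.predicted_rate()
            return self._hw.read_rate()
    ```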

  14. Executive overview and introduction to the SMAP information system life-cycle and documentation standards

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of the five volume set of Information System Life-Cycle and Documentation Standards is provided with information on its use. The overview covers description, objectives, key definitions, structure and application of the standards, and document structure decisions. These standards were created to provide consistent NASA-wide structures for coordinating, controlling, and documenting the engineering of an information system (hardware, software, and operational procedures components) phase by phase.

  15. Generalized implementation of software safety policies

    NASA Technical Reports Server (NTRS)

    Knight, John C.; Wika, Kevin G.

    1994-01-01

    As part of a research program in the engineering of software for safety-critical systems, we are performing two case studies. The first case study, which is well underway, is a safety-critical medical application. The second, which is just starting, is a digital control system for a nuclear research reactor. Our goal is to use these case studies to permit us to obtain a better understanding of the issues facing developers of safety-critical systems, and to provide a vehicle for the assessment of research ideas. The case studies are not based on the analysis of existing software development by others. Instead, we are attempting to create software for new and novel systems in a process that ultimately will involve all phases of the software lifecycle. In this abstract, we summarize our results to date in a small part of this project, namely the determination and classification of policies related to software safety that must be enforced to ensure safe operation. We hypothesize that this classification will permit a general approach to the implementation of a policy enforcement mechanism.
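
    One way to picture the hypothesized general enforcement mechanism is a table of policy checks evaluated against the current system state; the policy names and state values below are invented for the sketch.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class SafetyPolicy:
        name: str
        is_safe: Callable[[dict], bool]   # returns True when the state satisfies the policy

    POLICIES = [
        SafetyPolicy("dose_rate_limit", lambda s: s["dose_rate"] <= s["dose_rate_max"]),
        SafetyPolicy("interlock_closed", lambda s: s["interlock"] == "closed"),
    ]

    def violations(state: dict) -> list[str]:
        """Names of all policies violated by the current system state."""
        return [p.name for p in POLICIES if not p.is_safe(state)]

    print(violations({"dose_rate": 1.2, "dose_rate_max": 1.0, "interlock": "closed"}))
    # ['dose_rate_limit']
    ```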

  16. Discovering objects in a blood recipient information system.

    PubMed

    Qiu, D; Junghans, G; Marquardt, K; Kroll, H; Mueller-Eckhardt, C; Dudeck, J

    1995-01-01

    Application of object-oriented (OO) methodologies has generally been considered a solution to the problem of improving the software development process and managing the so-called software crisis. Among these methodologies, object-oriented analysis (OOA) is the most essential and is a vital prerequisite for the successful use of the others. Though a good number of OOA methods have already been published, the aspect most important and common to all of them, discovering the object classes truly relevant to the given problem domain, has remained a subject of intensive research. In this paper, using the successful development of a blood recipient information system as an example, we present our approach, which is based on the conceptual framework of responsibility-driven OOA. In the discussion, we also suggest that it may be inadequate to simply attribute the software crisis to the waterfall model of the software development life-cycle. We are convinced that the real causes of the failure of some software and information systems should be sought in the methodologies used in certain crucial phases of the software development process. Furthermore, a software system can also fail if object classes essential to the problem domain are not discovered, implemented, and visualized, so that the real-world situation cannot be faithfully traced by the system.

  17. Software Engineering Education Directory

    DTIC Science & Technology

    1988-01-01

    Dana Hausman and Suzanne Woolf were crucial to the successful completion of this edition of the directory. Their teamwork, energy, and dedication... for this directory began in the summer of 1986 with a questionnaire mailed to schools selected from Peterson's Graduate Programs in Engineering and... Christopher, and Siegel, Stan; Software Cost Estimation and Life-Cycle Control by Putnam, Lawrence H.; Software Quality Assurance: A Practical Approach by...

  18. Operability engineering in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wilkinson, Belinda

    1993-01-01

    Many operability problems exist at the three Deep Space Communications Complexes (DSCCs) of the Deep Space Network (DSN). Four years ago, the position of DSN Operability Engineer was created to give someone the opportunity to take a system-level approach to solving these problems. Since that time, a process has been developed for operations personnel and development engineers and for enforcing user-interface standards in software designed for the DSCCs. Plans call for the participation of operations personnel in the product life-cycle to expand in the future.

  19. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  20. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit

    PubMed Central

    2014-01-01

    Background: According to the latest amendment of the Medical Device Directive, standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable, which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods: The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed, and scales documentation and testing according to criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results: The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. Conclusions: The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers. PMID:24655818

  1. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit.

    PubMed

    Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver

    2014-03-21

    According to the latest amendment of the Medical Device Directive, standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable, which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed, and scales documentation and testing according to criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.

  2. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested on four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the underlying causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. These releases exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved toward product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions. PMID:29278255
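
    For readers unfamiliar with SRGMs, the sketch below fits the delayed S-shaped NHPP mean value function, m(t) = a(1 - (1 + bt)e^(-bt)), to cumulative defect counts; the data are invented and scipy is assumed available, so this only illustrates the model class named in the abstract.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def s_shaped_mvf(t, a, b):
        """Delayed S-shaped NHPP mean value function: expected defects found by time t."""
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    weeks = np.arange(1, 13, dtype=float)
    found = np.array([2, 5, 11, 19, 28, 36, 42, 47, 50, 52, 53, 54], dtype=float)

    (a, b), _ = curve_fit(s_shaped_mvf, weeks, found, p0=(60.0, 0.3))
    print(f"estimated total defects a = {a:.1f}, detection rate b = {b:.2f}")
    print(f"predicted residual defects: {a - found[-1]:.1f}")
    ```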

  3. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested on four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the underlying causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. These releases exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved toward product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions.

  4. The Package-Based Development Process in the Flight Dynamics Division

    NASA Technical Reports Server (NTRS)

    Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil

    1997-01-01

    The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Quality Improvement Paradigm (QIP) shows that process improvement is an iterative process: Understanding, Assessing, and Packaging are the three steps followed in this cyclical paradigm. As the improvement process cycles back to the first step after packaging some experience, the level of understanding grows. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have moved toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the QIP takes on a spiral rather than a waterfall character. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off-the-Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.

  5. Configurable technology development for reusable control and monitor ground systems

    NASA Technical Reports Server (NTRS)

    Uhrlaub, David R.

    1994-01-01

    The control monitor unit (CMU) uses configurable software technology for real-time mission command and control, telemetry processing, simulation, data acquisition, data archiving, and ground operations automation. The base technology is currently planned for the following control and monitor systems: portable Space Station checkout systems; ecological life support systems; Space Station logistics carrier system; and the ground system of the Delta Clipper (SX-2) in the Single-Stage Rocket Technology program. The CMU makes extensive use of commercial technology to increase capability and reduce development and life-cycle costs. The concepts and technology are being developed by McDonnell Douglas Space and Defense Systems for the Real-Time Systems Laboratory at NASA's Kennedy Space Center under the Payload Ground Operations Contract. A second function of the Real-Time Systems Laboratory is development and utilization of advanced software development practices.

  6. Applying Standard Independent Verification and Validation (IVV) Techniques Within an Agile Framework: Is There a Compatibility Issue?

    NASA Technical Reports Server (NTRS)

    Dabney, James B.; Arthur, James Douglas

    2017-01-01

    Agile methods have gained wide acceptance over the past several years, to the point that they are now a standard management and execution approach for small-scale software development projects. While conventional Agile methods are not generally applicable to large multi-year and mission-critical systems, Agile hybrids (such as SAFe) are now being developed to exploit the productivity improvements of Agile while retaining the necessary process rigor and coordination needs of these projects. From the perspective of Independent Verification and Validation (IVV), however, the adoption of these hybrid Agile frameworks is becoming somewhat problematic. Hence, we find it prudent to question the compatibility of conventional IVV techniques with (hybrid) Agile practices. This paper documents our investigation of (a) relevant literature, (b) the modification and adoption of Agile frameworks to accommodate the development of large-scale, mission-critical systems, and (c) the compatibility of standard IVV techniques within hybrid Agile development frameworks. Specific to the latter, we found that the IVV methods employed within a hybrid Agile process can be divided into three groups: (1) early-lifecycle IVV techniques that are fully compatible with the hybrid lifecycles; (2) IVV techniques that focus on tracing requirements, test objectives, etc., which are somewhat incompatible but can be tailored with modest effort; and (3) IVV techniques involving an assessment requiring artifact completeness, which are simply not compatible with hybrid Agile processes, e.g., those that assume complete requirement specification early in the development lifecycle.

  7. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    An object is an abstract software model of a problem domain entity. Objects are packages of both data and operations on that data (Goldberg 83, Booch 83). The Ada (tm) package construct is representative of this general notion of an object. Object-oriented design is the technique of using objects as the basic unit of modularity in systems design. The Software Engineering Laboratory at the Goddard Space Flight Center is currently involved in a pilot program to develop a flight dynamics simulator in Ada (approximately 40,000 statements) using object-oriented methods. Several authors have applied object-oriented concepts to Ada (e.g., Booch 83, Cherry 85). It was found that these methodologies are limited. As a result, a more general approach was synthesized which allows a designer to apply powerful object-oriented principles to a wide range of applications and at all stages of design. An overview of this approach is provided, and how object-oriented design fits into the overall software life-cycle is considered.

  8. Getting more out of biomedical documents with GATE's full lifecycle open source text analytics.

    PubMed

    Cunningham, Hamish; Tablan, Valentin; Roberts, Angus; Bontcheva, Kalina

    2013-01-01

    This software article describes the GATE family of open source text analysis tools and processes. GATE is one of the most widely used systems of its type with yearly download rates of tens of thousands and many active users in both academic and industrial contexts. In this paper we report three examples of GATE-based systems operating in the life sciences and in medicine. First, in genome-wide association studies which have contributed to discovery of a head and neck cancer mutation association. Second, medical records analysis which has significantly increased the statistical power of treatment/outcome models in the UK's largest psychiatric patient cohort. Third, richer constructs in drug-related searching. We also explore the ways in which the GATE family supports the various stages of the lifecycle present in our examples. We conclude that the deployment of text mining for document abstraction or rich search and navigation is best thought of as a process, and that with the right computational tools and data collection strategies this process can be made defined and repeatable. The GATE research programme is now 20 years old and has grown from its roots as a specialist development tool for text processing to become a rather comprehensive ecosystem, bringing together software developers, language engineers and research staff from diverse fields. GATE now has a strong claim to cover a uniquely wide range of the lifecycle of text analysis systems. It forms a focal point for the integration and reuse of advances that have been made by many people (the majority outside of the authors' own group) who work in text processing for biomedicine and other areas. GATE is available online <1> under GNU open source licences and runs on all major operating systems. Support is available from an active user and developer community and also on a commercial basis.

  9. Getting More Out of Biomedical Documents with GATE's Full Lifecycle Open Source Text Analytics

    PubMed Central

    Cunningham, Hamish; Tablan, Valentin; Roberts, Angus; Bontcheva, Kalina

    2013-01-01

    This software article describes the GATE family of open source text analysis tools and processes. GATE is one of the most widely used systems of its type with yearly download rates of tens of thousands and many active users in both academic and industrial contexts. In this paper we report three examples of GATE-based systems operating in the life sciences and in medicine. First, in genome-wide association studies which have contributed to discovery of a head and neck cancer mutation association. Second, medical records analysis which has significantly increased the statistical power of treatment/outcome models in the UK's largest psychiatric patient cohort. Third, richer constructs in drug-related searching. We also explore the ways in which the GATE family supports the various stages of the lifecycle present in our examples. We conclude that the deployment of text mining for document abstraction or rich search and navigation is best thought of as a process, and that with the right computational tools and data collection strategies this process can be made defined and repeatable. The GATE research programme is now 20 years old and has grown from its roots as a specialist development tool for text processing to become a rather comprehensive ecosystem, bringing together software developers, language engineers and research staff from diverse fields. GATE now has a strong claim to cover a uniquely wide range of the lifecycle of text analysis systems. It forms a focal point for the integration and reuse of advances that have been made by many people (the majority outside of the authors' own group) who work in text processing for biomedicine and other areas. GATE is available online <1> under GNU open source licences and runs on all major operating systems. Support is available from an active user and developer community and also on a commercial basis. PMID:23408875

  10. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed with which the processes used in the software development field have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but a group of scientists maintains that it can be solved using already known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building cost estimation models for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of estimating overall project costs is presented, together with a description of existing software development process models. In the last part, a basic proposal for a mathematical model based on genetic programming is given, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked to the current reality of software development, taking the software product life cycle as a basis along with current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product lifecycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
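
    To make the genetic approach concrete, here is a toy search for COCOMO-style coefficients (a, b) in effort = a * KLOC^b against a handful of historical points; the data, operators, and settings are all invented and far simpler than a full genetic-programming treatment of the PROMISE datasets.

    ```python
    import random

    history = [(10, 26), (23, 79), (46, 183), (70, 320)]  # (KLOC, person-months), invented

    def fitness(ind):
        a, b = ind
        return -sum((a * k ** b - e) ** 2 for k, e in history) / len(history)  # -MSE

    pop = [(random.uniform(1, 5), random.uniform(0.9, 1.3)) for _ in range(40)]
    for _ in range(300):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                       # elitist selection
        children = []
        for _ in range(30):
            (a1, b1), (a2, b2) = random.sample(parents, 2)
            children.append(((a1 + a2) / 2 + random.gauss(0, 0.05),   # crossover
                             (b1 + b2) / 2 + random.gauss(0, 0.01)))  # + mutation
        pop = parents + children

    a, b = max(pop, key=fitness)
    print(f"best fit: effort ≈ {a:.2f} * KLOC^{b:.3f}")
    ```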

  11. A Brief Study of Software Engineering Professional Continuing Education in DoD Acquisition

    DTIC Science & Technology

    2010-04-01

    Software Lifecycle Processes (IEEE 12207) (810): 37%, 61%, 2%; Guide to the Software Engineering Body of Knowledge (SWEBOK) (804): 67%, 31%, 2%; Software Engineering - Software Measurement Process (ISO/IEC 15939) (797): 55%, 44%, 2%; Capability Maturity Model Integration (806): 17%, 81%, 2%; Six Sigma Process Improvement (804): 7%, 91%, 1%; ISO 9000 Quality Management Systems (803): 10%, 89%, 1%. Conclusions: significant problem areas... Requirements Management...

  12. Moving Up the CMMI Capability and Maturity Levels Using Simulation

    DTIC Science & Technology

    2008-01-01

    Alternative Process Tools, Including NPV and ROI; Figure 3: Top-Level View of the Full Life-Cycle Version of the IEEE 12207 PSIM, Including IV&V Layer; Figure 4: Screenshot of the Incremental Version Model; Figure 5: IEEE 12207 PSIM Showing the Top-Level Life-Cycle Phases; Figure 6: IEEE 12207... Software Detailed Design for the IEEE 12207 Life-Cycle Process; Figure 8: Incremental Life Cycle PSIM Configured for a Specific Project Using SEPG...

  13. A Lifecycle Approach to Brokered Data Management for Hydrologic Modeling Data Using Open Standards.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Booth, N.; Kunicki, T.; Walker, J.

    2012-12-01

    The U.S. Geological Survey Center for Integrated Data Analytics has formalized an information-management architecture to facilitate hydrologic modeling and subsequent decision support throughout a project's lifecycle. The architecture is based on open standards and open source software to decrease the adoption barrier and to build on existing, community-supported software. The components of this system have been developed and evaluated to support data management activities of the interagency Great Lakes Restoration Initiative, the Department of the Interior's Climate Science Centers, and the WaterSMART National Water Census. Much of the research and development of this system has been in cooperation with international interoperability experiments conducted within the Open Geospatial Consortium. Community-developed standards and software, implemented to meet the unique requirements of specific disciplines, are used as a system of interoperable, discipline-specific data types and interfaces. This approach has allowed adoption of existing software that satisfies the majority of system requirements. Four major features of the system are: 1) assistance in creating model parameters and forcings from large enterprise data sources; 2) conversion of model results and calibrated parameters to standard formats, making them available via standard web services; 3) tracking a model's processes, inputs, and outputs as a cohesive metadata record, allowing provenance tracking via reference to web services; and 4) generalized decision support tools which rely on a suite of standard data types and interfaces rather than on particular manually curated model-derived datasets. Recent progress in data and web service standards related to sensor- and model-derived station time series, dynamic web processing, and metadata management is central to this system's function and will be presented briefly, along with a functional overview of the applications that make up the system. As the separate pieces of this system progress, they will be combined and generalized to form a sort of social network for nationally consistent hydrologic modeling.
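
    Feature (3) above, tracking a model run's processes, inputs, and outputs as one cohesive metadata record, can be pictured with a small sketch; the field names and service URLs below are illustrative placeholders, not the Center's actual schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ModelRunProvenance:
        """One record tying a model run to web-service references for its data."""
        model: str
        version: str
        inputs: list[str] = field(default_factory=list)    # input data as service URLs
        outputs: list[str] = field(default_factory=list)   # published results
        parameters: dict = field(default_factory=dict)

    run = ModelRunProvenance(
        model="example-hydro-model", version="1.4",
        inputs=["https://data.example.gov/wfs?request=GetFeature&typeName=basins"],
        outputs=["https://data.example.gov/sos?request=GetObservation&offering=flow"],
        parameters={"calibration_period": "1990-2010"},
    )
    print(run)
    ```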

  14. Ada education in a software life-cycle context

    NASA Technical Reports Server (NTRS)

    Clough, Anne J.

    1986-01-01

    Some of the experience gained from a comprehensive educational program undertaken at The Charles Stark Draper Laboratory to introduce the Ada language and to transition modern software engineering technology into the development of Ada and non-Ada applications is described. Initially, a core group, which included managers, engineers, and programmers, received training in Ada. An Ada Office was established to assume the major responsibility for training; evaluation, acquisition, and benchmarking of tools; and consultation on Ada projects. As a first step in this process, an in-house educational program was undertaken to introduce Ada to the Laboratory. Later, a software engineering course was added to the educational program as the need to address issues spanning the entire software life cycle became evident. Educational efforts to date are summarized, with an emphasis on the educational approach adopted. Finally, lessons learned in administering this program are addressed.

  15. CrossTalk: The Journal of Defense Software Engineering. Volume 22, Number 7, Nov/Dec 2009

    DTIC Science & Technology

    2009-12-01

  16. Reducing Lifecycle Sustainment Costs

    DTIC Science & Technology

    2015-05-01

    ahead of government systems; specific O&S needs in government: depots, software centers, VAMOSC/ERP interfaces; implications of ERP systems... funding is not allocated for its implementation. Technology refresh often requires non-recurring engineering investment, but the Working Capital Funds... VAMOSC systems; Cost and Software Data Reports (CSDRs); contractor logistics support contracts, including subcontractor reporting; effects of...

  17. BeefTracker: Spatial Tracking and Geodatabase for Beef Herd Sustainability and Lifecycle Analysis

    NASA Astrophysics Data System (ADS)

    Oltjen, J. W.; Stackhouse, J.; Forero, L.; Stackhouse-Lawson, K.

    2015-12-01

    We have developed a web-based mapping platform named "BeefTracker" to provide beef cattle ranchers a tool to determine how cattle production fits within sustainable ecosystems and to provide regional data to update beef sustainability lifecycle analysis. After initial identification and mapping of pastures, herd data (class and number of animals) are input on a mobile device in the field with a graphical pasture interface, stored in the cloud, and linked via the web to a personal computer for inventory tracking and analysis. Pasture use calculated on a per-animal basis provides quantifiable data regarding carrying capacity and subsequent beef production, providing more accurate inputs for beef sustainability lifecycle analysis. After initial testing by university range scientists and ranchers, we have enhanced the BeefTracker application to work when cell service is unavailable and to improve automation for increased ease of use. Thus far, experiences with BeefTracker have been largely positive, due to livestock producers' perception of the need for this type of software application and its intuitive interface. We are now in the process of education to increase its use throughout the U.S.
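
    A back-of-envelope version of the per-animal pasture-use arithmetic the abstract alludes to is sketched below in animal unit months (AUMs); the animal-unit equivalences are typical range-management conventions and the herd numbers are invented, so this does not reflect BeefTracker's internals.

    ```python
    # Common (approximate) animal-unit equivalences; values vary by operation.
    AU_EQUIVALENT = {"cow_calf_pair": 1.3, "yearling": 0.7, "bull": 1.4}

    def pasture_aums(herd: dict[str, int], months_grazed: float) -> float:
        """Forage demand for a grazing period, in animal unit months (AUMs)."""
        animal_units = sum(AU_EQUIVALENT[cls] * n for cls, n in herd.items())
        return animal_units * months_grazed

    demand = pasture_aums({"cow_calf_pair": 120, "bull": 5}, months_grazed=2.5)
    print(f"forage demand: {demand:.0f} AUMs")  # compare against pasture carrying capacity
    ```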

  18. Cybersecurity and the Medical Device Product Development Lifecycle.

    PubMed

    Jones, Richard W; Katzis, Konstantinos

    2017-01-01

    Protecting connected medical devices from evolving cyber-related threats requires a continuous lifecycle approach whereby cybersecurity is integrated within the product development lifecycle and both complements and reinforces the safety risk management processes therein. This contribution reviews the guidance relating to medical device cybersecurity within the product development lifecycle.

  19. System-of-Systems Technology-Portfolio-Analysis Tool

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne

    2012-01-01

    Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers.
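
    The roll-up idea behind the spreadsheet models can be pictured with a small sketch: element mass estimates are summed and fed through a common cost relationship, and a portfolio index is computed. The elements, the $/kg factor, and the index weighting are invented; ATLAS itself is an Excel/VBA tool, so this is only a structural analogy.

    ```python
    SYSTEM = {  # element: (mass_kg, technology readiness level) -- invented values
        "habitat": (12000, 6),
        "power":   (3500, 5),
        "lander":  (9000, 4),
    }
    COST_PER_KG = 0.12  # $M per kg: a placeholder cost-estimating relationship

    total_mass = sum(mass for mass, _ in SYSTEM.values())
    lifecycle_cost = COST_PER_KG * total_mass
    # Toy "programmatic value" index: readiness-weighted share of total mass (TRL max = 9)
    index = sum(trl * mass for mass, trl in SYSTEM.values()) / (9 * total_mass)

    print(f"mass = {total_mass} kg, cost ≈ ${lifecycle_cost:.0f}M, index = {index:.2f}")
    ```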

  20. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  1. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety-critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board providing command and control services. There have been recent incidents where software played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety-critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety-critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization; it also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. The new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. The update clearly delineates the minimum set of software safety requirements for a project without detailing the implementation of those requirements, allowing projects the leeway to meet them in the forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. The update also incorporates advances in the state of the practice of software safety from academia and private industry, and addresses some of the more common issues now facing software developers in the NASA environment, such as the use of Commercial Off-the-Shelf (COTS) software, Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update, which has had NASA-wide internal reviews by software engineering, quality, safety, and project management, as well as expert external review. This presentation and paper discuss the new NASA Software Safety Standard, its organization, and key features. They begin with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes, then give a brief overview of the NASA Software Safety Process, including the key personnel responsibilities and functions that must be performed for safety-critical software.

  2. Implementing model-based system engineering for the whole lifecycle of a spacecraft

    NASA Astrophysics Data System (ADS)

    Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.

    2017-09-01

    Design information about a spacecraft is collected over all phases of a project's lifecycle. Much of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and keep it consistent for everyone. Nevertheless, no existing database has yet addressed the whole lifecycle. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently being extended for use across the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database will need to support in all these different projects, the underlying data model has to provide tailoring and extension mechanisms for its conceptual data model (CDM). This paper explains these mechanisms as implemented in Virtual Satellite, which enable extending the CDM along the project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as the MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software to collect the first design information from concurrent engineering studies, then makes use of the extension mechanisms of the CDM to introduce further information artefacts such as a functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.
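
    A conceptual sketch of such an extension mechanism (not Virtual Satellite's actual API): categories carry named properties with defaults, new properties can be added mid-project, and previously stored records stay valid because unknown keys are preserved and missing ones are defaulted.

    ```python
    class Category:
        """A CDM category: a named set of properties with default values."""

        def __init__(self, name: str, properties: dict[str, object]):
            self.name = name
            self.properties = dict(properties)

        def extend(self, new_properties: dict[str, object]) -> None:
            """Grow the CDM later in the lifecycle without touching existing defaults."""
            for key, default in new_properties.items():
                self.properties.setdefault(key, default)

        def instantiate(self, stored: dict[str, object]) -> dict[str, object]:
            """Merge a stored record with current defaults; stored values win."""
            return {**self.properties, **stored}

    mass = Category("MassProperties", {"mass_kg": 0.0})
    record = {"mass_kg": 4.2}            # stored in an earlier project phase
    mass.extend({"margin_pct": 20.0})    # CDM extended in a later phase
    print(mass.instantiate(record))      # {'mass_kg': 4.2, 'margin_pct': 20.0}
    ```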

  3. Developing a space network interface simulator: The NTS approach

    NASA Technical Reports Server (NTRS)

    Hendrzak, Gary E.

    1993-01-01

    This paper describes the approach used to redevelop the Network Control Center (NCC) Test System (NTS), a hardware and software facility designed to make testing of the NCC Data System (NCCDS) software efficient, effective, and as rigorous as possible prior to operational use. The NTS transmits and receives network message traffic in real-time. Data transfer rates and message content are strictly controlled and are identical to that of the operational systems. NTS minimizes the need for costly and time-consuming testing with the actual external entities (e.g., the Hubble Space Telescope (HST) Payload Operations Control Center (POCC) and the White Sands Ground Terminal). Discussed are activities associated with the development of the NTS, lessons learned throughout the project's lifecycle, and resulting productivity and quality increases.

  4. Navigation/Prop Software Suite

    NASA Technical Reports Server (NTRS)

    Bruchmiller, Tomas; Tran, Sanh; Lee, Mathew; Bucker, Scott; Bupane, Catherine; Bennett, Charles; Cantu, Sergio; Kwong, Ping; Propst, Carolyn

    2012-01-01

    Navigation (Nav)/Prop software is used to support shuttle mission analysis, production, and some operations tasks. The Nav/Prop suite, containing configuration items (CIs), resides on IPS/Linux workstations. It comprises lifecycle documents and data files used for shuttle navigation and propellant analysis for all flight segments. The suite also includes trajectory server, archive server, and RAT software residing on MCC/Linux workstations. Navigation/Prop represents tool versions established during or after IPS Equipment Rehost-3 or after the MCC Rehost.

  5. Development of a software safety process and a case study of its use

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1993-01-01

    The goal of this research is to continue the development of a comprehensive approach to software safety and to evaluate the approach with a case study. The case study is a major part of the project, and it involves the analysis of a specific safety-critical system from the medical equipment domain. The particular application was selected because of the availability of a suitable candidate system; we consider the results to be generally applicable and in no way limited by the domain. The research concentrates on issues raised by the specification and verification phases of the software lifecycle, since they are central to our previously developed rigorous definitions of software safety. The theoretical research is based on our framework of definitions for software safety. In the area of specification, the main topics being investigated are the development of techniques for building system fault trees that correctly incorporate software issues, and the development of rigorous techniques for the preparation of software safety specifications. The research results are documented. Another area of theoretical investigation is the development of verification methods tailored to the characteristics of safety requirements, since verification of the correct implementation of the safety specification is central to the goal of establishing safe software. The empirical component of this research focuses on a case study in order to provide detailed characterizations of the issues as they appear in practice, and to provide a testbed for the evaluation of various existing and new theoretical results, tools, and techniques. The Magnetic Stereotaxis System is summarized.

  6. Ascent/Descent Software

    NASA Technical Reports Server (NTRS)

    Brown, Charles; Andrew, Robert; Roe, Scott; Frye, Ronald; Harvey, Michael; Vu, Tuan; Balachandran, Krishnaiyer; Bly, Ben

    2012-01-01

    The Ascent/Descent Software Suite has been used to support a variety of NASA Shuttle Program mission planning and analysis activities, such as range safety, on the Integrated Planning System (IPS) platform. The suite, containing Ascent Flight Design (ASC)/Descent Flight Design (DESC) configuration items (CIs), lifecycle documents, and data files used for shuttle ascent and entry modeling analysis and mission design, resides on IPS/Linux workstations. The tool versions in the suite represent those established during or after the IPS Equipment Rehost-3 project.

  7. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts, through requirements and test cases, into flight software, compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform (an ARINC 653 partitioned OS), resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by the flight software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness and performance of the M&FM algorithms in the FSW development and test processes.

  8. Pavement management segment consolidation

    DOT National Transportation Integrated Search

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  9. System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures

    NASA Technical Reports Server (NTRS)

    Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger

    2007-01-01

    This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost, and maintenance. This approach was used to develop an International Space Station (ISS) simulator, called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and service as a realistic test bed for Exploration automation technology research and development.

  10. Fully Employing Software Inspections Data

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Feldmann, Raimund L.; Seaman, Carolyn; Regardie, Myrna; Godfrey, Sally

    2009-01-01

    Software inspections provide a proven approach to quality assurance for software products of all kinds, including requirements, design, code, and test plans, among others. Common to all inspections is the aim of finding and fixing defects as early as possible, thereby providing cost savings by minimizing the amount of rework necessary later in the lifecycle. Measurement data, such as the number and type of defects found and the effort spent by the inspection team, not only provide direct feedback about the software product to the project team but are also valuable for process improvement activities. In this paper, we discuss NASA's use of software inspections and the rich set of data that has resulted. In particular, we present results from analysis of inspection data that illustrate the benefits of fully utilizing that data for process improvement at several levels. Examining such data across multiple inspections or projects allows team members to monitor and trigger cross-project improvements. Such improvements may focus on the software development processes of the whole organization as well as on the applied inspection process itself.
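
    The kind of cross-inspection analysis the paper describes can be sketched in a few lines; the records and numbers below are illustrative assumptions, not NASA data:

      from collections import defaultdict

      # (project, artifact type, size in pages, defects found, effort in hours)
      inspections = [
          ("A", "requirements", 40, 22, 12.0),
          ("A", "code",         30,  9,  8.0),
          ("B", "requirements", 35,  6,  5.5),
          ("B", "design",       50, 18, 10.0),
      ]

      density, efficiency = defaultdict(list), defaultdict(list)
      for _, kind, pages, defects, effort in inspections:
          density[kind].append(defects / pages)      # defects per page
          efficiency[kind].append(effort / defects)  # hours per defect found

      for kind in sorted(density):
          d = sum(density[kind]) / len(density[kind])
          e = sum(efficiency[kind]) / len(efficiency[kind])
          # A density far below the organizational baseline may flag an
          # under-prepared inspection rather than a clean artifact.
          print(f"{kind:12s} {d:.2f} defects/page, {e:.1f} h/defect")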

  11. Data Service Provider Cost Estimation Tool

    NASA Technical Reports Server (NTRS)

    Fontaine, Kathy; Hunolt, Greg; Booth, Arthur L.; Banks, Mel

    2011-01-01

    The Data Service Provider Cost Estimation Tool (CET) and Comparables Database (CDB) package provides NASA's Earth Science Enterprise (ESE) with the ability to estimate the full range of year-by-year lifecycle costs for the implementation and operation of the data service providers that ESE requires to support its science and applications programs. The CET can estimate staffing costs, supplies, facility costs, network services, hardware and maintenance, commercial off-the-shelf (COTS) software licenses, software development and sustaining engineering, and the changes in costs that result from changes in workload. Data service providers may be stand-alone or embedded in flight projects, field campaigns, research or applications projects, or other activities. The CET and CDB package employs a cost-estimation-by-analogy approach. It is based on a new, general data service provider reference model that provides a framework for constructing a database describing existing data service providers that are analogs (comparables) to planned, new ESE data service providers. The CET implements the staff effort and cost estimation algorithms that access the CDB and generates the lifecycle cost estimate for a new data service provider. This creates a common basis on which an ESE proposal evaluator can consider projected data service provider costs.
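
    Cost estimation by analogy can be illustrated with a toy sketch: pick the comparable whose workload attributes are closest, then scale its cost by the workload ratio. The attributes, weights, and numbers below are invented for illustration and do not come from the CDB:

      comparables = [
          # (name, TB archived per year, users served, annual cost in $K)
          ("provider_1", 120.0, 300, 950.0),
          ("provider_2",  20.0, 900, 620.0),
          ("provider_3", 400.0, 150, 1400.0),
      ]

      def estimate_annual_cost(tb_per_year, users):
          def distance(c):
              _, tb, u, _ = c
              # Normalized attribute distance; the weights are assumptions.
              return abs(tb - tb_per_year) / 400.0 + abs(u - users) / 900.0
          name, tb, u, cost = min(comparables, key=distance)
          scale = ((tb_per_year / tb) + (users / u)) / 2.0  # mean workload ratio
          return name, cost * scale

      analog, cost_k = estimate_annual_cost(tb_per_year=60.0, users=500)
      print(f"closest comparable: {analog}; scaled estimate: ${cost_k:.0f}K/year")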

  12. The virtual digital nuclear power plant: A modern tool for supporting the lifecycle of VVER-based nuclear power units

    NASA Astrophysics Data System (ADS)

    Arkadov, G. V.; Zhukavin, A. P.; Kroshilin, A. E.; Parshikov, I. A.; Solov'ev, S. L.; Shishov, A. V.

    2014-10-01

    The article describes the "Virtual Digital VVER-Based Nuclear Power Plant" computerized system, comprising a body of verified initial data (sets of input data for models describing the behavior of nuclear power plant (NPP) systems in design and emergency modes of operation) and a unified system of new-generation computation codes intended for carrying out coordinated computation of the variety of physical processes in the reactor core and NPP equipment. Experiments with the demonstration version of the system have shown that it is in principle possible to set up a unified system of computation codes in a common software environment for carrying out interconnected calculations of various physical phenomena at NPPs constructed according to the standard AES-2006 project. With the full-scale version of the system put into operation, the engineering, design, construction, and operating organizations concerned will have access to all necessary information relating to the NPP power unit project throughout its entire lifecycle. The domestically developed commercial-grade software product, operating as an independent application within the project, will bring additional competitive advantages in the modern market of nuclear power technologies.

  13. Gaia DR1 documentation Chapter 6: Variability

    NASA Astrophysics Data System (ADS)

    Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.

    2017-12-01

    This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 (CU7) is responsible for the variability analysis of over a billion celestial sources, in particular for the definition, design, development, validation, and provision of a software package for the data processing of photometrically variable objects. The Data Processing Centre Geneva (DPCG) responsibilities cover all issues related to the computational part of the CU7 analysis. These span hardware provisioning, including selection, deployment, and optimisation of suitable hardware; choosing and developing the software architecture; defining data and scientific workflows; and operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation, and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of Gaia DR1 focuses on a subsample of Cepheids and RR Lyrae stars around the south ecliptic pole, showcasing the performance of the Gaia photometry with respect to variable objects.

  14. Advanced software development workstation. Comparison of two object-oriented development methodologies

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.

  15. System testing of a production Ada (trademark) project: The GRODY study

    NASA Technical Reports Server (NTRS)

    Seigle, Jeffrey; Esker, Linda; Shi, Ying-Liang

    1990-01-01

    The use of the Ada language and design methodologies that utilize its features has a strong impact on all phases of the software development project lifecycle. At the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC), the Software Engineering Laboratory (SEL) conducted an experiment in parallel development of two flight dynamics systems in FORTRAN and Ada. The teams found some qualitative differences between the system test phases of the two projects. Although planning for system testing and conducting of tests were not generally affected by the use of Ada, the solving of problems found in system testing was generally facilitated by Ada constructs and design methodology. Most problems found in system testing were not due to difficulty with the language or methodology but to lack of experience with the application.

  16. Healthcare software assurance.

    PubMed

    Cooper, Jason G; Pauley, Keith A

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted.

  17. Healthcare Software Assurance

    PubMed Central

    Cooper, Jason G.; Pauley, Keith A.

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted. PMID:17238324

  18. A proposed research program in information processing

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert

    1992-01-01

    The goal of the Formalized Software Development (FSD) project was to demonstrate improvements in the productivity of software development and maintenance through the use of a new software lifecycle paradigm. The paradigm calls for the mechanical, but human-guided, derivation of software implementations from formal specifications of the desired software behavior. It relies on altering a system's specification and rederiving its implementation as the standard technology for software maintenance. A system definition for this paradigm is composed of a behavioral specification together with a body of annotations that control the derivation of executable code from the specification. Annotations generally achieve the selection of certain data representations and/or algorithms that are consistent with, but not mandated by, the behavioral specification. In doing this, they may yield systems which exhibit only certain behaviors among the multiple alternatives permitted by the behavioral specification. The FSD project proposed to construct a testbed in which to explore the realization of this new paradigm, providing an operational support environment for software design, implementation, and maintenance. The testbed was to provide highly automated support for individual programmers ('programming in the small'), but not to address the additional needs of programming teams ('programming in the large'), and to focus on supporting rapid construction and evolution of useful prototypes of software systems, as opposed to achieving production-quality performance.
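
    The derivation-by-annotation idea can be miniaturized as follows; this is only a toy Python sketch of the paradigm's flavor (a behavioral specification plus an annotation that selects one conforming implementation), not the FSD testbed's mechanism:

      # Behavioral specification: the result is a sorted permutation of the input.
      def satisfies_spec(xs, ys):
          return sorted(xs) == ys and len(xs) == len(ys)

      # An "annotation" selects an algorithm consistent with, but not mandated
      # by, the specification; change it and rederive to "maintain" the system.
      ANNOTATIONS = {"algorithm": "insertion"}

      def derive_sort(annotations):
          if annotations["algorithm"] == "insertion":
              def impl(xs):
                  out = []
                  for x in xs:
                      i = len(out)
                      while i > 0 and out[i - 1] > x:
                          i -= 1
                      out.insert(i, x)
                  return out
              return impl
          return sorted  # default: any conforming implementation will do

      sort = derive_sort(ANNOTATIONS)
      data = [3, 1, 2]
      assert satisfies_spec(data, sort(data))  # derived code conforms to the spec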

  19. Ada and the rapid development lifecycle

    NASA Technical Reports Server (NTRS)

    Deforrest, Lloyd; Gref, Lynn

    1991-01-01

    JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation-class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System using an incremental delivery methodology called the Rapid Development Methodology, with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Approach, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economics improved over, projects developed under more traditional software-intensive system implementation methodologies.

  20. The Model Life-cycle: Training Module

    EPA Pesticide Factsheets

    Model Life-Cycle includes identification of problems & the subsequent development, evaluation, & application of the model. Objectives: define ‘model life-cycle’, explore stages of model life-cycle, & strategies for development, evaluation, & applications.

  1. Using Ada: The deeper challenges

    NASA Technical Reports Server (NTRS)

    Feinberg, David A.

    1986-01-01

    The Ada programming language and the associated Ada Programming Support Environment (APSE) and Ada Run Time Environment (ARTE) provide the potential for significant life-cycle cost reductions in computer software development and maintenance activities. The Ada programming language itself is standardized, trademarked, and controlled via formal validation procedures. Though compilers are not yet production-ready as most would desire, the technology for constructing them is sufficiently well known and understood that time and money should suffice to correct current deficiencies. The APSE and ARTE are, on the other hand, significantly newer issues within most software development and maintenance efforts. Currently, APSE and ARTE are highly dependent on differing implementer concepts, strategies, and market objectives. Complex and sophisticated mission-critical computing systems require the use of a complete Ada-based capability, not just the programming language itself; yet the range of APSE and ARTE features which must actually be utilized can vary significantly from one system to another. As a consequence, the need to understand, objectively evaluate, and select differing APSE and ARTE capabilities and features is critical to the effective use of Ada and the life-cycle efficiencies it is intended to promote. It is the selection, collection, and understanding of APSE and ARTE which provide the deeper challenges of using Ada for real-life mission-critical computing systems. Some of the current issues which must be clarified, often on a case-by-case basis, in order to successfully realize the full capabilities of Ada are discussed.

  2. Choosing a software design method for real-time Ada applications: JSD process inversion as a means to tailor a design specification to the performance requirements and target machine

    NASA Technical Reports Server (NTRS)

    Withey, James V.

    1986-01-01

    The validity of real-time software is determined by its ability to execute on a computer within the time constraints of the physical system it is modeling. In many applications the time constraints are so critical that the details of process scheduling are elevated to the requirements analysis phase of the software development cycle. It is not uncommon to find specifications for a real-time cyclic executive program included or assumed in such requirements. It was found that preliminary designs structured around this implementation obscure the data flow of the real-world system being modeled, and that it is consequently difficult and costly to maintain, update, and reuse the resulting software. A cyclic executive is a software component that schedules and implicitly synchronizes the real-time software through periodic and repetitive subroutine calls. Therefore a design method is sought that allows the deferral of process scheduling to the later stages of design. The appropriate scheduling paradigm must be chosen given the performance constraints, the target environment, and the software's lifecycle. The concept of process inversion is explored with respect to the cyclic executive.

  3. A Framework for Performing V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  4. Cost Model Comparison: A Study of Internally and Commercially Developed Cost Models in Use by NASA

    NASA Technical Reports Server (NTRS)

    Gupta, Garima

    2011-01-01

    NASA makes use of numerous cost models to accurately estimate the cost of various components of a mission - hardware, software, mission/ground operations - during the different stages of a mission's lifecycle. The purpose of this project was to survey these models and determine in which respects they are similar and in which they are different. The initial survey included a study of the cost drivers for each model, the form of each model (linear/exponential/other CER, range/point output, capable of risk/sensitivity analysis), and for what types of missions and for what phases of a mission lifecycle each model is capable of estimating cost. The models taken into consideration consisted of both those that were developed by NASA and those that were commercially developed: GSECT, NAFCOM, SCAT, QuickCost, PRICE, and SEER. Once the initial survey was completed, the next step in the project was to compare the cost models' capabilities in terms of Work Breakdown Structure (WBS) elements. This final comparison was then portrayed in a visual manner with Venn diagrams. All of the materials produced in the process of this study were then posted on the Ground Segment Team (GST) Wiki.

  5. USER'S GUIDE FOR THE MUNICIPAL SOLID WASTE LIFE-CYCLE DATABASE

    EPA Science Inventory

    The report describes how to use the municipal solid waste (MSW) life cycle database, a software application with Microsoft Access interfaces, that provides environmental data for energy production, materials production, and MSW management activities and equipment. The basic datab...

  6. Sizing and Lifecycle Cost Analysis of an Ares V Composite Interstage

    NASA Technical Reports Server (NTRS)

    Mann, Troy; Smeltzer, Stan; Grenoble, Ray; Mason, Brian; Rosario, Sev; Fairbairn, Bob

    2012-01-01

    The Interstage Element of the Ares V launch vehicle was sized using a commercially available structural sizing software tool. Two different concepts were considered: a metallic design and a composite design. Both concepts were sized using similar levels of analysis fidelity and included the influence of design details on each concept. Additionally, the impact of the different manufacturing techniques and failure mechanisms for composite and metallic construction was considered. Significant detail was included in the analysis models of each concept, including penetrations for human access and joint connections, as well as secondary loading effects. The designs and results of the analysis were used to determine lifecycle cost estimates for the two Interstage designs, based on industry-provided cost data for similar launch vehicle components. The results indicated that significant mass and cost savings are attainable for the chosen composite concept as compared with a metallic option.

  7. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering-in-the-large across the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  8. The U.S./IAEA Workshop on Software Sustainability for Safeguards Instrumentation: Report to the NNSA DOE Office of International Nuclear Safeguards (NA-241)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pepper, Susan E.; Pickett, Chris A.; Queirolo, Al

    The U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Next Generation Safeguards Initiative (NGSI) and the International Atomic Energy Agency (IAEA) convened a workshop on Software Sustainability for Safeguards Instrumentation in Vienna, Austria, May 6-8, 2014. Safeguards instrumentation software must be sustained in a changing environment to ensure existing instruments can continue to perform as designed, with improved security. The approaches to the development and maintenance of instrument software used in the past may not be the best model for the future and, therefore, the organizers' goal was to investigate these past approaches and to determine an optimal path forward. The purpose of this report is to provide input for the DOE NNSA Office of International Nuclear Safeguards (NA-241) and other stakeholders that can be utilized when making decisions related to the development and maintenance of software used in the implementation of international nuclear safeguards. For example, this guidance can be used when determining whether to fund the development, upgrade, or replacement of a particular software product. The report identifies the challenges related to sustaining software and makes recommendations for addressing these challenges, supported by summaries and detailed notes from the workshop discussions. In addition, the authors provide a set of recommendations for institutionalizing software sustainability practices in the safeguards community. The term "software sustainability" was defined for this workshop as ensuring that safeguards instrument software and algorithm functionality can be maintained efficiently throughout the instrument lifecycle, without interruption, and providing the ability to continue to improve that software as needs arise.

  9. NOSC Program Managers Handbook. Revision 1

    DTIC Science & Technology

    1988-02-01

    cost. The effects of application of life-cycle cost analysis through the planning and RDT&E phases of a program, and the "design to cost" concept on... is the plan for assuring the quality of the design, design documentation, and fabricated/assembled hardware and associated computer software. 13.5.3.2... listings and printouts, which document the requirements, design, or details of computer software; explain the capabilities and limitations of the

  10. Extensibility Experiments with the Software Life-Cycle Support Environment

    DTIC Science & Technology

    1991-11-01

    APRICOT) and Bit-Oriented Message Definer (BMD); and three from the Ada Software Repository (ASR) at White Sands - the NASA/Goddard Space Flight Center... Graphical Kernel System (GKS). c. AMS - The Automated Measurement System tool supports the definition, collection, and reporting of quality metric... 1. Ada Primitive Order Compilation Order Tool (APRICOT) 2. Bit-Oriented Message Definer (BMD) 3. LGEN: A Language Generator Tool 4. File Checker 5

  11. The Core Flight System (cFS) Community: Providing Low Cost Solutions for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    McComas, David; Wilmot, Jonathan; Cudmore, Alan

    2016-01-01

    In February 2015 the NASA Goddard Space Flight Center (GSFC) completed the open source release of the entire Core Flight Software (cFS) suite. After the open source release a multi-NASA center Configuration Control Board (CCB) was established that has managed multiple cFS product releases. The cFS was developed and is being maintained in compliance with the NASA Class B software development process requirements and the open source release includes all Class B artifacts. The cFS is currently running on three operational science spacecraft and is being used on multiple spacecraft and instrument development efforts. While the cFS itself is a viable flight software (FSW) solution, we have discovered that the cFS community is a continuous source of innovation and growth that provides products and tools that serve the entire FSW lifecycle and future mission needs. This paper summarizes the current state of the cFS community, the key FSW technologies being pursued, the development/verification tools and opportunities for the small satellite community to become engaged. The cFS is a proven high quality and cost-effective solution for small satellites with constrained budgets.

  12. Software Defined Cyberinfrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
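
    The if-trigger-then-action notation lends itself to a very small rule engine. The Python sketch below is an illustration of the idea only; the rule schema and action names are assumptions, not the notation used in the paper:

      import fnmatch

      # IFTA rules: when a file matching 'pattern' is created, run the actions.
      rules = [
          {"trigger": "created", "pattern": "*.h5",  "actions": ["index", "replicate"]},
          {"trigger": "created", "pattern": "*.csv", "actions": ["index"]},
      ]

      actions = {  # hypothetical action implementations
          "index":     lambda path: print(f"indexing {path}"),
          "replicate": lambda path: print(f"copying {path} to archive"),
      }

      def on_event(event, path):
          """Dispatch a storage-system event against the deployed rules."""
          for rule in rules:
              if rule["trigger"] == event and fnmatch.fnmatch(path, rule["pattern"]):
                  for name in rule["actions"]:
                      actions[name](path)

      on_event("created", "run42/detector.h5")  # fires both actions
      on_event("created", "notes.txt")          # no rule fires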

  13. Concepts associated with a unified life cycle analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Peffers, Melissa S.; Tolle, Duane A.

    There is a risk associated with most things in the world, and all things have a life cycle unto themselves, even brownfields. Many components can be described by a "cycle of life." For example, five such components are life-form, chemical, process, activity, and idea, although many more may exist. Brownfields may touch upon several of these life cycles. Each life cycle can be represented as independent software; therefore, a software technology structure is being formulated to allow for the seamless linkage of software products representing various life-cycle aspects. Because classes of these life cycles tend to be independent of each other, the current research programs and efforts do not have to be revamped; therefore, this unified life-cycle paradigm builds upon current technology and is backward compatible while embracing future technology. Only when two of these life cycles coincide and one impacts the other is there connectivity and a transfer of information at the interface. The current framework approaches (e.g., FRAMES, 3MRA, etc.) have a design that is amenable to capturing (1) many of these underlying philosophical concepts to assure backward compatibility of diverse independent assessment frameworks and (2) linkage communication to help transfer the needed information at the points of intersection. The key effort will be to identify (1) linkage points (i.e., portals) between life cycles, (2) the type and form of data passing between life cycles, and (3) conditions when life cycles interact and communicate. This paper discusses design aspects associated with a unified life-cycle analysis, which can support not only brownfields but also other types of assessments.

  14. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    Object diagrams were used to design a 5000-statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000-statement Ada system and a personal computer based system that will be written in Modula-2. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high-level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.

  15. NETMARK

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Koga, Dennis (Technical Monitor)

    2002-01-01

    This presentation discusses NASA's proposed NETMARK knowledge management tool, which aims 'to control and interoperate with every block in a document, email, spreadsheet, power point, database, etc. across the lifecycle'. Topics covered include: system software requirements and hardware requirements, seamless information systems, computer architecture issues, and potential benefits to NETMARK users.

  16. Support for comprehensive reuse

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Rombach, H. D.

    1991-01-01

    Reuse of products, processes, and other knowledge will be the key to enabling the software industry to achieve the dramatic improvements in productivity and quality required to satisfy anticipated growing demands. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows comprehensive reuse of all kinds of software-related experience could provide the means of achieving the desired order-of-magnitude improvements. A comprehensive framework of models, model-based characterization schemes, and support mechanisms for better understanding, evaluating, planning, and supporting all aspects of reuse is introduced.

  17. CRM Meets the Campus

    ERIC Educational Resources Information Center

    Villano, Matt

    2007-01-01

    In the corporate world, the notion of customer relationship management (CRM) is nothing new. That particular technology sector is now jam-packed with software that enables organizations to monitor and manage every interaction with a customer, from the very first experience on, throughout the lifecycle of the relationship. That relationship spans…

  18. A Cloud-based, Open-Source, Command-and-Control Software Paradigm for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Melton, R.; Thomas, J.

    With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging, alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower costs to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture which provides several unique capabilities to move the current legacy SSA software paradigm to an open-source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.

  19. Examination of the Open Market Corridor

    DTIC Science & Technology

    2003-12-01

    [Report table-of-contents fragment: D. Benefits of the Purchase Card Program (1. List of Benefits; 2. Additional Benefits and How OMC Can Increase the Benefits); E. Weaknesses of...] ... software licenses and support services. Estimated life-cycle costs for FY 1995 through FY 2005 are $3.7 billion. Operational benefits from SPS are

  20. Modeling and Composing Scenario-Based Requirements with Aspects

    NASA Technical Reports Server (NTRS)

    Araujo, Joao; Whittle, Jon; Ki, Dae-Kyoo

    2004-01-01

    There has been significant recent interest, within the Aspect-Oriented Software Development (AOSD) community, in representing crosscutting concerns at various stages of the software lifecycle. However, most of these efforts have concentrated on the design and implementation phases. In this paper we focus on representing aspects during use case modeling, in particular on scenario-based requirements, and show how to compose aspectual and non-aspectual scenarios so that they can be simulated as a whole. Non-aspectual scenarios are modeled as UML sequence diagrams. Aspectual scenarios are modeled as Interaction Pattern Specifications (IPS). In order to simulate them, the scenarios are transformed into a set of executable state machines using an existing state machine synthesis algorithm. Previous work composed aspectual and non-aspectual scenarios at the sequence diagram level. In this paper, the composition is done at the state machine level.

  1. Towards improving software security by using simulation to inform requirements and conceptual design

    DOE PAGES

    Nutaro, James J.; Allgood, Glenn O.; Kuruganti, Teja

    2015-06-17

    We illustrate the use of modeling and simulation early in the system life-cycle to improve security and reduce costs. The models that we develop for this illustration are inspired by problems in reliability analysis and supervisory control, for which similar models are used to quantify failure probabilities and rates. In the context of security, we propose that models of this general type can be used to understand trades between risk and cost while writing system requirements and during conceptual design, and thereby significantly reduce the need for expensive security corrections after a system enters operation.
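
    A crude Monte Carlo sketch conveys the kind of risk-versus-cost trade the authors have in mind; the model, rates, and costs below are invented for illustration and are not the paper's models:

      import random

      def p_compromise(detect_rate, attack_rate, horizon_yrs, trials=100_000):
          """An attack succeeds if it occurs before detection and within the
          mission horizon; both are modeled as exponential arrivals (a toy model)."""
          hits = 0
          for _ in range(trials):
              t_attack = random.expovariate(attack_rate)
              t_detect = random.expovariate(detect_rate)
              if t_attack < horizon_yrs and t_attack < t_detect:
                  hits += 1
          return hits / trials

      random.seed(1)
      for name, detect_rate, cost_k in [("basic monitoring",    2.0,  50),
                                        ("enhanced monitoring", 8.0, 220)]:
          p = p_compromise(detect_rate, attack_rate=0.5, horizon_yrs=5)
          print(f"{name:20s} cost=${cost_k}K  P(compromise in 5 yr) ~ {p:.3f}")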

  2. Methodology for cloud-based design of robots

    NASA Astrophysics Data System (ADS)

    Ogorodnikova, O. M.; Vaganov, K. A.; Putimtsev, I. D.

    2017-09-01

    This paper presents results of the cloud-based design of a robot arm by a group of students. A methodology for cloud-based design was developed and used to initiate an interdisciplinary project for the research and development of a specific manipulator. The project data files were hosted by the Ural Federal University data center. The 3D (three-dimensional) model of the robot arm was created using Siemens PLM (Product Lifecycle Management) software and structured as a complex mechatronics product by means of the Siemens Teamcenter thin client; all processes were performed in the cloud. The robot arm was designed to load blanks of up to 1 kg into the workspace of the milling machine in support of student research.

  3. Creating and Testing Simulation Software

    NASA Technical Reports Server (NTRS)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process of testing and fixing components of the software. The paper covers techniques for testing code and the benefits of using one style of testing over another. It also discusses the overall software design and development lifecycle, and how code testing plays an integral role in it. Code is notorious for needing to be debugged due to coding errors or faulty program design. Writing tests, either before or during program creation, that cover all aspects of the code provides a relatively easy way to locate and fix errors, which will in turn decrease the need to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring information from the models to the model Programmable Logic Controllers (PLCs), basic computers used for very simple tasks.
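
    As an example of the write-tests-early style the paper describes, the following self-contained unit test exercises a hypothetical value-scaling routine of the sort an interface layer like PLCIF performs (the function and its range behavior are assumptions for illustration, not the actual PLCIF code):

      import unittest

      def pack_register(value, lo, hi):
          """Scale a model value into a 16-bit register; reject out-of-range input."""
          if not lo <= value <= hi:
              raise ValueError("out of range")
          return round((value - lo) / (hi - lo) * 0xFFFF)

      class TestPackRegister(unittest.TestCase):
          def test_endpoints(self):
              self.assertEqual(pack_register(0.0, 0.0, 10.0), 0)
              self.assertEqual(pack_register(10.0, 0.0, 10.0), 0xFFFF)

          def test_midpoint(self):
              self.assertEqual(pack_register(5.0, 0.0, 10.0), 0x8000)

          def test_rejects_out_of_range(self):
              with self.assertRaises(ValueError):
                  pack_register(-0.1, 0.0, 10.0)

      if __name__ == "__main__":
          unittest.main()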

  4. Solving the Software Legacy Problem with RISA

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Gabriel, C.

    2012-09-01

    Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment lifetime. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple the data processing software and infrastructure life-cycles, using Java applications and web-service wrappers to existing software. This architecture employs embedded SAS in virtual machines, assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission, launched in 1983, using the generic RISA approach.

  5. INITIATE: An Intelligent Adaptive Alert Environment.

    PubMed

    Jafarpour, Borna; Abidi, Samina Raza; Ahmad, Ahmad Marwan; Abidi, Syed Sibte Raza

    2015-01-01

    Exposure to a large volume of alerts generated by medical Alert Generating Systems (AGS), such as drug-drug interaction software or clinical decision support systems, overwhelms users and causes alert fatigue. Effects of alert fatigue include ignoring crucial alerts and longer response times. A common approach to avoiding alert fatigue is to devise mechanisms in the AGS to stop them from generating alerts that are deemed irrelevant. In this paper, we present a novel framework called INITIATE: an INtellIgent adapTIve AlerT Environment, which avoids alert fatigue by managing the alerts generated by one or more AGS. We have identified and categorized the lifecycles of different alerts and have developed alert management logic for each stage of an alert's lifecycle. Our framework incorporates an ontology that represents the alert management strategy and an alert management engine that executes this strategy. The framework offers the following features: (1) adaptability based on users' feedback; (2) personalization and aggregation of messages; and (3) connection to Electronic Medical Records through an HL7 Clinical Document Architecture parser.
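
    The adaptive routing the framework describes can be caricatured in a few lines of Python; the thresholds and severities below are assumptions for illustration, not INITIATE's ontology or rules:

      from collections import Counter

      class AlertManager:
          """Alerts a user repeatedly dismisses are demoted to a passive log
          entry, while critical alerts always interrupt."""
          def __init__(self, demote_after=3):
              self.dismissals = Counter()
              self.demote_after = demote_after

          def record_feedback(self, alert_id, dismissed):
              if dismissed:
                  self.dismissals[alert_id] += 1

          def route(self, alert_id, severity):
              if severity == "critical":
                  return "interrupt"  # never suppressed
              if self.dismissals[alert_id] >= self.demote_after:
                  return "log-only"   # adapted to user feedback
              return "notify"

      mgr = AlertManager()
      for _ in range(3):
          mgr.record_feedback("ddi-aspirin-ibuprofen", dismissed=True)
      print(mgr.route("ddi-aspirin-ibuprofen", "moderate"))  # -> log-only
      print(mgr.route("ddi-warfarin-aspirin", "critical"))   # -> interrupt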

  6. Use of software engineering techniques in the design of the ALEPH data acquisition system

    NASA Astrophysics Data System (ADS)

    Charity, T.; McClatchey, R.; Harvey, J.

    1987-08-01

    The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.
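
    A state transition specification of the kind mentioned for run control can be made executable by tabulating the legal transitions, so that illegal commands are rejected rather than silently corrupting the acquisition state. The states and commands in this Python sketch are illustrative assumptions, not the ALEPH design:

      TRANSITIONS = {
          ("idle",       "configure"): "configured",
          ("configured", "start"):     "running",
          ("running",    "pause"):     "paused",
          ("paused",     "resume"):    "running",
          ("running",    "stop"):      "configured",
          ("paused",     "stop"):      "configured",
      }

      class RunControl:
          def __init__(self):
              self.state = "idle"

          def command(self, cmd):
              nxt = TRANSITIONS.get((self.state, cmd))
              if nxt is None:
                  raise RuntimeError(f"'{cmd}' not allowed in state '{self.state}'")
              self.state = nxt
              return nxt

      rc = RunControl()
      for c in ("configure", "start", "pause", "resume", "stop"):
          print(c, "->", rc.command(c))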

  7. Requirements UML Tool (RUT) Expanded for Extreme Programming (CI02)

    NASA Technical Reports Server (NTRS)

    McCoy, James R.

    2003-01-01

    A procedure for capturing and managing system requirements that incorporates XP user stories. Because the costs associated with identifying problems in requirements increase dramatically over the lifecycle of a project, a method for identifying sources of software risk in user stories is urgently needed. This initiative aims to determine a set of guidelines for user stories that will result in high-quality requirements. To further this initiative, a tool is needed to analyze user stories: one that can assess the quality of individual user stories, detect sources of software risk, produce software metrics, and identify areas in user stories that can be improved.
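
    A tool of this kind boils down to running quality heuristics over each story. The Python sketch below shows the shape of such checks; the rules themselves are invented assumptions, not RUT's actual rule set:

      import re

      VAGUE = re.compile(r"\b(fast|easy|user-friendly|etc\.?|appropriate|flexible)\b", re.I)
      FORM = re.compile(r"^As an? .+, I want .+ so that .+", re.I)

      def assess(story):
          """Return a list of findings for one user story."""
          findings = []
          if not FORM.match(story):
              findings.append("missing role/goal/benefit form")
          if VAGUE.search(story):
              findings.append("contains untestable vague term")
          if len(story.split()) > 40:
              findings.append("too long; consider splitting")
          return findings or ["ok"]

      print(assess("As an operator, I want the display to refresh fast so that I can react."))
      # -> ['contains untestable vague term']
      print(assess("Make it user-friendly"))
      # -> ['missing role/goal/benefit form', 'contains untestable vague term']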

  8. A Case Study in CAD Design Automation

    ERIC Educational Resources Information Center

    Lowe, Andrew G.; Hartman, Nathan W.

    2011-01-01

    Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…

  9. Analysis of the Lifecycle of Mechanical Engineering Products

    NASA Astrophysics Data System (ADS)

    Gubaydulina, R. H.; Gruby, S. V.; Davlatov, G. D.

    2016-08-01

    Principal phases of the lifecycle of mechanical engineering products are analyzed in the paper. The authors have developed methods and procedures to improve the designing, manufacturing, operating, and recycling of a machine. It is shown that the economic lifecycle of the product, calculated by minimizing the sum of consumer and producer costs, is the basis for appropriate organization of mechanical engineering production. The machine design and its manufacturing technology are related through maximization of the possible company profit. Products are to be recycled by their producer; recycling should be considered a feedback phase, necessary to make the whole lifecycle of the product a constantly functioning self-organizing system. The principles outlined in this paper can be used as fundamentals for developing an automated PLM system.
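
    The "minimal sum of consumer and producer costs" criterion can be written out directly; the cost curves in this Python sketch are invented placeholders, not the paper's data:

      # Producer cost (design and tooling) amortizes over the service life T;
      # consumer cost (maintenance) grows as the machine ages.
      def producer_cost(T):
          return 120_000.0 / T

      def consumer_cost(T):
          return 4_000.0 + 900.0 * T

      # Economic lifecycle: the T minimizing total annual cost.
      best = min(range(1, 21), key=lambda T: producer_cost(T) + consumer_cost(T))
      print(f"economic lifecycle ~ {best} years, annual cost ~ "
            f"${producer_cost(best) + consumer_cost(best):,.0f}")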

  10. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal conditions, fault tolerance, and response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms, a dedicated testbed was developed for full Vehicle Management End-to-End Testing (VMET). To address fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software, compounded by potential human errors throughout the development and regression testing lifecycle. 
Risk reduction is addressed by the M&FM group, and in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations, by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure detection, and confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without inherent hindrances such as meeting FSW processor scheduling constraints imposed by their target platform (the ARINC 653 partitioned operating system), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. These details are further presented in Section IV, followed by Section V, which presents integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.
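
    To make the testing idea concrete, here is a deliberately simplified sketch (not SLS flight code; the sensor, thresholds, and states are invented) of a fault-detection state machine of the kind that could be exercised against nominal and off-nominal test cases in a VMET-style testbed:

        # Toy fault-management state machine: detect a persistently high
        # pressure reading and command a response. Purely illustrative.
        class PressureMonitor:
            def __init__(self, redline=850.0, persistence=3):
                self.redline = redline          # invented engineering limit
                self.persistence = persistence  # consecutive exceedances required
                self.count = 0
                self.state = "NOMINAL"

            def step(self, reading):
                if self.state == "NOMINAL":
                    self.count = self.count + 1 if reading > self.redline else 0
                    if self.count >= self.persistence:
                        self.state = "FAULT_DETECTED"
                elif self.state == "FAULT_DETECTED":
                    self.state = "SAFING"       # single-step response for the sketch
                return self.state

        def run_case(readings):
            monitor = PressureMonitor()
            return [monitor.step(r) for r in readings]

        print(run_case([800, 900, 900, 900, 0]))   # off-nominal: fault then safing
        print(run_case([800, 900, 800, 900, 800])) # nominal: transient, no trip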

  11. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
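
    The abstract's core vocabulary (states, models that describe how states evolve, and goals that constrain states) can be suggested with a few dataclasses; this is a hypothetical sketch of the concepts, not the tool's actual schema:

        from dataclasses import dataclass

        @dataclass
        class StateVariable:
            name: str            # e.g. "battery_charge" (invented example)
            value: float

        @dataclass
        class Model:
            """Describes how one state evolves per time step (illustrative)."""
            target: str
            rate: float

            def advance(self, sv: StateVariable) -> None:
                if sv.name == self.target:
                    sv.value += self.rate

        @dataclass
        class Goal:
            """A goal is a constraint on a state over time."""
            target: str
            minimum: float

            def satisfied(self, sv: StateVariable) -> bool:
                return sv.name != self.target or sv.value >= self.minimum

        charge = StateVariable("battery_charge", 80.0)
        drain = Model("battery_charge", rate=-2.5)
        keep_alive = Goal("battery_charge", minimum=70.0)
        for hour in range(5):
            drain.advance(charge)
            print(hour, round(charge.value, 1), keep_alive.satisfied(charge))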

  12. LIFE-CYCLE IMPACT ASSESSMENT DEMONSTRATION FOR THE GBU-24

    EPA Science Inventory

    The primary goal of this project was to develop and demonstrate a life-cycle impact assessment (LCIA) approach using existing life-cycle inventory (LCI) data on one of the propellants, energetics, and pyrotechnic (PEP) materials of interest to the U.S. Department of Defense (DoD)...

  13. A Holistic Approach to Systems Development

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2008-01-01

    Introduces a holistic and iterative design process: a continuous process that can be loosely divided into four stages, with more effort spent early in the design. The process is human-centered and multidisciplinary, emphasizes life-cycle cost, and makes extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important; many disciplines should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.

  14. On architecting and composing engineering information services to enable smart manufacturing

    PubMed Central

    Ivezic, Nenad; Srinivasan, Vijay

    2016-01-01

    Engineering information systems play an important role in the current era of digitization of manufacturing, which is a key component to enable smart manufacturing. Traditionally, these engineering information systems spanned the lifecycle of a product by providing interoperability of software subsystems through a combination of open and proprietary exchange of data. But research and development efforts are underway to replace this paradigm with engineering information services that can be composed dynamically to meet changing needs in the operation of smart manufacturing systems. This paper describes the opportunities and challenges in architecting such engineering information services and composing them to enable smarter manufacturing. PMID:27840595

  15. Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Model View Definition

    DTIC Science & Technology

    2013-06-01

    Abstract fragments only: the report concerns building information models (BIM) at the coordinated design stage of building construction, drawing on a standard for exchanging Building Information Modeling (BIM) data that defines hundreds of classes for common use in software. Keywords: specifications, Construction Operations Building information exchange (COBie), Building Information Modeling (BIM).

  16. The US Army Corps of Engineers Roadmap for Life-Cycle Building Information Modeling (BIM)

    DTIC Science & Technology

    2012-11-01

    Abstract fragments only: Building Information Modeling (BIM) technology has rapidly gained acceptance throughout the planning, architecture, and engineering communities; the roadmap draws on the Industry Foundation Class (IFC) definitions to create vendor-neutral data exchanges for use in BIM software tools.

  17. LIFE-CYCLE IMPACT ASSESSMENT DEMONSTRATION FOR THE GBU-24

    EPA Science Inventory

    The primary goal of this project was to develop and demonstrate a life-cycle impact assessment (LCIA) approach using existing life-cycle inventory (LCI) data on one of the propellants, energetics, and pyrotechnic (PEP) materials of interest to the U.S. Department of Defense (DoD...

  18. Life-cycle energy and emissions inventories for motorcycles, diesel automobiles, school buses, electric buses, Chicago rail, and New York City rail

    DOT National Transportation Integrated Search

    2009-05-01

    The development of life-cycle energy and emissions factors for passenger transportation modes is critical for understanding the total environmental costs of travel. Previous life-cycle studies have focused on the automobile given its dominating s...

  19. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
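
    As a hedged illustration of the kind of model the study reconfirms: the SEL baseline estimates effort from size adjusted for reuse, commonly counting reused code at 20 percent of its size. The coefficients below are illustrative placeholders, since the report's updated parameters are not reproduced here.

        def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.20):
            """SEL-style size adjusted for reuse (reused code counted at 20%)."""
            return new_sloc + reuse_weight * reused_sloc

        def effort_staff_months(adj_sloc, a=1.5, b=0.98):
            """Effort = a * (KSLOC ** b); a and b are illustrative placeholders."""
            return a * (adj_sloc / 1000.0) ** b

        size = adjusted_size(new_sloc=40_000, reused_sloc=25_000)
        print(f"adjusted size: {size:,.0f} SLOC")
        print(f"estimated effort: {effort_staff_months(size):.1f} staff-months")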

  20. NA-42 TI Shared Software Component Library FY2011 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.

    The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long-term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy-to-use web-based interface. As software is submitted to the repository, the component registration process captures metadata and provides version control for compiled libraries, documentation, and source code. This metadata is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed from two different geographic locations, and continues to be used. The knowledge gained from the collaboration and hosting of this repository, in conjunction with PNNL software development and systems engineering capabilities, was used in the selection of a package for the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI document their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams, and revised versions are distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS-specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.

  1. Building Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems.

    DTIC Science & Technology

    1991-05-01

    Abstract fragments only: Building Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems, by Edgar S. Neely, Robert D. Neathammer, James R. Stirn, and Robert P. Winkler. The data systems have been developed to assist planners in preparing DD Form 1391 documentation, designers in life-cycle cost component selection, and maintainers...

  2. Clinician user involvement in the real world: Designing an electronic tool to improve interprofessional communication and collaboration in a hospital setting.

    PubMed

    Tang, Terence; Lim, Morgan E; Mansfield, Elizabeth; McLachlan, Alexander; Quan, Sherman D

    2018-02-01

    User involvement is vital to the success of health information technology implementation. However, involving clinician users effectively and meaningfully in complex healthcare organizations remains challenging. The objective of this paper is to share our real-world experience of applying a variety of user involvement methods in the design and implementation of a clinical communication and collaboration platform aimed at facilitating care of complex hospitalized patients by an interprofessional team of clinicians. We designed and implemented an electronic clinical communication and collaboration platform in a large community teaching hospital. The design team consisted of both technical and healthcare professionals. Agile software development methodology was used to facilitate rapid iterative design and user input. We involved clinician users at all stages of the development lifecycle using a variety of user-centered, user co-design, and participatory design methods. Thirty-six software releases were delivered over 24 months. User involvement resulted in improvements to the user interface design, identification of software defects, creation of new modules that facilitated workflow, and identification of necessary changes to the scope of the project early on. The variety of user involvement methods proved complementary and benefited the design and implementation of a complex health IT solution. Combining these methods with agile software development methodology can turn designs into a functioning clinical system that supports iterative improvement. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Data Flow in Relation to Life-Cycle Costing of Construction Projects in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Biolek, Vojtěch; Hanák, Tomáš; Marović, Ivan

    2017-10-01

    Life-cycle costing is an important part of every construction project, as it makes it possible to take into consideration future costs relating to the operation and demolition phase of a built structure. In this way, investors can optimize the project design to minimize the total project costs. Even though there have already been some attempts to implement BIM software in the Czech Republic, the current state of affairs does not support automated data flow between the bill of costs and applications that support building facility management. The main aim of this study is to critically evaluate the current situation and outline a future framework that should allow for the use of the data contained in the bill of costs to manage building operating costs.

  4. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practise and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages over their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study in digitization practise. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp could, at this stage, only be used to model basic visual representations, and is ineffective in documenting additional critical data on individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  5. GCE Data Toolbox for MATLAB - a software framework for automating environmental data processing, quality control and documentation

    NASA Astrophysics Data System (ADS)

    Sheldon, W.; Chamblee, J.; Cary, R. H.

    2013-12-01

    Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
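
    To suggest how metadata-driven quality control of the kind described above works (sketched in Python rather than the toolbox's MATLAB, with an invented rule syntax and limits), each column carries QC rules from a metadata template, and evaluating them attaches qualifier flags to the data rather than deleting values:

        # Illustrative metadata-driven QC: rules live with the column
        # metadata, and evaluation attaches qualifier flags.
        TEMPLATE = {
            "water_temp_C": {"min": -2.0, "max": 40.0},   # invented limits
            "salinity_psu": {"min": 0.0, "max": 42.0},
        }

        def apply_qc(column_name, values, template=TEMPLATE):
            rules = template[column_name]
            flags = []
            for v in values:
                if v is None:
                    flags.append("M")                     # missing
                elif v < rules["min"] or v > rules["max"]:
                    flags.append("Q")                     # questionable
                else:
                    flags.append("")                      # passes
            return list(zip(values, flags))

        print(apply_qc("water_temp_C", [12.3, 55.0, None, 18.4]))
        # -> [(12.3, ''), (55.0, 'Q'), (None, 'M'), (18.4, '')]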

  6. Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Rombach, H. D.

    1988-01-01

    Reuse of products, processes and knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.

  7. The circle of life: A cross-cultural comparison of children's attribution of life-cycle traits.

    PubMed

    Burdett, Emily R R; Barrett, Justin L

    2016-06-01

    Do children attribute mortality and other life-cycle traits to all minded beings? The present study examined whether culture influences young children's ability to conceptualize and differentiate human beings from supernatural beings (such as God) in terms of life-cycle traits. Three-to-5-year-old Israeli and British children were questioned whether their mother, a friend, and God would be subject to various life-cycle processes: Birth, death, ageing, existence/longevity, and parentage. Children did not anthropomorphize but differentiated among human and supernatural beings, attributing life-cycle traits to humans, but not to God. Although 3-year-olds differentiated significantly among agents, 5-year-olds attributed correct life-cycle traits more consistently than younger children. The results also indicated some cross-cultural variation in these attributions. Implications for biological conceptual development are discussed. © 2015 The British Psychological Society.

  8. Environmental sustainability assessment of hydropower plant in Europe using life cycle assessment

    NASA Astrophysics Data System (ADS)

    Mahmud, M. A. P.; Huda, N.; Farjana, S. H.; Lang, C.

    2018-05-01

    Hydropower is the oldest and most common type of renewable source of electricity available on this planet. The end-of-life processes of hydropower plants have significant environmental impacts, which need to be identified and minimized to ensure environmentally friendly power generation. However, the environmental impacts and health hazards of hydropower processing routes remain little explored despite a significant quantity of production worldwide. This paper highlights the life-cycle environmental impact assessment of reservoir-based hydropower generation systems located in the alpine and non-alpine regions of Europe, addressing their ecological effects by the ReCiPe and CML methods under several impact-assessment categories such as human health, ecosystems, global warming potential, and acidification potential. The Australasian life-cycle inventory database and SimaPro software are utilized to accumulate the life-cycle inventory dataset and to evaluate the impacts. The results reveal that plants in the alpine region offer superior environmental performance for two of the considered categories, global warming and photochemical oxidation, whilst in the other cases the outcomes are almost similar. Results obtained from this study will play an important role in promoting sustainable generation of hydropower, and thus environmentally friendly energy production.

  9. Making Use of a Decade of Widely Varying Historical Data: SARP Project "Full Life-cycle Defect Management"

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Bechtel, Andre; Feldmann, Raimund L.; Regardie, Myrna; Seaman, Carolyn

    2008-01-01

    This viewgraph presentation addresses the question of inspection and verification and validation (V&V) effectiveness of developing computer systems. A specific question is the relation between V&V effectiveness in the early lifecycle of development and the later testing of the developed system.

  10. A quantitative risk model for early lifecycle decision making

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Cornford, S. L.; Dunphy, J.; Hicks, K.

    2002-01-01

    Decisions made in the earliest phases of system development have the most leverage to influence the success of the entire development effort, and yet must be made when information is incomplete and uncertain. We have developed a scalable cost-benefit model to support this critical phase of early-lifecycle decision-making.
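
    The abstract gives no formulas; a minimal sketch of an early-lifecycle cost-benefit risk calculation (structure and numbers invented, loosely in the spirit of quantitative risk-balancing models) might score candidate mitigations by risk reduction per unit cost:

        # Invented example: each risk has a likelihood and an impact;
        # each mitigation reduces some risks' likelihoods at a cost.
        risks = {"sensor_dropout": (0.30, 80.0), "late_delivery": (0.50, 40.0)}
        mitigations = {
            "redundant_sensor": {"cost": 12.0, "reduces": {"sensor_dropout": 0.8}},
            "early_vendor_audit": {"cost": 5.0, "reduces": {"late_delivery": 0.5}},
        }

        def expected_loss(active=()):
            total = 0.0
            for name, (likelihood, impact) in risks.items():
                for m in active:
                    likelihood *= 1.0 - mitigations[m]["reduces"].get(name, 0.0)
                total += likelihood * impact
            return total

        baseline = expected_loss()
        for m in mitigations:
            benefit = baseline - expected_loss((m,))
            cost = mitigations[m]["cost"]
            print(f"{m}: benefit {benefit:.1f} / cost {cost:.1f} "
                  f"= leverage {benefit / cost:.2f}")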

  11. Effectiveness comparison of partially executed t-way test suites generated by existing strategies

    NASA Astrophysics Data System (ADS)

    Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur

    2015-05-01

    Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors, and strict time-to-market deadlines are amongst the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e. one based on t-way parameter interaction, called t-way testing) can be effective in reducing the number of test cases without affecting fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Here, test engineers need to measure the effectiveness of the partially executed test suite in order to assess the risk they have to take. Motivated by this problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies using the tuples coverage method. Test engineers can thus predict the effectiveness of the testing process if only part of the original test cases is executed.
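
    The tuples coverage measurement described above can be sketched directly: enumerate every t-way combination of parameter values, then compute the fraction covered by the executed subset of tests. This is a minimal illustration with invented parameters; the paper's strategies and subject systems are not reproduced here.

        from itertools import combinations, product

        def tuple_coverage(executed_tests, parameter_values, t=2):
            """Fraction of all t-way value tuples covered by the executed tests."""
            names = sorted(parameter_values)
            required = set()
            for cols in combinations(names, t):
                for vals in product(*(parameter_values[c] for c in cols)):
                    required.add((cols, vals))
            covered = {
                (cols, tuple(test[c] for c in cols))
                for test in executed_tests
                for cols in combinations(names, t)
            }
            return len(covered & required) / len(required)

        params = {"os": ["linux", "win"], "db": ["pg", "my"], "ui": ["web", "cli"]}
        executed = [
            {"os": "linux", "db": "pg", "ui": "web"},
            {"os": "win", "db": "my", "ui": "cli"},
        ]
        print(f"pairwise coverage: {tuple_coverage(executed, params):.0%}")  # 50%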

  12. LED street lighting evaluation -- phase II : LED specification and life-cycle cost analysis.

    DOT National Transportation Integrated Search

    2015-01-01

    Phase II of this study focused on developing a draft specification for LED luminaires to be used by IDOT and a life-cycle cost analysis (LCCA) tool for solid-state lighting technologies. The team also researched the latest developments related to...

  13. Streamline Your Project: A Lifecycle Model.

    ERIC Educational Resources Information Center

    Viren, John

    2000-01-01

    Discusses one approach to project organization providing a baseline lifecycle model for multimedia/CBT development. This variation of the standard four-phase model of Analysis, Design, Development, and Implementation includes a Pre-Analysis phase, called Definition, and a Post-Implementation phase, known as Maintenance. Each phase is described.…

  14. Building Petascale Cyberinfrastructure and Science Support for Solar Physics: Approach of the DKIST Data Center

    NASA Astrophysics Data System (ADS)

    Berukoff, Steven; Reardon, Kevin; Hays, Tony; Spiess, DJ; Watson, Fraser

    2015-08-01

    When construction is complete in 2019, the Daniel K. Inouye Solar Telescope will be the most-capable large aperture, high-resolution, multi-instrument solar physics facility in the world. The telescope is designed as a four-meter off-axis Gregorian, with a rotating Coude laboratory designed to simultaneously house and support five first-light imaging and spectropolarimetric instruments. At current design, the facility and its instruments will generate data volumes of 5 PB and produce 10^8 images and 10^7-10^9 metadata elements annually. This data will not only forge new understanding of solar phenomena at high resolution, but enhance participation in solar physics and further grow a small but vibrant international community. The DKIST Data Center is being designed to store, curate, and process this flood of information, while augmenting its value by providing association of science data and metadata to its acquisition and processing provenance. In early Operations, the Data Center will produce, by autonomous, semi-automatic, and manual means, quality-controlled and -assured calibrated data sets, closely linked to facility and instrument performance during the Operations lifecycle. These data sets will be made available to the community openly and freely, and software and algorithms made available through community repositories like Github for further collaboration and improvement. We discuss the current design and approach of the DKIST Data Center, describing the development cycle, early technology analysis and prototyping, and the roadmap ahead. In this budget-conscious era, a key design criterion is elasticity, the ability of the built system to adapt to changing work volumes, types, and the shifting scientific landscape, without undue cost or operational impact. We discuss our deep iterative development approach, the underappreciated challenges of calibrating ground-based solar data, the crucial integration of the Data Center within the larger Operations lifecycle, and how software and hardware support, intelligently deployed, will enable high-caliber solar physics research and community growth for the DKIST's 40-year lifespan.

  15. Sourcing Lifecycle for Software as a Service (SAAS)

    NASA Astrophysics Data System (ADS)

    Santy; Sikkel, K.

    2014-03-01

    In recent years, Software as a Service (SaaS) has changed from a curiosity to a well-known, accepted concept. A key advantage of this model is that, with careful engineering, it is possible to leverage economies of scale to decrease total cost of ownership compared to on-premises solutions. Using the guideline elaborated in this paper, companies interested in implementing SaaS are led through the entire implementation cycle, starting before the company decides to implement SaaS and ending when the company decides to shift from the SaaS model to another model or to another SaaS provider.

  16. Assuring NASA's Safety and Mission Critical Software

    NASA Technical Reports Server (NTRS)

    Deadrick, Wesley

    2015-01-01

    What is IV&V? Independent Verification and Validation (IV&V) is an objective examination of safety and mission critical software processes and products. Independence: 3 Key parameters: Technical Independence; Managerial Independence; Financial Independence. NASA IV&V perspectives: Will the system's software: Do what it is supposed to do?; Not do what it is not supposed to do?; Respond as expected under adverse conditions?. Systems Engineering: Determines if the right system has been built and that it has been built correctly. IV&V Technical Approaches: Aligned with IEEE 1012; Captured in a Catalog of Methods; Spans the full project lifecycle. IV&V Assurance Strategy: The IV&V Project's strategy for providing mission assurance; Assurance Strategy is driven by the specific needs of an individual project; Implemented via an Assurance Design; Communicated via Assurance Statements.

  17. Development of a Methodology for Successful Multigeneration Life-Cycle Testing of the Estuarine Sheepshead Minnow, Cyprinodon variegatus.

    EPA Science Inventory

    Sustainability of wildlife populations exposed to endocrine disrupting chemicals in natural water bodies has sparked sufficient concern that the U.S.EPA is developing methods for multiple generation exposures of fishes. Established testing methods and the short life-cycle of the ...

  18. E-Learning Quality Assurance: A Process-Oriented Lifecycle Model

    ERIC Educational Resources Information Center

    Abdous, M'hammed

    2009-01-01

    Purpose: The purpose of this paper is to propose a process-oriented lifecycle model for ensuring quality in e-learning development and delivery. As a dynamic and iterative process, quality assurance (QA) is intertwined with the e-learning development process. Design/methodology/approach: After reviewing the existing literature, particularly…

  19. HANFORD RIVER PROTECTION PROJECT ENHANCED MISSION PLANNING THROUGH INNOVATIVE TOOLS LIFECYCLE COST MODELING AND AQUEOUS THERMODYNAMIC MODELING - 12134

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PIERSON KL; MEINERT FL

    2012-01-26

    Two notable modeling efforts within the Hanford Tank Waste Operations Simulator (HTWOS) are currently underway to (1) increase the robustness of the underlying chemistry approximations through the development and implementation of an aqueous thermodynamic model, and (2) add enhanced planning capabilities to the HTWOS model through development and incorporation of the lifecycle cost model (LCM). Since even seemingly small changes in apparent waste composition or treatment parameters can result in large changes in quantities of high-level waste (HLW) and low-activity waste (LAW) glass, mission duration, or lifecycle cost, a solubility model that more accurately depicts the phases and concentrations of constituents in tank waste is required. The LCM enables evaluation of the interactions of proposed changes on lifecycle mission costs, which is critical for decision makers.

  20. Risk Management Considerations for Interoperable Acquisition

    DTIC Science & Technology

    2006-08-01

    Abstract fragments only: an Institute of Electrical and Electronics Engineers (IEEE) effort to harmonize the standards for software (IEEE 12207) and system (IEEE 15288) life-cycle processes; the risk management standard (ISO/IEC 16085) is being generalized to apply to the systems level, and the revised, generalized standard will add requirements and guidance for risk management. Referenced documents include ISO/IEC Guide 73: Risk Management—Vocabulary—Guidelines for use in standards [ISO 02].

  1. Life Cycle Assessment of Wall Systems

    NASA Astrophysics Data System (ADS)

    Ramachandran, Sriranjani

    Natural resource depletion and environmental degradation are the stark realities of the times we live in. As awareness about these issues increases globally, industries and businesses are becoming interested in understanding and minimizing the ecological footprints of their activities. Evaluating the environmental impacts of products and processes has become a key issue, and the first step towards addressing and eventually curbing climate change. Additionally, companies are finding it beneficial to go beyond compliance, using pollution prevention strategies and environmental management systems to improve their environmental performance. Life-cycle Assessment (LCA) is an evaluative method to assess the environmental impacts associated with a product's life-cycle from cradle to grave (i.e. from raw material extraction through material processing, manufacturing, distribution, use, repair and maintenance, and finally disposal or recycling). This study focuses on evaluating building envelopes on the basis of their life-cycle analysis. To facilitate this analysis, a small-scale office building, the University Services Building (USB), with a built-up area of 148,101 ft2, situated on the ASU campus in Tempe, Arizona, was studied. The building's exterior envelope is the highlight of this study. The current exterior envelope is of tilt-up concrete construction, a type of construction in which the concrete elements are cast horizontally and tilted up, after they are cured, using cranes, and are braced until other structural elements are secured. This building envelope is compared to five other building envelope systems (concrete block, insulated concrete form, cast-in-place concrete, steel studs, and curtain wall constructions), evaluating them on the basis of least environmental impact. The research methodology involved developing energy models, simulating them, and generating the changes in energy consumption due to the above-mentioned envelope types. Energy consumption data, along with various other details, such as building floor area, areas of walls, columns, beams, etc., and their material types, were imported into the life-cycle assessment software ATHENA Impact Estimator for Buildings. Using this four-step LCA methodology, the results showed that the steel stud envelope performed best, with the least environmental impact compared to the other envelope types. This research methodology can be applied to other building typologies.

  2. The evolutionary ecology of complex lifecycle parasites: linking phenomena with mechanisms

    PubMed Central

    Auld, S KJR; Tinsley, M C

    2015-01-01

    Many parasitic infections, including those of humans, are caused by complex lifecycle parasites (CLPs): parasites that sequentially infect different hosts over the course of their lifecycle. CLPs come from a wide range of taxonomic groups—from single-celled bacteria to multicellular flatworms—yet share many common features in their life histories. Theory tells us when CLPs should be favoured by selection, but more empirical studies are required in order to quantify the costs and benefits of having a complex lifecycle, especially in parasites that facultatively vary their lifecycle complexity. In this article, we identify ecological conditions that favour CLPs over their simple lifecycle counterparts and highlight how a complex lifecycle can alter transmission rate and trade-offs between growth and reproduction. We show that CLPs participate in dynamic host–parasite coevolution, as more mobile hosts can fuel CLP adaptation to less mobile hosts. Then, we argue that a more general understanding of the evolutionary ecology of CLPs is essential for the development of effective frameworks to manage the many diseases they cause. More research is needed identifying the genetics of infection mechanisms used by CLPs, particularly into the role of gene duplication and neofunctionalisation in lifecycle evolution. We propose that testing for signatures of selection in infection genes will reveal much about how and when complex lifecycles evolved, and will help quantify complex patterns of coevolution between CLPs and their various hosts. Finally, we emphasise four key areas where new research approaches will provide fertile opportunities to advance this field. PMID:25227255

  3. Comparing Life-Cycle Costs of ESPCs and Appropriations-Funded Energy Projects: An Update to the 2002 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shonder, John A; Hughes, Patrick; Atkin, Erica

    2006-11-01

    A study was sponsored by FEMP in 2001-2002 to develop methods to compare life-cycle costs of federal energy conservation projects carried out through energy savings performance contracts (ESPCs) and projects that are directly funded by appropriations. The study described in this report follows up on the original work, taking advantage of new pricing data on equipment and on $500 million worth of Super ESPC projects awarded since the end of FY 2001. The methods developed to compare life-cycle costs of ESPCs and directly funded energy projects are based on the following tasks: (1) verify the parity of equipment prices in ESPC vs. directly funded projects; (2) develop a representative energy conservation project; (3) determine representative cycle times for both ESPCs and appropriations-funded projects; (4) model the representative energy project implemented through an ESPC and through appropriations funding; and (5) calculate the life-cycle costs for each project.

  4. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  5. Cradle-to-gate life-cycle assessment of laminated veneer lumber produced in the southeast region of the United States

    Treesearch

    Richard D. Bergman; Sevda Alanya-Rosenbaum

    2017-01-01

    The goal of the present study was to develop life-cycle impact assessment (LCIA) data associated with gate-to-gate laminated veneer lumber (LVL) production in the southeast (SE) region of the U.S. with the ultimate aim of constructing an updated cradle-to-gate mill output life-cycle assessment (LCA). The authors collected primary (survey) mill data from LVL production...

  6. Ontology for Life-Cycle Modeling of Water Distribution Systems: Application of Model View Definition Attributes

    DTIC Science & Technology

    2013-06-01

    Abstract fragments only: ERDC/CERL CR-13-5, Ontology for Life-Cycle Modeling of Water Distribution Systems: Application of Model View Definition Attributes, by Kristine K. Fallon, Robert A. ... The work addresses interior plumbing systems and the information exchange requirements for every participant in the design; the findings were used to develop an...

  7. Embedding X.509 Digital Certificates in Three-Dimensional Models for Authentication, Authorization, and Traceability of Product Data.

    PubMed

    Hedberg, Thomas D; Krima, Sylvere; Camelio, Jaime A

    2017-03-01

    Exchange and reuse of three-dimensional (3D)-product models are hampered by the absence of trust in product-lifecycle-data quality. The root cause of the missing trust is years of "silo" functions (e.g., engineering, manufacturing, quality assurance) using independent and disconnected processes. Those disconnected processes result in data exchanges that do not contain all of the required information for each downstream lifecycle process, which inhibits the reuse of product data and results in duplicate data. The X.509 standard, maintained by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), was first issued in 1988. Although originally intended as the authentication framework for the X.500 series for electronic directory services, the X.509 framework is used in a wide range of implementations outside the originally intended paradigm. These implementations range from encrypting websites to software-code signing, yet X.509 certificate use has not widely penetrated engineering and product realms. Our approach is not trying to provide security mechanisms, but equally as important, our method aims to provide insight into what is happening with product data to support trusting the data. This paper provides a review of the use of X.509 certificates and proposes a solution for embedding X.509 digital certificates in 3D models for authentication, authorization, and traceability of product data. This paper also describes an application within the Aerospace domain. Finally, the paper draws conclusions and provides recommendations for further research into using X.509 certificates in product lifecycle management (PLM) workflows to enable a product lifecycle of trust.
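
    The paper's specific embedding mechanism for 3D formats is not reproduced here; as a generic sketch of the underlying X.509 operations using the widely used Python cryptography package (the file names, RSA keys, and detached-sidecar approach are assumptions for illustration), one can sign the model payload with a private key and later verify it against the certificate's public key:

        # Generic X.509 sign/verify sketch; assumes PEM-encoded RSA key and
        # certificate files exist. The sidecar signature file is an assumed
        # stand-in for embedding the signature inside the 3D model itself.
        from cryptography import x509
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def sign_model(model_path, key_path, sig_path):
            data = open(model_path, "rb").read()
            key = serialization.load_pem_private_key(
                open(key_path, "rb").read(), password=None)
            open(sig_path, "wb").write(
                key.sign(data, padding.PKCS1v15(), hashes.SHA256()))

        def verify_model(model_path, cert_path, sig_path):
            data = open(model_path, "rb").read()
            cert = x509.load_pem_x509_certificate(open(cert_path, "rb").read())
            # Raises InvalidSignature if the model bytes were altered.
            cert.public_key().verify(open(sig_path, "rb").read(), data,
                                     padding.PKCS1v15(), hashes.SHA256())
            return cert.subject.rfc4514_string()  # who signed, for traceability

        # sign_model("part.step", "designer_key.pem", "part.step.sig")
        # print(verify_model("part.step", "designer_cert.pem", "part.step.sig"))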

  8. Developing Second Graders' Creativity through Literacy-Science Integrated Lessons on Lifecycles

    ERIC Educational Resources Information Center

    Webb, Angela Naomi; Rule, Audrey C.

    2012-01-01

    Young children need to develop creative problem-solving skills to ensure success in an uncertain future workplace. Although most teachers recognize the importance of creativity, they do not always know how to integrate it with content learning. This repeated measures study on animal and plant lifecycles examined student learning of vocabulary and…

  9. SCOS 2: An object oriented software development approach

    NASA Technical Reports Server (NTRS)

    Symonds, Martin; Lynenskjold, Steen; Mueller, Christian

    1994-01-01

    The Spacecraft Control and Operations System 2 (SCOS 2) is intended to provide the generic mission control system infrastructure for future ESA missions. It represents a bold step forward to take advantage of state-of-the-art technology and current practices in the area of software engineering. Key features include: (1) use of object oriented analysis and design techniques; (2) use of UNIX, C++ and a distributed architecture as the enabling implementation technology; (3) the goal of re-use for development, maintenance and mission-specific software implementation; and (4) introduction of the concept of a spacecraft control model. This paper touches upon some of the traditional beliefs surrounding object oriented development and describes their relevance to SCOS 2. It gives the rationale for why particular approaches were adopted and others not, and describes the impact of these decisions. The development approach followed is discussed, highlighting the evolutionary nature of the overall process and the iterative nature of the various tasks carried out. The emphasis of this paper is on the process of the development, with the following being covered: (1) the three phases of the SCOS 2 project - prototyping & analysis, design & implementation, and configuration/delivery of mission-specific systems; (2) the close cooperation and continual interaction with the users during the development; (3) the management approach - the split between client staff, industry, and some of the required project management activities; (4) the lifecycle adopted, an enhancement of the ESA PSS-05 standard with SCOS 2 specific activities and approaches; and (5) an examination of some of the difficulties encountered and the solutions adopted. Finally, the lessons learned from the SCOS 2 experience are highlighted, identifying issues to be used as feedback into future developments of this nature. This paper does not intend to describe the finished product and its operation, but focuses on the journey to arrive there, concentrating on the process rather than the products of the SCOS 2 software development.

  10. Second NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA's changing environment.

  11. Product Lifecycle Management and Sustainable Space Exploration

    NASA Technical Reports Server (NTRS)

    Caruso, Pamela W.; Dumbacher, Daniel L.; Grieves, Michael

    2011-01-01

    This slide presentation reviews the use of product lifecycle management (PLM) in the general aerospace industry, its use and development at NASA and at Marshall Space Flight Center, and how the use of PLM can lead to sustainable space exploration.

  12. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  13. Augmenting SCA project management and automation framework

    NASA Astrophysics Data System (ADS)

    Iyapparaja, M.; Sharma, Bhanupriya

    2017-11-01

    In daily life we need to keep records of things in order to manage them efficiently. Our company manufactures semiconductor chips and sells them to buyers. Sometimes it manufactures the entire product, sometimes only part of it, and sometimes it sells the intermediate product obtained during manufacturing, so better management of the entire process requires keeping track of all the entities involved. Materials and Methods: To address this problem, a framework was developed for project maintenance and for automation testing. The project management framework provides an architecture that supports managing the project by maintaining records of all requirements, the test cases created for testing each unit of the software, and defects raised in past years, so that the quality of the project can be maintained. Results: The automation framework provides an architecture that supports the development and implementation of automated test scripts for the software testing process. Conclusion: To implement the project management framework, HP Application Lifecycle Management is used, which provides a central repository to maintain the project.

  14. Software Requirements Analysis as Fault Predictor

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Waiting until the integration and system test phase to discover errors leads to more costly rework than resolving those same errors earlier in the lifecycle. Costs increase even more significantly once a software system has become operational. We can assess the quality of system requirements, but do little to correlate this information either to system assurance activities or to long-term reliability projections - both of which remain unclear and anecdotal. Extending earlier work on requirements accomplished with the ARM tool, requirements quality information measured against code complexity and test data for the same system may be used to predict which software modules contain high-impact or deeply embedded faults that now escape into operational systems. Such knowledge would lead to more effective and efficient test programs. It may also enable insight into whether a program should be maintained or started over.
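
    As a concrete (and purely illustrative) rendering of this idea, the sketch below trains a classifier on per-module requirements-quality and complexity metrics to flag likely fault-prone modules. The features, data, and model choice are hypothetical; the paper proposes the correlation, not this particular implementation.

      # Toy fault-proneness predictor: requirements-quality and complexity
      # metrics in, probability of a high-impact fault out. Data is synthetic.
      from sklearn.linear_model import LogisticRegression

      # features: [ambiguous requirements, cyclomatic complexity, test failures]
      X = [[2, 11, 0], [9, 34, 4], [1, 7, 0], [12, 41, 6], [3, 15, 1], [8, 29, 3]]
      y = [0, 1, 0, 1, 0, 1]  # 1 = module later found to contain serious faults

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba([[10, 30, 2]])[0][1])  # predicted fault probability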

  15. ProcessGene-Connect: SOA Integration between Business Process Models and Enactment Transactions of Enterprise Software Systems

    NASA Astrophysics Data System (ADS)

    Wasser, Avi; Lincoln, Maya

    In recent years, both practitioners and applied researchers have become increasingly interested in methods for integrating business process models and enterprise software systems through the deployment of enabling middleware. Integrative BPM research has focused mainly on the conversion of workflow notations into enacted application procedures, and less effort has been invested in enhancing the connectivity between design-level, non-workflow business process models and related enactment systems such as ERP, SCM, and CRM. This type of integration is useful at several stages of an IT system lifecycle, from design and implementation through change management, upgrades, and rollout. The paper presents an integration method that utilizes SOA for connecting business process models with corresponding enterprise software systems. The method is then demonstrated through an Oracle E-Business Suite procurement process and its ERP transactions.
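
    At its simplest, this style of integration binds each design-level process activity to a service endpoint on the enactment system. The sketch below shows that general idea only; the endpoint URLs, activity names, and payloads are invented, and the paper's actual SOA method is not reproduced here.

      # Hypothetical process-model-to-ERP binding: each activity in a business
      # process model maps to a service endpoint that enacts the transaction.
      import requests

      ACTIVITY_TO_SERVICE = {
          "CreatePurchaseRequisition": "https://erp.example.com/soa/requisition",
          "ApprovePurchaseOrder": "https://erp.example.com/soa/approve",
      }

      def enact(activity: str, payload: dict) -> dict:
          """Invoke the enterprise-system transaction bound to an activity."""
          response = requests.post(ACTIVITY_TO_SERVICE[activity], json=payload,
                                   timeout=30)
          response.raise_for_status()
          return response.json()

      # enact("CreatePurchaseRequisition", {"item": "valve", "qty": 10})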

  16. Life-cycle impacts of shower water waste heat recovery: case study of an installation at a university sport facility in the UK.

    PubMed

    Ip, Kenneth; She, Kaiming; Adeyeye, Kemi

    2017-10-18

    Recovering heat from waste water discharged from showers to preheat the incoming cold water has been promoted as a cost-effective, energy-efficient, and low-carbon design option, and it has been included in the UK's Standard Assessment Procedure (SAP) for demonstrating compliance with the Building Regulations for dwellings. Incentivized by their carbon cost-effectiveness, waste water heat exchangers (WWHX) were selected and incorporated in a newly constructed Sports Pavilion at the University of Brighton in the UK. This £2m sports development serving several football fields was completed in August 2015, providing eight water- and energy-efficient shower rooms for students, staff, and external organizations. Six of the shower rooms are located on the ground floor and two on the first floor, each fitted with five or six thermostatically controlled shower units. Inline WWHX units were installed, each consisting of a copper pipe section wound with an external coil of smaller copper pipe through which the cold water is warmed before entering the shower mixers. Using the installation at the Sports Pavilion as the case study, this research aims to evaluate the environmental and financial sustainability of a vertical waste heat recovery device over a life cycle of 50 years, in comparison with the normal use of a PVC-u pipe. A mathematical heat transfer model of the system was developed to inform the methodology for measuring the in-situ thermal performance of individual and multiple shower uses in each changing room. Adopting a systems-thinking modeling technique, a quasi-dynamic simulation computer model was established, enabling the prediction of annual energy consumption under different shower usage profiles. Data based on the process map and inventory of a functional unit of WWHX were applied in a proprietary assessment software package to establish the relevant outputs for the life-cycle environmental impact assessment. Life-cycle cost models were developed and industry price book data were applied. The results indicated that the seasonal thermal effectiveness was over 50%, enabling significant energy savings through heat recovery and leading to a short carbon payback time of less than 2 years to compensate for the additional greenhouse gas emissions associated with the WWHX. However, the life-cycle cost of the WWHX is much higher than that of the PVC pipe, even with significant heat recovered under heavy usage, highlighting the need to adopt more economical configurations, such as combining waste water through fewer units, in order to maximize the return on investment and improve the financial viability.
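
    For readers wanting to check the headline numbers, the core quantities are the heat-exchanger effectiveness and the recovered energy (m·cp·ΔT). The sketch below works through both with invented temperatures and flow figures, not measurements from the Brighton installation.

      # Back-of-envelope drain-water heat recovery: effectiveness and annual
      # energy recovered. All input numbers are hypothetical.

      def effectiveness(t_cold_in, t_preheated_out, t_drain_in):
          """Fraction of the maximum possible temperature rise delivered
          to the incoming cold water."""
          return (t_preheated_out - t_cold_in) / (t_drain_in - t_cold_in)

      def annual_recovery_kwh(flow_lpm, minutes_per_year, delta_t):
          """Recovered energy m*cp*dT, with water cp ~ 4.186 kJ/(kg.K)."""
          mass_kg = flow_lpm * minutes_per_year  # ~1 kg per litre
          return mass_kg * 4.186 * delta_t / 3600.0  # kJ -> kWh

      eps = effectiveness(t_cold_in=10.0, t_preheated_out=22.0, t_drain_in=33.0)
      print(f"effectiveness ~ {eps:.0%}")  # ~52%, i.e. 'over 50%'
      print(f"~{annual_recovery_kwh(8.0, 50_000, 12.0):,.0f} kWh/yr recovered")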

  17. Experiences in improving the state of the practice in verification and validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; French, Scott W.; Hamilton, David

    1994-01-01

    Knowledge-based systems (KBSs) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use, and these differences affect the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable ongoing research in this field. For the past three years, IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.

  18. A Sensitivity Analysis of the Rigid Pavement Life-Cycle Cost Analysis Program

    DOT National Transportation Integrated Search

    2000-12-01

    Original Report Date: September 1999. This report describes the sensitivity analysis performed on the Rigid Pavement Life-Cycle Cost Analysis program, a computer program developed by the Center for Transportation Research for the Texas Department of ...
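
    The computation at the heart of any such life-cycle cost program is the discounting of future costs to present worth, followed by a sensitivity sweep over uncertain inputs. The sketch below shows that generic calculation with invented cash flows; it is not the Rigid Pavement program's code.

      # Generic pavement life-cycle cost: present worth of (year, cost) pairs,
      # with a one-way sensitivity sweep on the discount rate.

      def present_worth(cash_flows, rate):
          """Sum of cost / (1 + rate)**year over all cash flows."""
          return sum(cost / (1.0 + rate) ** year for year, cost in cash_flows)

      # initial construction plus periodic rehabilitation (hypothetical)
      costs = [(0, 1_200_000), (10, 150_000), (20, 150_000), (30, 400_000)]

      for r in (0.03, 0.04, 0.05):
          print(f"PW at {r:.0%}: ${present_worth(costs, r):,.0f}")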

  19. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross-discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on-orbit. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large, complex systems engineering challenge, being addressed in part by focusing on how specific subsystems handle off-nominal missions and fault tolerance. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA has also formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms against actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems on an independent platform outside the flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations, assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects, detections, and responses that can be tested in VMET to confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, free of inherent hindrances such as meeting FSW processor scheduling constraints imposed by the target platform (ARINC 653 partitioned OS), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and using the same state machine architectural concepts as the flight software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of the M&FM algorithms in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V, which presents integration, test status, and state analysis. Finally, Section VI addresses the summary and forward directions, followed by the appendices presenting relevant information on terminology and documentation.
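
    In miniature, the nominal/off-nominal test-case idea looks like the sketch below: a latched fault-management state machine exercised by assertion-style test cases. The states, thresholds, and transitions are invented for illustration and bear no relation to the actual SLS M&FM algorithms (which are coded in C++).

      # Toy fault-management state machine plus nominal and off-nominal tests.
      from enum import Enum, auto

      class Mode(Enum):
          NOMINAL = auto()
          SAFING = auto()
          ABORT = auto()

      def step(mode: Mode, pressure: float) -> Mode:
          """Advance one monitoring cycle on a single sensed parameter."""
          if mode is Mode.ABORT:
              return Mode.ABORT              # abort is latched
          if pressure > 310.0:
              return Mode.ABORT              # unrecoverable exceedance
          if pressure > 290.0:
              return Mode.SAFING             # precipitate a safing action
          return Mode.NOMINAL

      assert step(Mode.NOMINAL, 250.0) is Mode.NOMINAL   # nominal case
      assert step(Mode.NOMINAL, 295.0) is Mode.SAFING    # off-nominal case
      assert step(Mode.SAFING, 320.0) is Mode.ABORT      # escalation case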

  20. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

    Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, systems engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By giving greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can use to recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews with subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective, with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. The data illustrate that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time".
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.

  1. Software forecasting as it is really done: A study of JPL software engineers

    NASA Technical Reports Server (NTRS)

    Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.

    1993-01-01

    This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation among the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or producing an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.

  2. The Chicago Center for Green Technology: life-cycle assessment of a brownfield redevelopment project

    NASA Astrophysics Data System (ADS)

    Brecheisen, Thomas; Theis, Thomas

    2013-03-01

    The sustainable development of brownfields reflects a fundamental, yet logical, shift in thinking and policymaking regarding pollution prevention. Life-cycle assessment (LCA) is a tool that can be used to assist in determining the conformity of brownfield development projects to the sustainability paradigm. LCA was applied to the process of a real brownfield redevelopment project, now known as the Chicago Center for Green Technology, to determine the cumulative energy required to complete the following redevelopment stages: (1) brownfield assessment and remediation, (2) building rehabilitation and site development and (3) ten years of operation. The results of the LCA have shown that operational energy is the dominant life-cycle stage after ten years of operation. The preservation and rehabilitation of the existing building, the installation of renewable energy systems (geothermal and photovoltaic) on-site and the use of more sustainable building products resulted in 72 terajoules (TJ) of avoided energy impacts, which would provide 14 years of operational energy for the site. Methodological note: data for this life-cycle assessment were obtained from project reports, construction blueprints and utility bills.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.; McCorkle, D.; Yang, C.

    Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit, or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator, wrapped by the CASI library developed by Reaction Engineering International, to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.

  4. The Robust Software Feedback Model: An Effective Waterfall Model Tailoring for Space SW

    NASA Astrophysics Data System (ADS)

    Tipaldi, Massimo; Gotz, Christoph; Ferraguto, Massimo; Troiano, Luigi; Bruenjes, Bernhard

    2013-08-01

    The selection of the most suitable software life cycle process is of paramount importance in any space SW project. Despite being the preferred choice, the waterfall model is often exposed to criticism. As a matter of fact, its main assumption of moving to a phase only when the preceding one is completed and perfected is not easily attainable under demanding SW schedule constraints. In this paper, a tailoring of the software waterfall model (named “Robust Software Feedback Model”) is presented. The proposed methodology sorts out these issues by combining a SW waterfall model with a SW prototyping approach. The former is aligned with the SW main production line and is based on the full ECSS-E-ST-40C life-cycle reviews, whereas the latter is carried out in advance of the main SW streamline (so as to inject its lessons learned into the main streamline) and is based on a lightweight approach.

  5. Embedding X.509 Digital Certificates in Three-Dimensional Models for Authentication, Authorization, and Traceability of Product Data

    PubMed Central

    Hedberg, Thomas D.; Krima, Sylvere; Camelio, Jaime A.

    2016-01-01

    Exchange and reuse of three-dimensional (3D) product models are hampered by the absence of trust in product-lifecycle-data quality. The root cause of the missing trust is years of “silo” functions (e.g., engineering, manufacturing, quality assurance) using independent and disconnected processes. Those disconnected processes result in data exchanges that do not contain all of the required information for each downstream lifecycle process, which inhibits the reuse of product data and results in duplicate data. The X.509 standard, maintained by the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T), was first issued in 1988. Although originally intended as the authentication framework for the X.500 series for electronic directory services, the X.509 framework is used in a wide range of implementations outside the originally intended paradigm. These implementations range from encrypting websites to software-code signing, yet X.509 certificate use has not widely penetrated engineering and product realms. Our approach does not try to provide security mechanisms but, equally important, aims to provide insight into what is happening with product data to support trusting the data. This paper provides a review of the use of X.509 certificates and proposes a solution for embedding X.509 digital certificates in 3D models for authentication, authorization, and traceability of product data. This paper also describes an application within the aerospace domain. Finally, the paper draws conclusions and provides recommendations for further research into using X.509 certificates in product lifecycle management (PLM) workflows to enable a product lifecycle of trust. PMID:27840596
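
    The basic mechanics the paper builds on (certificates binding an identity to a key, and signatures over model data that any certificate holder can verify) can be sketched with a standard cryptography library. The sketch below uses Python's cryptography package with a self-signed certificate and a stand-in byte string for the 3D model; the paper's actual embedding scheme is not reproduced here.

      # Sign a 3D-model payload and verify it against an X.509 certificate.
      import datetime
      from cryptography import x509
      from cryptography.x509.oid import NameOID
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"design-team")])
      cert = (
          x509.CertificateBuilder()
          .subject_name(name).issuer_name(name)     # self-signed for the demo
          .public_key(key.public_key())
          .serial_number(x509.random_serial_number())
          .not_valid_before(datetime.datetime.utcnow())
          .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
          .sign(key, hashes.SHA256())
      )

      model_bytes = b"ISO-10303-21; (stand-in for a 3D model file)"
      signature = key.sign(model_bytes, padding.PKCS1v15(), hashes.SHA256())

      # A downstream consumer verifies authenticity and traceability:
      cert.public_key().verify(signature, model_bytes,
                               padding.PKCS1v15(), hashes.SHA256())
      print("signature verified; traceable to", cert.subject.rfc4514_string())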

  6. Generalized Nanosatellite Avionics Testbed Lab

    NASA Technical Reports Server (NTRS)

    Frost, Chad R.; Sorgenfrei, Matthew C.; Nehrenz, Matt

    2015-01-01

    The Generalized Nanosatellite Avionics Testbed (G-NAT) lab at NASA Ames Research Center provides a flexible, easily accessible platform for developing hardware and software for advanced small spacecraft. A collaboration between the Mission Design Division and the Intelligent Systems Division, the objective of the lab is to provide testing data and general test protocols for advanced sensors, actuators, and processors for CubeSat-class spacecraft. By developing test schemes for advanced components outside of the standard mission lifecycle, the lab is able to help reduce the risk carried by advanced nanosatellite or CubeSat missions. Such missions are often allocated very little time for testing, and too often the test facilities must be custom-built for the needs of the mission at hand. The G-NAT lab helps to eliminate these problems by providing an existing suite of testbeds that combines easily accessible, commercial-off-the-shelf (COTS) processors with a collection of existing sensors and actuators.

  7. Current State of Agile User-Centered Design: A Survey

    NASA Astrophysics Data System (ADS)

    Hussain, Zahid; Slany, Wolfgang; Holzinger, Andreas

    Agile software development methods are quite popular nowadays and are being adopted at an increasing rate in industry every year. However, these methods still lack usability awareness in their development lifecycle, and the integration of usability/User-Centered Design (UCD) into agile methods is not adequately addressed. This paper presents the preliminary results of a recently conducted online survey regarding the current state of the integration of agile methods and usability/UCD. A worldwide response of 92 practitioners was received. The results show that the majority of practitioners perceive that the integration of agile methods with usability/UCD has added value to their adopted processes and to their teams; has resulted in the improvement of usability and quality of the product developed; and has increased the satisfaction of the end-users of the product developed. The most frequently used HCI techniques are low-fidelity prototyping, conceptual designs, observational studies of users, usability expert evaluations, field studies, personas, rapid iterative testing, and laboratory usability testing.

  8. Resurrecting Legacy Code Using Ontosoft Knowledge-Sharing and Digital Object Management to Revitalize and Reproduce Software for Groundwater Management Research

    NASA Astrophysics Data System (ADS)

    Kwon, N.; Gentle, J.; Pierce, S. A.

    2015-12-01

    Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a real problem in the 21st-century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be made openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for the Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a Java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful both for practical applications, as a teaching tool and case study for groundwater management, and for informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded OntoSoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and a developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.

  9. PDS4: Current Status and Future Vision

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Hughes, J. S.; Hardman, S. H.; Law, E. S.; Beebe, R. F.

    2017-12-01

    In 2010, the Planetary Data System began the largest standards and software upgrade in its history called "PDS4". PDS4 was architected with core principles, applying years of experience and lessons learned working with scientific data returned from robotic solar system missions. In addition to applying those lessons learned, the PDS team was able to take advantage of modern software and data architecture approaches and emerging information technologies which has enabled the capture, management, discovery, and distribution of data from planetary science archives world-wide. What has emerged is a foundational set of standards, services, and common tools to construct and enable interoperability of planetary science archives from distributed repositories. Early in the PDS4 development, PDS selected two missions as drivers to be used to validate the PDS4 approach: LADEE and MAVEN. Additionally, PDS partnered with international agencies to begin discussing the architecture, design, and implementation to ensure that PDS4 would be architected as a world-wide standard and platform for archive development and interoperability. Given the evolving requirements, an agile software development methodology known as the "Evolutionary Software Development Lifecycle" was chosen. This led to incremental releases of increasing capability over time which were matched against emerging mission and user needs. To date, PDS has now performed 16 releases of PDS4 with adoption of over 12 missions world-wide. PDS has also increased from approximately 200 TB in 2010 to approximately 1.3 PB of data today, bringing it into the era of big data. The development of PDS4 has not only focused on the construction of compatible archives, but also on increasing access and use of the data in the big data era. As PDS looks forward, it is focused on achieving the recommendations of the Planetary Science Decadal Survey (2013-2022): "support the ongoing effort to evolve the Planetary Data System to an effective online resource for the NASA and international communities". The foundation laid by the standards, software services, and tools positions PDS to develop and adopt new approaches and technologies to enable users to effectively search, extract, integrate, and analyze with the wealth of observational data across international boundaries.

  10. Benchmarking Software Assurance Implementation

    DTIC Science & Technology

    2011-05-18

    The chicken (a.k.a. process-focused assessments): management systems (ISO 9001, ISO 27001, ISO 20000); capability maturity models (CMMI, Assurance PRM, RMM, Assurance for CMMI); lifecycle processes (ISO/IEEE 15288, ISO/IEEE 12207); and COBIT, ITIL, MS SDL, OSAMM, BSIMM. The egg (a.k.a. product-focused assessments): SCAP (NIST-SCAP); ISO/OMG/W3C specifications (KDM, BPMN, RIF, XMI, RDF); the OWASP Top 10; the SANS Top 25; and secure code checklists.

  11. Overview of Design, Lifecycle, and Safety for Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document describes the need and justification for the development of a design guide for safety-relevant computer-based systems. This document also makes a contribution toward the design guide by presenting an overview of computer-based systems design, lifecycle, and safety.

  12. Enabling Smart Manufacturing Research and Development using a Product Lifecycle Test Bed.

    PubMed

    Helu, Moneer; Hedberg, Thomas

    2015-01-01

    Smart manufacturing technologies require a cyber-physical infrastructure to collect and analyze data and information across the manufacturing enterprise. This paper describes a concept for a product lifecycle test bed built on a cyber-physical infrastructure that enables smart manufacturing research and development. The test bed consists of a Computer-Aided Technologies (CAx) Lab and a Manufacturing Lab that interface through the product model creating a "digital thread" of information across the product lifecycle. The proposed structure and architecture of the test bed are presented, highlighting the challenges and requirements of implementing a cyber-physical infrastructure for manufacturing. The novel integration of systems across the product lifecycle also helps identify the technologies and standards needed to enable interoperability between design, fabrication, and inspection. Potential research opportunities enabled by the test bed are also discussed, such as providing publicly accessible CAx and manufacturing reference data, virtual factory data, and a representative industrial environment for creating, prototyping, and validating smart manufacturing technologies.

  13. Enabling Smart Manufacturing Research and Development using a Product Lifecycle Test Bed

    PubMed Central

    Helu, Moneer; Hedberg, Thomas

    2017-01-01

    Smart manufacturing technologies require a cyber-physical infrastructure to collect and analyze data and information across the manufacturing enterprise. This paper describes a concept for a product lifecycle test bed built on a cyber-physical infrastructure that enables smart manufacturing research and development. The test bed consists of a Computer-Aided Technologies (CAx) Lab and a Manufacturing Lab that interface through the product model creating a “digital thread” of information across the product lifecycle. The proposed structure and architecture of the test bed are presented, highlighting the challenges and requirements of implementing a cyber-physical infrastructure for manufacturing. The novel integration of systems across the product lifecycle also helps identify the technologies and standards needed to enable interoperability between design, fabrication, and inspection. Potential research opportunities enabled by the test bed are also discussed, such as providing publicly accessible CAx and manufacturing reference data, virtual factory data, and a representative industrial environment for creating, prototyping, and validating smart manufacturing technologies. PMID:28664167

  14. Object links in the repository

    NASA Technical Reports Server (NTRS)

    Beck, Jon; Eichmann, David

    1991-01-01

    Some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life-cycle of software development are explored. In particular, we wish to consider a model which provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The model we consider uses object-oriented terminology. Thus, the lattice is viewed as a data structure which contains class objects which exhibit inheritance. A description of the types of objects in the repository is presented, followed by a discussion of how they interrelate. We discuss features of the object-oriented model which support these objects and their links, and consider behavior which an implementation of the model should exhibit. Finally, we indicate some thoughts on implementing a prototype of this repository architecture.
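
    A minimal rendering of the described structure might look like the sketch below: classification vertices form the lattice, while repository assets carry explicit typed links to one another over and above the classification edges. All class, vertex, and link names are invented for illustration.

      # Lattice vertices (classification) plus explicit object-to-object links.

      class Vertex:
          """A classification vertex; parents are edges in the lattice."""
          def __init__(self, name, parents=()):
              self.name = name
              self.parents = list(parents)

      class Asset:
          """A repository object classified at a vertex, with typed links."""
          def __init__(self, name, vertex):
              self.name = name
              self.vertex = vertex
              self.links = {}

          def link(self, relation, other):
              self.links.setdefault(relation, []).append(other)

      root = Vertex("software-asset")
      design = Vertex("design-document", parents=[root])
      code = Vertex("source-module", parents=[root])

      spec = Asset("parser-design", design)
      impl = Asset("parser.c", code)
      impl.link("implements", spec)   # an object link beyond the lattice edges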

  15. Towards a systems approach to risk considerations for concurrent design

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Oberto, Robert E.

    2004-01-01

    This paper describes the new process used by the Project Design Center at NASA's Jet Propulsion Laboratory for the identification, assessment, and communication of risk elements throughout the lifecycle of a mission design. This process includes a software tool, 'RAP', that collects and communicates risk information between the various designers and a 'risk expert' who mediates the process. The establishment of this process is a step toward the systematic consideration of risk in design decision making. Using this process, we are better able to keep track of the risks associated with design decisions, and it helps us develop better risk profiles for the studies under consideration. We aim to refine and expand the current process to enable more thorough risk analysis capabilities in the future.

  16. Analyzing and Predicting Effort Associated with Finding and Fixing Software Faults

    NASA Technical Reports Server (NTRS)

    Hamill, Maggie; Goseva-Popstojanova, Katerina

    2016-01-01

    Context: Software developers spend a significant amount of time fixing faults. However, not many papers have addressed the actual effort needed to fix software faults. Objective: The objective of this paper is twofold: (1) analysis of the effort needed to fix software faults and how it was affected by several factors and (2) prediction of the level of fix implementation effort based on the information provided in software change requests. Method: The work is based on data related to 1200 failures, extracted from the change tracking system of a large NASA mission. The analysis includes descriptive and inferential statistics. Predictions are made using three supervised machine learning algorithms and three sampling techniques aimed at addressing the imbalanced data problem. Results: Our results show that (1) 83% of the total fix implementation effort was associated with only 20% of failures. (2) Both safety critical failures and post-release failures required three times more effort to fix compared to non-critical and pre-release counterparts, respectively. (3) Failures with fixes spread across multiple components or across multiple types of software artifacts required more effort. The spread across artifacts was more costly than spread across components. (4) Surprisingly, some types of faults associated with later life-cycle activities did not require significant effort. (5) The level of fix implementation effort was predicted with 73% overall accuracy using the original, imbalanced data. Using oversampling techniques improved the overall accuracy up to 77%. More importantly, oversampling significantly improved the prediction of the high level effort, from 31% to around 85%. Conclusions: This paper shows the importance of tying software failures to changes made to fix all associated faults, in one or more software components and/or in one or more software artifacts, and the benefit of studying how the spread of faults and other factors affect the fix implementation effort.
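
    As one illustrative combination of the paper's ingredients (a supervised learner plus oversampling to counter class imbalance), the sketch below balances a tiny synthetic dataset with SMOTE before fitting a classifier. The features, data, and specific algorithm choices are hypothetical; the paper's actual three learners and three sampling techniques are not reproduced here.

      # Predicting fix-effort level from change-request features, with
      # oversampling for the imbalanced-class problem. Data is synthetic.
      from sklearn.ensemble import RandomForestClassifier
      from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

      # features: [components touched, artifact types touched]
      X = [[1, 1], [2, 1], [1, 2], [5, 3], [6, 4], [2, 2], [3, 2], [7, 4]]
      y = [0, 0, 0, 2, 2, 1, 1, 2]  # effort level: 0 = low, 1 = medium, 2 = high

      X_res, y_res = SMOTE(k_neighbors=1).fit_resample(X, y)  # balance classes
      clf = RandomForestClassifier(n_estimators=50).fit(X_res, y_res)
      print(clf.predict([[6, 3]]))  # predicted effort level for a new request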

  17. Environmental impact assessment of european non-ferro mining industries through life-cycle assessment

    NASA Astrophysics Data System (ADS)

    Hisan Farjana, Shahjadi; Huda, Nazmul; Parvez Mahmud, M. A.

    2018-05-01

    European mining is a vast industrial sector that contributes substantially to the continent's economy and comprises ferrous and non-ferrous metal and mineral industries. The non-ferrous metal extraction and processing industries require particular attention due to sustainability concerns, as their manufacturing processes are highly energy intensive and have global environmental impacts. This paper analyses the major environmental effects of European metal industries using life-cycle impact analysis. It is the first work to undertake a comparative environmental impact analysis of European non-ferrous metal industries, revealing their technological similarities and dissimilarities in order to assess their environmental loads. The life-cycle inventory datasets are collected from the EcoInvent database, and the analysis is performed with the CML baseline and ReCiPe endpoint methods in SimaPro software version 8.4; these methods were chosen because they are impact assessment methods specialized for the European continent. The impact categories discussed here are human health, global warming, and ecotoxicity. The results reveal that the gold industry is the most burdensome for the environment due to its waste emissions, with silver mining showing a similar but smaller effect, whereas the copper, lead, manganese, and zinc mining processes and industries are comparatively environmentally friendly in terms of metal extraction technologies and waste emissions.
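
    Under both CML and ReCiPe, category indicators reduce to the same aggregation: each inventory flow is multiplied by a characterization factor and summed. The sketch below shows that generic calculation with invented flows and factors, not EcoInvent data or SimaPro output.

      # Generic LCIA aggregation: impact = sum(flow * characterization factor).
      inventory = {"CO2": 5200.0, "CH4": 18.0}   # kg emitted per functional unit
      gwp100 = {"CO2": 1.0, "CH4": 28.0}         # kg CO2-eq per kg (illustrative)

      impact = sum(qty * gwp100.get(flow, 0.0) for flow, qty in inventory.items())
      print(f"global warming impact: {impact:,.0f} kg CO2-eq")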

  18. The Lifecycle of a South African Non-governmental Organisation: Primary Science Programme, 1983-1999.

    ERIC Educational Resources Information Center

    Harvey, Stephen; Peacock, Alan

    2001-01-01

    Traces the lifecycle of the Primary Science Programme (PSP), 1983-99, a representative South African nongovernmental organization. Shows how the social and economic environment shaped PSP development and demise. Highlights tensions between quality versus quantity, subject versus holistic focus, and participatory versus authoritarian management…

  19. Model-Driven Engineering: Automatic Code Generation and Beyond

    DTIC Science & Technology

    2015-03-01

    and WebLogic, as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on... code generation in the context of the full system lifecycle, from development to sustainment. Acquisition programs in government or large commercial... Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional...

  20. Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Application of Model View Definition Attributes

    DTIC Science & Technology

    2013-06-01

    Building information exchange (COBie), Building Information Modeling (BIM)... to develop a life-cycle building model have resulted in the definition of a “core” building information model that contains general information... develop an information-exchange Model View Definition (MVD) for building electrical systems. The objective of the current work was to document the...

  1. Value Engineering: An Application to Computer Software

    DTIC Science & Technology

    1995-06-01

    Ref. 4: p. 2081 [parentheses added]. Figure 5 shows the cost function C(x) graphed with the Total Value function TV(x). It can be seen that for any... to be meaningful and accurate for the use... since cost structures for each software development project... maintainability quality characteristics due to long-term considerations affecting life-cycle costs... VE applications provide alternative...

  2. Comparing the environmental footprints of home-care and personal-hygiene products: the relevance of different life-cycle phases.

    PubMed

    Koehler, Annette; Wildbolz, Caroline

    2009-11-15

    An in-depth life-cycle assessment of nine home-care and personal-hygiene products was conducted to determine the ecological relevance of different life-cycle phases and compare the environmental profiles of products serving equal applications. Using detailed data from industry and consumer-behavior studies, a broad range of environmental impacts was analyzed to identify the main drivers in each life-cycle stage and the potential for improving the environmental footprints. Although chemical production significantly adds to environmental burdens, substantial impacts are caused in the consumer-use phase. As such, this research provides recommendations for product development, supply chain management, product policies, and consumer use. To reduce environmental burdens, products should, for instance, be produced in concentrated form, while consumers should apply correct product dosages and low water temperatures during product application.

  3. Life-cycle effects of single-walled carbon nanotubes (SWNTs) on an estuarine meiobenthic copepod.

    PubMed

    Templeton, Ryan C; Ferguson, P Lee; Washburn, Kate M; Scrivens, Wally A; Chandler, G Thomas

    2006-12-01

    Single-walled carbon nanotubes (SWNTs) are finding increasing use in consumer electronics and structural composites. These nanomaterials and their manufacturing byproducts may eventually reach estuarine systems through wastewater discharge. The acute and chronic toxicity of SWNTs was evaluated using full life-cycle bioassays with the estuarine copepod Amphiascus tenuiremis (ASTM method E-2317-04). A synchronous cohort of naupliar larvae was assayed by culturing individual larvae to adulthood in individual 96-well microplate wells amended with SWNTs in seawater. Copepods were exposed to "as prepared" (AP) SWNTs, electrophoretically purified SWNTs, or a fluorescent fraction of nanocarbon synthetic byproducts. Copepods ingesting purified SWNTs showed no significant effects on mortality, development, or reproduction across exposures (p < 0.05). In contrast, exposure to the more complex AP-SWNT mixture significantly increased life-cycle mortality, reduced fertilization rates, and reduced molting success in the highest exposure (10 mg/L) (p < 0.05). Exposure to small fluorescent nanocarbon byproducts caused significantly increased life-cycle mortality at 10 mg/L (p < 0.05). The fluorescent nanocarbon fraction also caused a significant reduction in life-cycle molting success at all exposures (p < 0.05). These results suggest size-dependent toxicity of SWNT-based nanomaterials, with the smallest synthetic byproduct fractions causing increased mortality and delayed copepod development over the concentration ranges tested.

  4. Globus: Service and Platform for Research Data Lifecycle Management

    NASA Astrophysics Data System (ADS)

    Ananthakrishnan, R.; Foster, I.

    2017-12-01

    Globus offers a range of data management capabilities to the community as hosted services, encompassing data transfer and sharing, user identity and authorization, and data publication. Globus capabilities are accessible via both a web browser and REST APIs. Web access allows researchers to use Globus capabilities through a software-as-a-service model; and the REST APIs address the needs of developers of research services, who can now use Globus as a platform, outsourcing complex user and data management tasks to Globus services. In this presentation, we review Globus capabilities and outline how it is being applied as a platform for scientific services, and highlight work done to link computational analysis flows to the underlying data through an interactive Jupyter notebook environment to promote immediate data usability, reusability of these flows by other researchers, and future analysis extensibility.

  5. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    PubMed

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.

  6. The evolution, approval and implementation of the U.S. Geological Survey Science Data Lifecycle Model

    USGS Publications Warehouse

    Faundeen, John L.; Hutchison, Vivian

    2017-01-01

    This paper details how the United States Geological Survey (USGS) Community for Data Integration (CDI) Data Management Working Group developed a Science Data Lifecycle Model, and the role the Model plays in shaping agency-wide policies. Starting with an extensive literature review of existing data lifecycle models, representatives from various backgrounds in USGS attended a two-day meeting where the basic elements for the Science Data Lifecycle Model were determined. Refinements and reviews spanned two years, leading to finalization of the model and its documentation in a formal agency publication. The Model serves as a critical framework for data management policy, instructional resources, and tools. The Model helps the USGS address both the Office of Science and Technology Policy (OSTP) mandate for increased public access to federally funded research and the Office of Management and Budget (OMB) 2013 Open Data directives, as the foundation for a series of agency policies related to data management planning, metadata development, data release procedures, and the long-term preservation of data. Additionally, the agency website devoted to data management instruction and best practices (www2.usgs.gov/datamanagement) is designed around the Model’s structure and concepts. This paper also illustrates how the Model is being used to develop tools for supporting USGS research and data management processes.

  7. A review of radio frequency identification technology for the anatomic pathology or biorepository laboratory: Much promise, some progress, and more work needed.

    PubMed

    Lou, Jerry J; Andrechak, Gary; Riben, Michael; Yong, William H

    2011-01-01

    Patient safety initiatives throughout the anatomic laboratory and in biorepository laboratories have mandated increasing emphasis on the need to accurately identify and track biospecimen assets throughout their production lifecycle and for archiving/retrieval purposes. However, increasing production volume, complex workflow characteristics, reliance on manual production processes, and required asset movement to disparate destinations throughout asset lifecycles continue to challenge laboratory efforts. Radio Frequency Identification (RFID) technology, the use of radio waves to communicate data between electronic tags attached to objects and a reader, shows significant potential to overcome these hurdles. Advantages over traditional barcode labeling include readability without direct line-of-sight alignment to the reader, the ability to read multiple tags simultaneously, higher data storage capacity, a faster data transmission rate, and the capacity to perform multiple read-writes of data to the tag. Most importantly, the use of radio waves removes the need to manually scan each asset at each step where an identification or tracking event is needed. Temperature monitoring by on-board sensors and three-dimensional position tracking are additional potential benefits of RFID technology. To date, barriers to implementation of RFID systems in the anatomic laboratory include the increased costs of tags, readers, and system software; data security concerns; the lack of specific data standards for stored information; and the potential for technological obsolescence during decades of specimen storage. Novel RFID production techniques and increased production capacity are projected to lower the costs of some tags to a few cents each. Information security concerns can potentially be addressed by techniques such as shielding, data encryption, and tag pseudonyms. Commitment by stakeholder groups to develop RFID tag data standards for anatomic pathology and biorepository laboratories could avoid or mitigate the "islands of data" dilemma presented by barcode usage, where there are innumerable standards and a consequent paucity of hardware or software "plug and play" interoperability. Work remains to be done to establish the durability and appropriate shielding of individual tag types for use in harsh laboratory environmental conditions and for long-term archival storage. Finally, given the requirements for long-term storage of biospecimen assets, consideration should be given to ways of mitigating data isolation due to eventual technological obsolescence of a particular RFID technology or software.

  8. A review of radio frequency identification technology for the anatomic pathology or biorepository laboratory: Much promise, some progress, and more work needed

    PubMed Central

    Lou, Jerry J.; Andrechak, Gary; Riben, Michael; Yong, William H.

    2011-01-01

    Patient safety initiatives throughout the anatomic laboratory and in biorepository laboratories have mandated increasing emphasis on the need to accurately identify and track biospecimen assets throughout their production lifecycle and for archiving/retrieval purposes. However, increasing production volume, complex workflow characteristics, reliance on manual production processes, and required asset movement to disparate destinations throughout asset lifecycles continue to challenge laboratory efforts. Radio Frequency Identification (RFID) technology, the use of radio waves to communicate data between electronic tags attached to objects and a reader, shows significant potential to overcome these hurdles. Advantages over traditional barcode labeling include readability without direct line-of-sight alignment to the reader, the ability to read multiple tags simultaneously, higher data storage capacity, a faster data transmission rate, and the capacity to perform multiple read-writes of data to the tag. Most importantly, the use of radio waves removes the need to manually scan each asset at each step where an identification or tracking event is needed. Temperature monitoring by on-board sensors and three-dimensional position tracking are additional potential benefits of RFID technology. To date, barriers to implementation of RFID systems in the anatomic laboratory include the increased costs of tags, readers, and system software; data security concerns; the lack of specific data standards for stored information; and the potential for technological obsolescence during decades of specimen storage. Novel RFID production techniques and increased production capacity are projected to lower the costs of some tags to a few cents each. Information security concerns can potentially be addressed by techniques such as shielding, data encryption, and tag pseudonyms. Commitment by stakeholder groups to develop RFID tag data standards for anatomic pathology and biorepository laboratories could avoid or mitigate the “islands of data” dilemma presented by barcode usage, where there are innumerable standards and a consequent paucity of hardware or software “plug and play” interoperability. Work remains to be done to establish the durability and appropriate shielding of individual tag types for use in harsh laboratory environmental conditions and for long-term archival storage. Finally, given the requirements for long-term storage of biospecimen assets, consideration should be given to ways of mitigating data isolation due to eventual technological obsolescence of a particular RFID technology or software. PMID:21886890

  9. Using Teamcenter engineering software for a successive punching tool lifecycle management

    NASA Astrophysics Data System (ADS)

    Blaga, F.; Pele, A.-V.; Stǎnǎşel, I.; Buidoş, T.; Hule, V.

    2015-11-01

    The paper presents the results of studies and research on the implementation of Teamcenter (TC) for integrated management of a product lifecycle in a virtual enterprise; the results can also be implemented in a real enterprise. The product considered was a successive punching and cutting tool designed to produce a sheet-metal part. The paper defines the technical documentation flow (flow of information) in the process of computer-aided constructive design of the tool. After the design phase is completed, a list of parts is generated containing standard and manufactured components (BOM, Bill of Materials). The BOM may be exported to MS Excel (.xls) format and transferred to other departments of the company in order to supply the materials and resources needed to realize the final product. The paper describes the procedure for modifying certain dimensions of the sheet-metal part obtained by punching. After 3D and 2D design, the digital prototype of the punching tool moves to the next lifecycle phase, the manufacturing process, and the corresponding phases of each operation of the technological process are described in detail. Teamcenter makes it possible to describe the manufacturing company structure, including the workstations that carry out the various operations of the manufacturing process. The paper reveals that implementing Teamcenter PDM in a company improves the efficiency of managing product information, eliminating time spent searching, verifying, and correcting documentation, while ensuring the uniqueness and completeness of the product data.

  10. Certification of production-quality gLite Job Management components

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Bertocco, S.; Capannini, F.; Cecchi, M.; Dorigo, A.; Frizziero, E.; Giacomini, F.; Gianelle, A.; Mezzadri, M.; Molinari, E.; Monforte, S.; Prelz, F.; Rebatto, D.; Sgaravatto, M.; Zangrando, L.

    2011-12-01

    With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be demanded of middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, thus requiring the formal definition of rigorous procedures, interfaces and roles for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layout, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to guarantee quality results are described.

  11. Acid Rain Data System: Progressive application of information technology for operation of a market-based environmental program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, D.A.

    1995-12-31

    Under the Acid Rain Program, by statute and regulation, affected utility units are allocated annual allowances. Each allowance permits a unit to emit one ton of SO{sub 2} during or after a specified year. At year end, utilities must hold allowances equal to or greater than the cumulative SO{sub 2} emissions throughout the year from their affected units. The program has been developing, on a staged basis, two major computer-based information systems: the Allowance Tracking System (ATS) for tracking creation, transfer, and ultimate use of allowances; and the Emissions Tracking System (ETS) for transmission, receipt, processing, and inventory of continuous emissions monitoring (CEM) data. The systems collectively form a logical Acid Rain Data System (ARDS). ARDS will be the largest information system ever used to operate and evaluate an environmental program. The paper describes the progressive software engineering approach the Acid Rain Program has been using to develop ARDS. Iterative software version releases, keyed to critical program deadlines, add the functionality required to support specific statutory and regulatory provisions. Each software release also incorporates continual improvements for efficiency, user-friendliness, and lower life-cycle costs. The program is migrating the independent ATS and ETS systems into a logically coordinated True-Up processing model, to support the end-of-year reconciliation for balancing allowance holdings against annual emissions and compliance plans for Phase 1 affected utility units. The paper provides specific examples and data to illustrate exciting applications of today's information technology in ARDS.
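
    The end-of-year True-Up described above is, at its core, a comparison of allowance holdings against cumulative emissions per affected unit. A minimal sketch of that reconciliation logic (Python; the data layout is an assumption for illustration, not the actual ATS/ETS schema):

      # Hypothetical allowance holdings and cumulative SO2 emissions (tons) per unit.
      holdings = {"UNIT-A": 1200, "UNIT-B": 800}
      emissions = {"UNIT-A": 1150, "UNIT-B": 905}

      def true_up(holdings: dict, emissions: dict) -> dict:
          """Return each unit's allowance surplus (negative = out of compliance).

          One allowance permits one ton of SO2; at year end a unit must hold
          at least as many allowances as the tons it emitted.
          """
          return {u: holdings.get(u, 0) - tons for u, tons in emissions.items()}

      for unit, surplus in true_up(holdings, emissions).items():
          status = "in compliance" if surplus >= 0 else "DEFICIT"
          print(f"{unit}: surplus {surplus} allowances ({status})")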

  12. SAVANT: Solar Array Verification and Analysis Tool Demonstrated

    NASA Technical Reports Server (NTRS)

    Chock, Ricaurte

    2000-01-01

    The photovoltaics (PV) industry is now being held to strict specifications, such as end-of-life power requirements, that force manufacturers to overengineer their products to avoid contractual penalties. Such overengineering has been the only reliable way to meet such specifications. Unfortunately, it also results in a more costly process than is probably necessary. In our conversations with the PV industry, the issue of cost has been raised again and again. Consequently, the Photovoltaics and Space Environment Effects branch at the NASA Glenn Research Center at Lewis Field has been developing a software tool to address this problem. SAVANT, Glenn's tool for solar array verification and analysis, is in the technology demonstration phase. Ongoing work has shown that more efficient and less costly PV designs should be possible by using SAVANT to predict on-orbit life-cycle performance. The ultimate goal of the SAVANT project is to provide a user-friendly computer tool to predict PV on-orbit life-cycle performance. This should greatly simplify the tasks of scaling and designing the PV power component of any given flight or mission. By being able to predict how a particular PV article will perform, designers will be able to balance mission power requirements (both beginning-of-life and end-of-life) with survivability concerns such as power degradation due to radiation and/or contamination. Recent comparisons with actual flight data from the Photovoltaic Array Space Power Plus Diagnostics (PASP Plus) mission validate this approach.
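
    To make the beginning-of-life/end-of-life trade concrete, here is a hedged sketch (not SAVANT's actual model; a first-order exponential decay with an assumed annual degradation rate is a common rough approximation) of sizing an array so that its end-of-life output still meets a mission requirement:

      import math

      def eol_power(bol_power_w: float, annual_degradation: float, years: float) -> float:
          """First-order radiation/contamination degradation model (assumed form)."""
          return bol_power_w * math.exp(-annual_degradation * years)

      # Size the array so EOL power still meets a 5 kW requirement after 10 years
      # at an assumed 2.5%/year effective degradation rate (illustrative values).
      requirement_w = 5000.0
      rate, life = 0.025, 10.0
      bol_needed = requirement_w / math.exp(-rate * life)
      print(f"BOL sizing: {bol_needed:.0f} W -> EOL {eol_power(bol_needed, rate, life):.0f} W")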

  13. Life-Cycle Inventory Analysis of Laminated Veneer Lumber Production in the United States

    Treesearch

    Richard D. Bergman

    2015-01-01

    Documenting the environmental performance of building products is becoming increasingly common. Developing environmental product declarations (EPDs) based on life-cycle assessment (LCA) data is one way to provide scientific documentation. Many U.S. structural wood products have LCA-based “eco-labels” using the ISO standard. However, the standard requires underlying...

  14. Life-Cycle Inventory Analysis of I-joist Production in the United States

    Treesearch

    Richard D. Bergman

    2015-01-01

    Documenting the environmental performance of building products is becoming increasingly common. Creating environmental product declarations (EPDs) based on life-cycle assessment (LCA) data is one approach to provide scientific documentation of the products’ environmental performance. Many U.S. structural wood products have LCA-based “eco-labels” developed under the ISO...

  15. Life-Cycle Cost/Benefit Assessment of Expedite Departure Path (EDP)

    NASA Technical Reports Server (NTRS)

    Wang, Jianzhong Jay; Chang, Paul; Datta, Koushik

    2005-01-01

    This report presents a life-cycle cost/benefit assessment (LCCBA) of Expedite Departure Path (EDP), an air traffic control Decision Support Tool (DST) currently under development at NASA. This assessment is an update of a previous study performed by bd Systems, Inc. (bd) during FY01, with the following revisions: the life-cycle cost assessment methodology developed by bd for the previous study was refined and calibrated using Free Flight Phase 1 (FFP1) cost information for Traffic Management Advisor (TMA, or TMA-SC in the FAA's terminology). Adjustments were also made to the site selection and deployment scheduling methodology to include airspace complexity as a factor. This technique was also applied to the benefit extrapolation methodology to better estimate potential benefits for other years and at other sites. This study employed a new benefit-estimating methodology because bd's previous single-year potential benefit assessment of EDP used unrealistic assumptions that resulted in optimistic estimates. The new methodology uses an air traffic simulation approach to reasonably predict the impacts of implementing EDP. The results of the cost and benefit analyses were then integrated into a life-cycle cost/benefit assessment.
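
    A life-cycle cost/benefit assessment of this kind ultimately reduces to discounting yearly cost and benefit streams to present value and comparing them. A generic sketch (Python; the discount rate and the streams are placeholders, not figures from the EDP study):

      def present_value(stream, rate):
          """Discount a list of yearly amounts (year 0 first) to present value."""
          return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

      costs = [12.0, 3.0, 3.0, 3.0, 3.0]      # $M: deployment then O&M (assumed)
      benefits = [0.0, 5.0, 6.0, 6.0, 6.0]    # $M: delay/fuel savings (assumed)
      rate = 0.07

      pv_c, pv_b = present_value(costs, rate), present_value(benefits, rate)
      print(f"PV costs ${pv_c:.1f}M, PV benefits ${pv_b:.1f}M, B/C = {pv_b / pv_c:.2f}")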

  16. Course of induced infection by Eimeria krijgsmanni in immunocompetent and immunodeficient mice.

    PubMed

    Ono, Yuina; Matsubayashi, Makoto; Kawaguchi, Hiroaki; Tsujio, Masashi; Mizuno, Masanobu; Tanaka, Tetsuya; Masatani, Tatsunori; Matsui, Toshihiro; Matsuo, Tomohide

    2016-01-01

    Recently, we demonstrated the utility of Eimeria krijgsmanni as a novel mouse eimerian parasite for elucidating biological diversity. The parasite shows notable infectivity in mice with various levels of immune status and susceptibility to antimicrobial agents, including coccidiostats. However, the detailed lifecycle of E. krijgsmanni had not yet been determined, and this information was lacking in discussions of previous findings. In the present study, we clarified the morphological characteristics of E. krijgsmanni and its lifecycle in normal mice, and examined its development in immunodeficient mice and the lifecycle stages present in challenge infections after the primary inoculation. In immunocompetent mice, the lifecycle consisted of four asexual stages and the sexual stages, followed by the formation of oocysts during the prepatent period. Interestingly, the second-generation meronts were detected in all observation periods after the disappearance of the other stages. In the challenge infection of immunodeficient mice, all developmental stages except for the second-generation meronts temporarily vanished. This finding suggests a "rest" or marked delay in development and a "restart" of progression toward the next generations. The second-generation meronts may play an important role in the lifecycle of E. krijgsmanni.

  17. Feasibility analysis of a smart grid photovoltaics system for the subarctic rural region in Alaska

    NASA Astrophysics Data System (ADS)

    Yao, Lei

    A smart grid photovoltaics system was developed to demonstrate that such a system is feasible for an off-grid rural community in the subarctic region of Alaska. A system generation algorithm and a system business model were developed to determine feasibility. Based on forecasts by the PV F-Chart software, a 70° tilt angle in winter and a 34° tilt angle in summer were determined to be the best angles for electrical output. The proposed system's electricity unit cost was calculated at 32.3 cents/kWh, which is cheaper than the current unsubsidized electricity price (46.8 cents/kWh) in off-grid rural communities. Given 46.8 cents/kWh as the electricity unit price, the system provider can break even by charging 17.3 percent of the total electricity revenue from power generated by the proposed system. Given these results, the system can be economically feasible over the life-cycle period. With further incentives, the system may have a competitive advantage.

  18. Introduction and Highlights of the Workshop

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Venneri, Samuel L.

    1997-01-01

    Four generations of CAD/CAM systems can be identified, corresponding to changes in both modeling functionality and software architecture. The systems evolved from 2D drawings and wireframes, to solid modeling, to parametric/variational modelers, to the current simulation-embedded systems. Recent developments have enabled design engineers to perform many of the complex analysis tasks typically performed by analysis experts. Some of the characteristics of the current and emerging CAD/CAM/CAE systems are described in subsequent presentations. The focus of the workshop is on the potential of CAD/CAM/CAE systems for simulating the entire mission and life-cycle of future aerospace systems, and the development needed to realize this potential. First, the major features of the emerging computing, communication and networking environment are outlined; second, the characteristics and design drivers of future aerospace systems are identified; third, the concept of the intelligent synthesis environment being planned by NASA, the UVA ACT Center and JPL is presented; and fourth, the objectives and format of the workshop are outlined.

  19. From users involvement to users' needs understanding: a case study.

    PubMed

    Niès, Julie; Pelayo, Sylvia

    2010-04-01

    Companies developing and commercializing healthcare IT applications may decide to involve users in the software development lifecycle in order to better understand users' needs and to optimize their products. Unfortunately, direct developer-user dialogue is not sufficient to ensure a proper understanding of users' needs. It is also necessary to involve human factors (HF) specialists to analyze the users' expression of their needs and to properly formalize the requirements for design purposes. The objective of this paper is to present a case study reporting the collaborative work between HF experts and a company developing and commercializing a CPOE. This study shows how this collaboration helps resolve the limits of direct user involvement and the usual problems pertaining to the description and understanding of users' needs. The company participating in the study has implemented a procedure of regular meetings allowing direct exchanges between the development team and users' representatives. Those meetings aim at getting users' feedback on the existing products and at validating further developments. In parallel with the usual HF methods supporting the analysis of the work system (onsite observations followed by debriefing interviews) and the usability evaluation of the application (usability inspection and usability tests), the HF experts took the opportunity of the meetings organized by the company to collect, re-interpret and re-formulate the needs expressed by the users. The developers perceived the physicians' requirements concerning the display of the patient's medication list as contradictory. In a previous meeting round, the users had requested a detailed view of the medication list in place of the existing synthesized one. Once this requirement was satisfied, the users participating in the current meeting round requested a synthesized view in place of the existing detailed one. The development team was unable to understand what it perceived as a reversed claim. Relying on a cognitive analysis of the physicians' decision making concerning the patient's treatment, the HF experts helped re-formulate the physicians' cognitive needs in terms of a synthesized/detailed display of the medication list depending on the stage of the decision-making process. This led to an astute re-engineering of the application, allowing the physicians to easily navigate back and forth between the synthesized and detailed views depending on the progress of their decision making. This study demonstrates that the integration of users' representatives in the software lifecycle benefits the end users, but it remains insufficient to resolve the complex usability problems of the system. Such solutions require the integration of HF expertise. Moreover, such an involvement of HF experts may generate benefits in terms of reducing (i) the number of iterative developments and (ii) the users' training costs. (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  20. 5 CFR 1601.40 - Lifecycle Funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Lifecycle Funds. 1601.40 Section 1601.40 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS Lifecycle Funds § 1601.40 Lifecycle Funds. The Executive Director will establish TSP Lifecycle Funds, which are...

  1. 5 CFR 1601.40 - Lifecycle Funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Lifecycle Funds. 1601.40 Section 1601.40 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS Lifecycle Funds § 1601.40 Lifecycle Funds. The Executive Director will establish TSP Lifecycle Funds, which are...

  2. Applied Space Systems Engineering. Chapter 17; Manage Technical Data

    NASA Technical Reports Server (NTRS)

    Kent, Peter

    2008-01-01

    Effective space systems engineering (SSE) is conducted in a fully electronic manner. Competitive hardware, software, and system designs are created in a totally digital environment that enables rapid product design and manufacturing cycles, as well as a multitude of techniques such as modeling, simulation, and lean manufacturing that significantly reduce the lifecycle cost of systems. Because the SSE lifecycle depends on the digital environment, managing the enormous volumes of technical data needed to describe, build, deploy, and operate systems is a critical factor in the success of a project. This chapter presents the key aspects of Technical Data Management (TDM) within the SSE process. It is written from the perspective of the System Engineer tasked with establishing the TDM process and infrastructure for a major project. Additional perspectives are reflected from the point of view of the engineers on the project who work within the digital engineering environment established by the TDM toolset and infrastructure, and from the point of view of the contractors who interface via the TDM infrastructure. Table 17.1 lists the TDM process as it relates to SSE.

  3. Optimization and life-cycle cost of health clinic PV system for a rural area in southern Iraq using HOMER software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Karaghouli, Ali; Kazmerski, L.L.

    2010-04-15

    This paper addresses the need for electricity in rural areas of southern Iraq and proposes a photovoltaic (PV) solar system to power a health clinic in that region. The total daily health clinic load is 31.6 kWh, and detailed loads are listed. The National Renewable Energy Laboratory (NREL) optimization computer model for distributed power, HOMER, is used to estimate the system size and its life-cycle cost. The analysis shows that the optimal system's initial cost, net present cost, and electricity cost are US$ 50,700, US$ 60,375, and US$ 0.238/kWh, respectively. These values for the PV system are compared with those of a generator alone used to supply the load. We found that the generator system's initial cost, net present cost, and electricity cost are US$ 4500, US$ 352,303, and US$ 1.332/kWh, respectively. We conclude that using the PV system is justified on humanitarian, technical, and economic grounds. (author)
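
    A HOMER-style comparison of this kind rests on net present cost (capital plus discounted recurring costs) and on levelized cost of electricity (net present cost spread over discounted energy served). A simplified sketch of those two quantities (Python; the input figures are illustrative assumptions, not the study's data):

      def npc(capital, annual_cost, rate, years):
          """Net present cost: capital plus discounted annual costs."""
          return capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

      def lcoe(npc_value, annual_kwh, rate, years):
          """Levelized cost of energy: NPC divided by discounted energy served."""
          energy = sum(annual_kwh / (1 + rate) ** t for t in range(1, years + 1))
          return npc_value / energy

      # Assumed figures for a small PV system serving roughly 31.6 kWh/day.
      cap, om, r, n, kwh = 50_000, 700, 0.06, 20, 31.6 * 365
      system_npc = npc(cap, om, r, n)
      print(f"NPC ${system_npc:,.0f}, LCOE ${lcoe(system_npc, kwh, r, n):.3f}/kWh")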

  4. Development and flight test experiences with a flight-crucial digital control system

    NASA Technical Reports Server (NTRS)

    Mackall, Dale A.

    1988-01-01

    Engineers and scientists in the advanced fighter technology integration (AFTI) F-16 program investigated the integration of emerging technologies into an advanced fighter aircraft. AFTI's three major technologies included: flight-crucial digital control, decoupled aircraft flight control, and integration of avionics, flight control, and pilot displays. In addition to investigating improvements in fighter performance, researchers studied the generic problems confronting the designers of highly integrated flight-crucial digital control systems. An overview is provided of both the advantages and problems of integrating digital control systems. An examination of the specification, design, qualification, and flight test life-cycle phases is also provided. An overview is given of the fault-tolerant design, multimoded decoupled flight control laws, and integrated avionics design. The approach to qualifying the software and system designs is discussed, and the effects of design choices on system qualification are highlighted.

  5. Updated System-Availability and Resource-Allocation Program

    NASA Technical Reports Server (NTRS)

    Viterna, Larry

    2004-01-01

    A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean time between failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version ran under the MS-DOS operating system and could not be run under the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
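
    The core of a Monte Carlo availability analysis like the one described can be sketched in a few lines: sample failure and repair times, accumulate uptime, and average over trials. The sketch below (Python; the exponential failure/repair distributions and all parameters are assumptions for illustration, not ACARA's model) estimates the availability of a single component:

      import random

      def availability(mtbf: float, mttr: float, mission_time: float, trials: int = 10_000) -> float:
          """Estimate availability by simulating alternating fail/repair cycles."""
          up_total = 0.0
          for _ in range(trials):
              t = up = 0.0
              while t < mission_time:
                  ttf = random.expovariate(1.0 / mtbf)       # time to failure
                  up += min(ttf, mission_time - t)
                  t += ttf + random.expovariate(1.0 / mttr)  # plus repair downtime
              up_total += up / mission_time
          return up_total / trials

      # Assumed MTBF of 500 h, MTTR of 24 h, over a one-year (8760 h) mission.
      print(f"Estimated availability: {availability(500.0, 24.0, 8760.0):.3f}")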

  6. Updating of U.S. Wood Product Life-Cycle Assessment Data for Environmental Product Declarations

    Treesearch

    Richard Bergman; Elaine Oneil; Maureen Puettmann; Ivan Eastin; Indroneil Ganguly

    2014-01-01

    The marketplace has an increasing desire for credible and transparent product eco-labels based on life-cycle assessment (LCA) data, especially involving international trade. Over the past several years, stakeholders in the U.S. wood products industry have developed many such “eco-labels” under the ISO standard of LCA-based environmental product declarations (EPDs). The...

  7. Consideration of black carbon and primary organic carbon emissions in life-cycle analysis of Greenhouse gas emissions of vehicle systems and fuels.

    PubMed

    Cai, Hao; Wang, Michael Q

    2014-10-21

    The climate impact assessment of vehicle/fuel systems may be incomplete without considering the short-lived climate forcers black carbon (BC) and primary organic carbon (POC). We quantified life-cycle BC and POC emissions of a large variety of vehicle/fuel systems with an expanded Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model developed at Argonne National Laboratory. Life-cycle BC and POC emissions have small impacts on the life-cycle greenhouse gas (GHG) emissions of gasoline, diesel, and other fuel vehicles, but would add 34, 16, and 16 g CO2 equivalent (CO2e)/mile, or 125, 56, and 56 g CO2e/mile with the 100- or 20-year Global Warming Potentials of BC and POC emissions, respectively, for vehicles fueled with corn stover-, willow tree-, and Brazilian sugarcane-derived ethanol. This is mostly due to BC- and POC-intensive biomass-fired boilers in cellulosic and sugarcane ethanol plants for steam and electricity production, biomass open burning in sugarcane fields, and diesel-powered agricultural equipment for biomass feedstock production/harvest. As a result, the life-cycle GHG emission reduction potentials of these ethanol types, though still significant, are reduced from those estimated without considering BC and POC emissions. These findings, together with a newly expanded GREET version, help quantify the previously unknown impacts of BC and POC emissions on the life-cycle GHG emissions of U.S. vehicle/fuel systems.
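
    The CO2-equivalent contributions quoted above follow from multiplying each species' life-cycle emission mass by its global warming potential. A trivial sketch of that conversion (Python; the GWP values and per-mile emissions below are rough placeholders for illustration, not GREET inputs or outputs):

      # Assumed 100-year GWPs (g CO2e per g) and per-mile emissions (g/mile).
      # Illustrative magnitudes only; POC is commonly assigned a negative GWP.
      gwp_100 = {"BC": 900.0, "POC": -46.0}
      emissions_g_per_mile = {"BC": 0.03, "POC": 0.15}

      co2e = sum(emissions_g_per_mile[s] * gwp_100[s] for s in emissions_g_per_mile)
      print(f"Added life-cycle GHG burden: {co2e:.1f} g CO2e/mile")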

  8. Petascale Computing for Ground-Based Solar Physics with the DKIST Data Center

    NASA Astrophysics Data System (ADS)

    Berukoff, Steven J.; Hays, Tony; Reardon, Kevin P.; Spiess, DJ; Watson, Fraser; Wiant, Scott

    2016-05-01

    When construction is complete in 2019, the Daniel K. Inouye Solar Telescope will be the most capable large-aperture, high-resolution, multi-instrument solar physics facility in the world. The telescope is designed as a four-meter off-axis Gregorian, with a rotating Coude laboratory designed to simultaneously house and support five first-light imaging and spectropolarimetric instruments. At the current design, the facility and its instruments will generate data volumes of 3 PB per year and produce 10^7-10^9 metadata elements. The DKIST Data Center is being designed to store, curate, and process this flood of information, while providing association of science data and metadata with their acquisition and processing provenance. The Data Center will produce quality-controlled calibrated data sets and make them available freely and openly through modern search interfaces and APIs. Documented software and algorithms will also be made available through community repositories like GitHub for further collaboration and improvement. We discuss the current design and approach of the DKIST Data Center, describing the development cycle, early technology analysis and prototyping, and the roadmap ahead. We discuss our iterative development approach, the underappreciated challenges of calibrating ground-based solar data, the crucial integration of the Data Center within the larger operations lifecycle, and how software and hardware support, intelligently deployed, will enable high-caliber solar physics research and community growth for the DKIST's 40-year lifespan.

  9. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to earlier work in which the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life-cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
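
    Studies of this kind typically regress class fault data on design metrics collected early in the life-cycle. A hedged sketch of the idea (Python with scikit-learn; the tiny dataset of Chidamber-Kemerer-style metrics is fabricated for illustration and carries no empirical weight):

      from sklearn.linear_model import LogisticRegression

      # Hypothetical per-class metrics: [WMC, DIT, CBO, RFC]; label 1 = fault reported.
      X = [[5, 1, 2, 10], [30, 3, 9, 55], [8, 2, 3, 14],
           [41, 4, 12, 70], [12, 1, 4, 20], [27, 3, 8, 48]]
      y = [0, 1, 0, 1, 0, 1]

      # Fit a simple fault-proneness classifier on the design metrics.
      model = LogisticRegression().fit(X, y)
      new_class = [[22, 2, 7, 40]]
      print(f"P(fault-prone) = {model.predict_proba(new_class)[0][1]:.2f}")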

  10. Flexibility Support for Homecare Applications Based on Models and Multi-Agent Technology

    PubMed Central

    Armentia, Aintzane; Gangoiti, Unai; Priego, Rafael; Estévez, Elisabet; Marcos, Marga

    2015-01-01

    In developed countries, public health systems are under pressure due to the increasing percentage of the population over 65. In this context, homecare based on ambient intelligence technology seems to be a suitable solution to allow elderly people to continue to enjoy the comforts of home and to help optimize medical resources. Current technological developments make it possible to build complex homecare applications that demand, among other things, flexibility mechanisms for being able to evolve as the context does (adaptability), as well as for avoiding service disruptions in the case of node failure (availability). The solution proposed in this paper copes with these flexibility requirements through the whole life-cycle of the target applications, from the design phase to runtime. The proposed domain modeling approach allows medical staff to design customized applications, taking into account the adaptability needs. It also guides software developers during system implementation. The application execution is managed by a multi-agent based middleware, making it possible to meet adaptation requirements while assuring the availability of the system, even for stateful applications. PMID:26694416

  11. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

    This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of object-oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life-cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on the experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate for predicting class fault-proneness during the early phases of the life-cycle. We also showed that, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.

  12. Collaborative business processes for enhancing partnerships among software services providers

    NASA Astrophysics Data System (ADS)

    Heil Cancian, Maiara; Rabelo, Ricardo; Gresse von Wangenheim, Christiane

    2015-08-01

    Software services have become a powerful means of realising the service-oriented architecture (SOA) paradigm. Using open standards and facilitating systems projects, they have increasingly been used as a corporate architectural approach to create interoperable services-based software solutions that can more easily be reused and shared across disparate applications. Most software companies are small firms that have enormous difficulty staying competitive. One strategy to enhance their sustainability is to expand partnerships among them at a more valuable level by jointly offering (web) services-based solutions. However, their culture of collaboration is weak, and partnerships are usually formed with the same companies and only sporadically. This article presents an approach to support more intense collaboration among software companies so they can respond to business opportunities more agilely, joining capacities and capabilities they would not have if they worked alone. This requires, however, some preparedness: from the perspective of business processes, companies should understand how to carry out a collaboration properly. This is essentially what this article is about. It presents a comprehensive list of collaborative business processes and base practices that can also act as a guide for service providers' managers to implement and manage collaboration along its lifecycle. The processes have been validated and the results are discussed.

  13. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.
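
    One way to attribute a measured surface figure error to phenomenological contributions, in the spirit of the tool described above (though its actual optimization machinery is more elaborate), is an ordinary least-squares fit of the measurement onto FEM-derived influence shapes. A minimal numpy sketch with fabricated shapes and coefficients:

      import numpy as np

      rng = np.random.default_rng(0)
      n_points = 200  # sampled points across the mirror surface

      # Fabricated "influence shapes": mount stress, polishing ripple, gravity sag.
      shapes = np.column_stack([
          np.linspace(-1, 1, n_points),            # tilt-like mount term
          np.sin(np.linspace(0, 20, n_points)),    # high-frequency polishing term
          np.linspace(-1, 1, n_points) ** 2,       # quadratic sag term
      ])

      true_coeffs = np.array([3.0, 0.5, -2.0])     # nm, assumed ground truth
      measured = shapes @ true_coeffs + rng.normal(0, 0.1, n_points)

      # Least-squares estimate of each phenomenon's contribution.
      coeffs, *_ = np.linalg.lstsq(shapes, measured, rcond=None)
      print("Recovered contributions (nm):", np.round(coeffs, 2))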

  14. Infrastructure and automobile shifts: positioning transit to reduce life-cycle environmental impacts for urban sustainability goals

    NASA Astrophysics Data System (ADS)

    Chester, Mikhail; Pincetl, Stephanie; Elizabeth, Zoe; Eisenstein, William; Matute, Juan

    2013-03-01

    Public transportation systems are often part of strategies to reduce urban environmental impacts from passenger transportation, yet comprehensive energy and environmental life-cycle measures, including upfront infrastructure effects and indirect and supply chain processes, are rarely considered. Using the new bus rapid transit and light rail lines in Los Angeles, near-term and long-term life-cycle impact assessments are developed, including consideration of reduced automobile travel. Energy consumption and emissions of greenhouse gases and criteria pollutants are assessed, as well as the potential for smog and respiratory impacts. Results show that life-cycle infrastructure, vehicle, and energy production components significantly increase the footprint of each mode (by 48-100% for energy and greenhouse gases, and up to 6200% for environmental impacts), and that emerging technologies and renewable electricity standards will significantly reduce impacts. Life-cycle results are identified as either local (in Los Angeles) or remote, and show how the decision to build and operate a transit system in a city produces environmental impacts far outside of geopolitical boundaries. Ensuring shifts of between 20-30% of transit riders from automobiles will result in passenger transportation greenhouse gas reductions for the city, and the larger the shift, the quicker the payback, which should be considered for time-specific environmental goals.

  15. Long-term shifts in life-cycle energy efficiency and carbon intensity.

    PubMed

    Yeh, Sonia; Mishra, Gouri Shankar; Morrison, Geoff; Teter, Jacob; Quiceno, Raul; Gillingham, Kenneth; Riera-Palou, Xavier

    2013-03-19

    The quantity of primary energy needed to support global human activity is in large part determined by how efficiently that energy is converted to a useful form. We estimate the system-level life-cycle energy efficiency (EF) and carbon intensity (CI) across primary resources for 2005-2100. Our results underscore that although technological improvements at each energy conversion process will improve technology efficiency and lead to important reductions in primary energy use, market-mediated effects and structural shifts toward less efficient pathways and pathways with multiple stages of conversion will dampen these efficiency gains. System-level life-cycle efficiency may decrease as mitigation efforts intensify, since low-efficiency renewable systems with high output have much lower GHG emissions than some high-efficiency fossil fuel systems. Climate policies accelerate both improvements in EF and the adoption of renewable technologies, resulting in considerably lower primary energy demand and GHG emissions. The life-cycle EF and CI of useful energy provide a useful metric for understanding the dynamics of implementing climate policies. The approaches developed here reiterate the necessity of a combination of policies that target efficiency and decarbonized energy technologies. We also examine life-cycle exergy efficiency (ExF) and find that nearly all of the qualitative results hold regardless of whether we use ExF or EF.

  16. National Geospatial Data Asset Lifecycle Baseline Maturity Assessment for the Federal Geographic Data Committee

    NASA Astrophysics Data System (ADS)

    Peltz-Lewis, L. A.; Blake-Coleman, W.; Johnston, J.; DeLoatch, I. B.

    2014-12-01

    The Federal Geographic Data Committee (FGDC) is designing a portfolio management process for 193 geospatial datasets contained within the 16 topical National Spatial Data Infrastructure themes managed under OMB Circular A-16, "Coordination of Geographic Information and Related Spatial Data Activities." The 193 datasets are designated as National Geospatial Data Assets (NGDA) because of their significance to the missions of multiple levels of government, partners, and stakeholders. As a starting point, the data managers of these NGDAs will conduct a baseline maturity assessment of the dataset(s) for which they are responsible. Maturity is measured against benchmarks related to each of the seven stages of the data lifecycle management framework promulgated in the OMB Circular A-16 Supplemental Guidance issued by OMB in November 2010. This framework was developed by the interagency Lifecycle Management Work Group (LMWG), consisting of 16 Federal agencies, under the 2004 Presidential Initiative, the Geospatial Line of Business, using OMB Circular A-130, "Management of Federal Information Resources," as guidance. The seven lifecycle stages are: Define, Inventory/Evaluate, Obtain, Access, Maintain, Use/Evaluate, and Archive. This paper will focus on the Lifecycle Baseline Maturity Assessment and efforts to integrate the FGDC approach with other data maturity assessments.

  17. Cradle-to-gate life-cycle assessment of laminated veneer lumber (LVL) produced in the Pacific Northwest region of the United States

    Treesearch

    Richard D. Bergman; Sevda Alanya-Rosenbaum

    2017-01-01

    The goal of the present study was to develop life-cycle impact assessment (LCIA) data associated with laminated veneer lumber (LVL) production in the Pacific Northwest (PNW) region of the United States from cradle-to-gate mill output. The authors collected primary (survey) mill data from LVL production facilities per Consortium on Research for Renewable Industrial...

  18. Integrating life-cycle environmental and economic assessment with transportation and land use planning.

    PubMed

    Chester, Mikhail V; Nahlik, Matthew J; Fraser, Andrew M; Kimball, Mindy A; Garikapati, Venu M

    2013-01-01

    Assessments of the environmental outcomes of urban form changes should couple life-cycle and behavioral assessment methods to better understand urban sustainability policy outcomes. Using Phoenix, Arizona light rail as a case study, an integrated transportation and land use life-cycle assessment (ITLU-LCA) framework is developed to assess the changes to energy consumption and air emissions from transit-oriented neighborhood designs. Residential travel, commercial travel, and building energy use are included, and the framework integrates household behavior change assessment to explore the environmental and economic outcomes of policies that affect infrastructure. The results show that upfront environmental and economic investments are needed (through more energy-intense building materials for high-density structures) to produce long-run benefits in reduced building energy use and automobile travel. The annualized life-cycle benefits of transit-oriented developments in Phoenix can range from 1.7 to 230 Gg CO2e depending on the aggressiveness of residential density. Midpoint impact stressors for respiratory effects and photochemical smog formation are also assessed and can be reduced by 1.2-170 Mg PM10e and 41-5200 Mg O3e annually. These benefits will come at an additional construction cost of up to $410 million, resulting in a cost of avoided CO2e of $16-29, alongside household cost savings.
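
    The "cost of avoided CO2e" metric above is simply the annualized incremental cost divided by the annual emissions reduction. A one-function sketch (Python; the numbers are placeholders for illustration, not the Phoenix results):

      def cost_of_avoided_co2e(annualized_cost_usd: float, avoided_tonnes_per_year: float) -> float:
          """Dollars per tonne of CO2e avoided (generic definition)."""
          return annualized_cost_usd / avoided_tonnes_per_year

      # E.g., $4M/yr of extra annualized construction cost avoiding 180,000 t/yr.
      print(f"${cost_of_avoided_co2e(4_000_000, 180_000):.0f} per tonne CO2e avoided")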

  19. Impact of Life-Cycle Stage and Gender on the Ability to Balance Work and Family Responsibilities.

    ERIC Educational Resources Information Center

    Higgins, Christopher; And Others

    1994-01-01

    Examined impact of gender and life-cycle stage on three components of work-family conflict using sample of 3,616 respondents. For men, levels of work-family conflict were moderately lower in each successive life-cycle stage. For women, levels were similar in two early life-cycle stages but were significantly lower in later life-cycle stage.…

  20. Lifecycle Prognostics Architecture for Selected High-Cost Active Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Lybeck; B. Pham; M. Tawfik

    There is an extensive body of knowledge, and some commercial products are available, for calculating prognostics, remaining useful life, and damage index parameters. The application of these technologies within the nuclear power community is still in its infancy. Online monitoring and condition-based maintenance are seeing increasing acceptance and deployment, and these activities provide the technological basis for expanding to add predictive/prognostic capabilities. In looking to deploy prognostics, three key aspects of such systems are presented and discussed: (1) component/system/structure selection, (2) prognostic algorithms, and (3) prognostic architectures. Criteria are presented for component selection: feasibility, failure probability, consequences of failure, and benefits of the prognostics and health management (PHM) system. The basis and methods commonly used for prognostic algorithms are reviewed and summarized. Criteria for evaluating PHM architectures are presented: open, modular architecture; platform independence; graphical user interface for system development and/or results viewing; web-enabled tools; scalability; and standards compatibility. Thirteen software products were identified and discussed in the context of being potentially useful for deployment in a PHM program applied to systems in a nuclear power plant (NPP). These products were evaluated using information available from company websites, product brochures, fact sheets, scholarly publications, and direct communication with vendors. The thirteen products were classified into four groups of software: (1) research tools, (2) PHM system development tools, (3) deployable architectures, and (4) peripheral tools. Eight software tools fell into the deployable architectures category. Of those eight, only two employ all six modules of a full PHM system. Five systems did not offer prognostic estimates, and one system employed the full health monitoring suite but lacked operations and maintenance support. Each product is briefly described in Appendix A. Selection of the most appropriate software package for a particular application will depend on the chosen component, system, or structure. Ongoing research will determine the most appropriate choices for a successful demonstration of PHM systems in aging NPPs.
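
    As a flavor of the prognostic algorithms surveyed, the sketch below (Python; a deliberately simple linear-trend extrapolation, not the method of any of the thirteen products) estimates remaining useful life by projecting a monitored damage index to a failure threshold:

      def estimate_rul(times, damage, threshold):
          """Fit a linear trend to damage-index samples and extrapolate to threshold."""
          n = len(times)
          mt, md = sum(times) / n, sum(damage) / n
          slope = (sum((t - mt) * (d - md) for t, d in zip(times, damage))
                   / sum((t - mt) ** 2 for t in times))
          if slope <= 0:
              return float("inf")  # no degradation trend observed
          intercept = md - slope * mt
          return (threshold - intercept) / slope - times[-1]

      hours = [0, 500, 1000, 1500, 2000]
      damage_index = [0.02, 0.08, 0.15, 0.19, 0.27]   # assumed monitored values
      print(f"Estimated RUL: {estimate_rul(hours, damage_index, threshold=1.0):.0f} h")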

  1. Design and life-cycle considerations for unconventional-reservoir wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miskimins, J.L.

    2009-05-15

    This paper provides an overview of design and life-cycle considerations for certain unconventional-reservoir wells. An overview of unconventional-reservoir definitions is provided. Well design and life-cycle considerations are addressed from three aspects: upfront reservoir development, initial well completion, and well-life and long-term considerations. Upfront reservoir-development issues discussed include well spacing, well orientation, reservoir stress orientations, and tubular metallurgy. Initial well-completion issues include maximum treatment pressures and rates, treatment diversion, treatment staging, flowback and cleanup, and dewatering needs. Well-life and long-term discussions include liquid loading, corrosion, refracturing and associated fracture reorientation, and the cost of abandonment. These design considerations are evaluated with case studies for five unconventional-reservoir types: shale gas (Barnett shale), tight gas (Jonah field), tight oil (Bakken play), coalbed methane (CBM) (San Juan basin), and tight heavy oil (Lost Hills field). In evaluating the life cycle and design of unconventional-reservoir wells, 'one size' does not fit all, and valuable knowledge and a shortened learning curve can be achieved for new developments by studying similar, more mature fields.

  2. Life-cycle: simulating the problems of aging and the aged.

    PubMed

    Chaisson, G M

    1977-01-01

    A review of the problems that led to the development of a social simulation game entitled "Life-Cycle" is presented, along with an explanation of the objectives of the game and how it is used in the training of health care personnel in geriatrics. Additionally, the results of a controlled experimental evaluation of the game's impact on participants, in terms of changes in emotional responses and attitudes toward the elderly, are covered.

  3. Orbit Determination and Navigation Software Testing for the Mars Reconnaissance Orbiter

    NASA Technical Reports Server (NTRS)

    Pini, Alex

    2011-01-01

    During the extended science phase of the Mars Reconnaissance Orbiter's lifecycle, the operational duties pertaining to navigation primarily involve orbit determination. The orbit determination process utilizes radiometric tracking data and is used for the prediction and reconstruction of MRO's trajectories. Predictions are done twice per week for ephemeris updates on board the spacecraft and for planning purposes. Orbit Trim Maneuvers (OTMs) are also designed using the predicted trajectory. Reconstructions, which incorporate a batch estimator, provide precise information about the spacecraft state to be synchronized with scientific measurements. These tasks were conducted regularly to validate the results obtained by the MRO Navigation Team. Additionally, the team is in the process of converting to newer versions of the navigation software and operating system. The capability to model multiple densities in the Martian atmosphere is also being implemented. Comparing outputs among these different configurations was therefore necessary to ensure agreement to a satisfactory degree.

  4. SMART Layer and SMART Suitcase for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Lin, Mark; Qing, Xinlin; Kumar, Amrita; Beard, Shawn J.

    2001-06-01

    Knowledge of the integrity of in-service structures can greatly enhance their safety and reliability and lower structural maintenance costs. Current practices limit the extent of real-time knowledge that can be obtained from structures during inspection; they are labor-intensive and thereby increase life-cycle costs. Utilization of distributed sensors integrated with the structure is a viable and cost-effective means of monitoring the structure and reducing inspection costs. Acellent Technologies is developing a novel system for actively and passively interrogating the health of a structure through an integrated network of sensors and actuators. Acellent's system comprises SMART Layers, the SMART Suitcase, and diagnostic software. The patented SMART Layer is a thin dielectric film with an embedded network of distributed piezoelectric actuators/sensors that can be surface-mounted on metallic structures or embedded inside composite structures. The SMART Suitcase is a portable diagnostic unit designed with multiple sensor/actuator channels to interface with the SMART Layer, generate diagnostic signals from the actuators, and record measurements from the embedded sensors. With appropriate diagnostic software, Acellent's system can be used for monitoring structural condition and for detecting damage while the structures are in service. This paper elaborates on the SMART Layer and SMART Suitcase and their applicability to composite and metal structures.

  5. Transportability, distributability and rehosting experience with a kernel operating system interface set

    NASA Technical Reports Server (NTRS)

    Blumberg, F. C.; Reedy, A.; Yodis, E.

    1986-01-01

    For the past two years, PRC has been transporting and installing a software engineering environment framework, the Automated Product Control Environment (APCE), at a number of PRC and government sites on a variety of different hardware. The APCE was designed using a layered architecture which is based on a standardized set of interfaces to host system services. This interface set, called the APCE Interface Set (AIS), was designed to support many of the same goals as the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). The APCE was developed to provide support for the full software lifecycle. Specific requirements of the APCE design included: automation of labor-intensive administrative and logistical tasks; freedom for project team members to use existing tools; maximum transportability for APCE programs, interoperability of APCE database data, and distributability of both processes and data; and maximum performance on a wide variety of operating systems. A brief description is given of the APCE and AIS, along with a comparison of the AIS and CAIS in terms of both functionality and philosophy and approach, and a presentation of PRC's experience in rehosting AIS and transporting APCE programs and project data. Conclusions are drawn from this experience with respect to both the CAIS efforts and Space Station plans.

  6. Life-cycle analysis of shale gas and natural gas.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, C.E.; Han, J.; Burnham, A.

    2012-01-27

    The technologies and practices that have enabled the recent boom in shale gas production have also brought attention to the environmental impacts of its use. Using the current state of knowledge of the recovery, processing, and distribution of shale gas and conventional natural gas, we have estimated up-to-date, life-cycle greenhouse gas emissions. In addition, we have developed distribution functions for key parameters in each pathway to examine uncertainty and identify data gaps - such as methane emissions from shale gas well completions and conventional natural gas liquid unloadings - that need to be addressed further. Our base case results show that shale gas life-cycle emissions are 6% lower than those of conventional natural gas. However, the ranges of values for shale and conventional gas overlap, so there is statistical uncertainty regarding whether shale gas emissions are indeed lower than conventional gas emissions. This life-cycle analysis provides insight into the critical stages in the natural gas industry where emissions occur and where opportunities exist to reduce the greenhouse gas footprint of natural gas.
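
    The statistical-uncertainty statement above comes from propagating parameter distributions through each pathway and comparing the resulting emission distributions. A toy numpy sketch of that comparison (the distribution shapes and parameters are invented for illustration, not the study's data):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Invented life-cycle emission factors, g CO2e/MJ: mean and spread per pathway.
      shale = rng.normal(loc=67.0, scale=6.0, size=n)
      conventional = rng.normal(loc=71.0, scale=6.0, size=n)

      diff = conventional - shale
      print(f"Mean reduction: {diff.mean() / conventional.mean():.1%}")
      # Overlapping distributions: the ordering is uncertain, not guaranteed.
      print(f"P(shale < conventional) = {(diff > 0).mean():.2f}")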

  7. 10 CFR 435.8 - Life-cycle costing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Life-cycle costing. 435.8 Section 435.8 Energy DEPARTMENT...-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures..., including lower life-cycle costs, positive net savings, savings-to-investment ratio that is estimated to be...

  8. 10 CFR 435.8 - Life-cycle costing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Life-cycle costing. 435.8 Section 435.8 Energy DEPARTMENT...-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures..., including lower life-cycle costs, positive net savings, savings-to-investment ratio that is estimated to be...

  9. 10 CFR 435.8 - Life-cycle costing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Life-cycle costing. 435.8 Section 435.8 Energy DEPARTMENT...-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures..., including lower life-cycle costs, positive net savings, savings-to-investment ratio that is estimated to be...

  10. 77 FR 38766 - Proposed Information Collection; Comment Request; International Client Life-Cycle Multi-Purpose...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-29

    ... Request; International Client Life-Cycle Multi-Purpose Forms AGENCY: International Trade Administration... aspects of an international organization's life-cycle with CS. CS is mandated by Congress to help U.S... trade events to U.S. organizations. The International Client Life-cycle Multi-Purpose Forms, previously...

  11. 10 CFR 435.8 - Life-cycle costing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Life-cycle costing. 435.8 Section 435.8 Energy DEPARTMENT... BUILDINGS Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.8 Life-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures...

  12. 77 FR 38582 - Proposed Information Collection; Comment Request; Domestic Client Life-Cycle Multi-Purpose Forms

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-28

    ... Request; Domestic Client Life-Cycle Multi-Purpose Forms AGENCY: International Trade Administration. ACTION... life-cycle with CS. CS is mandated by Congress to help U.S. organizations, particularly small and... Client Life-cycle Multi-Purpose Forms, previously titled Export Information Services Order Forms, are...

  13. 10 CFR 435.8 - Life-cycle costing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Life-cycle costing. 435.8 Section 435.8 Energy DEPARTMENT... BUILDINGS Mandatory Energy Efficiency Standards for Federal Low-Rise Residential Buildings. § 435.8 Life-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures...

  14. Alternative Fuels Data Center: Lifecycle Energy Balance

    Science.gov Websites


  15. Life-Cycle environmental impact assessment of mineral industries

    NASA Astrophysics Data System (ADS)

    Hisan Farjana, Shahjadi; Huda, Nazmul; Parvez Mahmud, M. A.

    2018-05-01

    Mining is the extraction and processing of valuable ferrous and non-ferrous metals and minerals to be further used in manufacturing industries. Valuable metals and minerals are extracted from geological deposits and ores deep below the surface through complex manufacturing technologies. The extraction and processing activities of mining industries involve particle emissions to air and water, toxicity to the environment, contamination of water resources, ozone layer depletion, and, most importantly, harm to human health. Despite all these negative impacts on sustainability, mining industries work throughout the world to support the employment sector, the economy, and technological growth. The five most important mining countries in the world are South Africa, Russia, Australia, Ukraine, and Guinea; mining industries contribute significantly to their GDP. The most important issue, however, is making the mining world sustainable by reducing emissions. To address the environmental impacts caused by the mining sector, this paper analyses the environmental impacts of the extraction processes for five major minerals, namely bauxite, ilmenite, iron ore, rutile, and uranium, using life-cycle impact assessment techniques. The analysis is done using SimaPro software version 8.4 with the ReCipe, CML, and Australian indicator methods.

  16. Life-cycle energy impacts for adapting an urban water supply system to droughts.

    PubMed

    Lam, Ka Leung; Stokes-Draut, Jennifer R; Horvath, Arpad; Lane, Joe L; Kenway, Steven J; Lant, Paul A

    2017-12-15

    In recent years, cities in some water-stressed regions have explored alternative water sources such as seawater desalination and potable water recycling in spite of concerns over increasing energy consumption. In this study, we evaluate the current and future life-cycle energy impacts of four alternative water supply strategies introduced during a decade-long drought in South East Queensland (SEQ), Australia. These strategies were: seawater desalination, indirect potable water recycling, network integration, and rainwater tanks. Our work highlights the energy burden of alternative water supply strategies, which added approximately 24% to the life-cycle energy use of the existing supply system (with surface water sources) in SEQ even at the current post-drought low utilisation level. Over half of this additional life-cycle energy use was from the centralised alternative supply strategies. Rainwater tanks contributed an estimated 3% to regional water supply, but added over 10% to the life-cycle energy use of the existing system. In the future scenario analysis, we compare the life-cycle energy use between "Normal", "Dry", "High water demand" and "Design capacity" scenarios. In the "Normal" scenario, long-term low utilisation of the desalination system and the water recycling system greatly reduces the energy burden of these centralised strategies to only 13%. In contrast, higher utilisation in the unlikely "Dry" and "Design capacity" scenarios adds 86% and 140% to the life-cycle energy use of the existing system, respectively. In the "High water demand" scenario, a 20% increase in per capita water use over 20 years "consumes" more energy than is used by the four alternative strategies in the "Normal" scenario. This research provides insight for developing more realistic long-term scenarios to evaluate and compare the life-cycle energy impacts of drought-adaptation infrastructure and regional decentralised water sources. Scenario building for life-cycle assessments of water supply systems should consider i) climate variability and, therefore, infrastructure utilisation rate, ii) potential under-utilisation of both installed centralised and decentralised sources, and iii) the potential energy penalty for operating infrastructure well below its design capacity (e.g., the operational energy intensity of the desalination system is three times higher at low utilisation rates). This study illustrates that evaluating the life-cycle energy use and intensity of these types of supply sources without considering their realistic long-term operating scenario(s) can distort and overemphasise their energy implications. For other water-stressed regions, this work shows that managing long-term water demand is also important, in addition to acknowledging the energy-intensive nature of some alternative water sources. Copyright © 2017 Elsevier Ltd. All rights reserved.
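
    The utilisation effect noted above (energy intensity rising roughly threefold at low utilisation) arises whenever a plant carries a fixed energy overhead: intensity = (fixed + variable x output) / output. A small sketch with assumed numbers, not the SEQ plant data:

      def energy_intensity(utilization, capacity_kl_per_day, fixed_kwh_per_day, variable_kwh_per_kl):
          """kWh per kL delivered, for a plant with a fixed standby energy overhead."""
          output = utilization * capacity_kl_per_day
          return (fixed_kwh_per_day + variable_kwh_per_kl * output) / output

      # Illustrative plant: 125 ML/day capacity, assumed fixed and variable loads.
      for u in (0.1, 0.5, 1.0):
          kwh_per_kl = energy_intensity(u, capacity_kl_per_day=125_000,
                                        fixed_kwh_per_day=150_000, variable_kwh_per_kl=3.5)
          print(f"utilisation {u:.0%}: {kwh_per_kl:.1f} kWh/kL")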

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirhonen, P.

    Life-cycle assessment is usually based on regular discharges that occur at a more or less constant rate. Nevertheless, the more factors that are taken into account in the LCA, the better the picture it gives of the environmental aspects of a product. In this study an approach to incorporating accidental releases into a product's life-cycle assessment was developed. In this approach, accidental releases are divided into two categories. The first category consists of those unplanned releases which occur with a predictable level and frequency. Due to their high frequency and small release size at a time, these accidental releases can be compared to continuous emissions, and their global impacts are studied in this approach. Accidental releases of the second category are sudden, unplanned releases caused by exceptional situations, e.g. technical failure, operator error or disturbances in process conditions. These releases have a singular character, and local impacts are typical of them. For the accidental releases of the second category, the approach introduced in this study yields a risk value for every stage of a life-cycle, the sum of which is a risk value for the whole life-cycle. The risk value is based on the occurrence frequencies of incidents and the potential environmental damage caused by releases. It illustrates the level of potential damage caused by accidental releases related to the system under study and is meant to be used for comparing these levels between two different products. It can also be used to compare the risk levels of different stages of the life-cycle. The approach was illustrated using petrol as an example product; the whole life-cycle of petrol, from crude oil production to the consumption of petrol, was studied.
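
    The second-category risk value described above is, in essence, a frequency-times-consequence sum over life-cycle stages. A minimal sketch under that reading; the stage names, frequencies and damage scores are hypothetical, not values from the petrol case study:

        # Risk value = sum over life-cycle stages of (incident frequency x
        # potential environmental damage). All numbers are hypothetical.
        stages = {
            # stage: (incidents per year, damage score per incident)
            "crude oil production": (0.02, 50.0),
            "refining":             (0.05, 20.0),
            "distribution":         (0.10, 10.0),
            "consumption":          (0.30,  1.0),
        }

        stage_risk = {s: freq * dmg for s, (freq, dmg) in stages.items()}
        total_risk = sum(stage_risk.values())

        for stage, r in stage_risk.items():
            print(f"{stage}: risk value {r:.2f}")
        print(f"whole life-cycle risk value: {total_risk:.2f}")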

  18. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    PubMed

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid-1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.
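
    The dosimetry in tools like VRdose and the Halden Planner is far more sophisticated than can be shown here, but the basic quantity they accumulate, dose along a worker's path as dose rate times time, can be illustrated with a simple point-source, inverse-square model. Everything below (source strength, positions, timing) is a toy assumption, not IFE's actual method:

        # Toy dose model: a single point source, inverse-square falloff,
        # no shielding or buildup. Not the VRdose/Halden Planner dosimetry.
        SOURCE = (0.0, 0.0, 1.0)        # source position, metres (hypothetical)
        RATE_AT_1M = 2.0                # dose rate at 1 m, mSv/h (hypothetical)

        def dose_rate(pos):
            r2 = sum((p - s) ** 2 for p, s in zip(pos, SOURCE))
            return RATE_AT_1M / max(r2, 1e-6)   # mSv/h, clamped near the source

        def path_dose(waypoints, seconds_per_leg):
            """Accumulate dose over a sampled worker path."""
            hours = seconds_per_leg / 3600.0
            return sum(dose_rate(p) * hours for p in waypoints)

        path = [(5.0, 0.0, 1.0), (3.0, 0.0, 1.0), (2.0, 1.0, 1.0)]
        print(f"estimated dose: {path_dose(path, 60) * 1000:.1f} microSv")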

  19. 32 CFR Appendix to Part 162 - Reporting Procedures

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generated. e. Projected Life-Cycle Savings. For each PIF project provide the estimated amount of savings the project is projected to earn over the project's economic life. f. Projected Life-Cycle Cost Avoidance. For... Projected Life-Cycle Savings. e. Total Projected Life-Cycle Cost Avoidance. 3. CSI. Each DoD Component that...

  20. 10 CFR 433.8 - Life-cycle costing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Life-cycle costing. 433.8 Section 433.8 Energy DEPARTMENT... HIGH-RISE RESIDENTIAL BUILDINGS § 433.8 Life-cycle costing. Each Federal agency shall determine life... choose to use any of four methods, including lower life-cycle costs, positive net savings, savings-to...

  1. 10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...) ENERGY STAR qualified and FEMP designated products may be assumed to be life-cycle cost-effective. (b) In...

  2. 10 CFR 433.8 - Life-cycle costing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Life-cycle costing. 433.8 Section 433.8 Energy DEPARTMENT... HIGH-RISE RESIDENTIAL BUILDINGS § 433.8 Life-cycle costing. Each Federal agency shall determine life... choose to use any of four methods, including lower life-cycle costs, positive net savings, savings-to...

  3. 10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...) ENERGY STAR qualified and FEMP designated products may be assumed to be life-cycle cost-effective. (b) In...

  4. 10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...) ENERGY STAR qualified and FEMP designated products may be assumed to be life-cycle cost-effective. (b) In...

  5. 10 CFR 433.8 - Life-cycle costing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Life-cycle costing. 433.8 Section 433.8 Energy DEPARTMENT... HIGH-RISE RESIDENTIAL BUILDINGS § 433.8 Life-cycle costing. Each Federal agency shall determine life... choose to use any of four methods, including lower life-cycle costs, positive net savings, savings-to...

  6. 10 CFR 433.8 - Life-cycle costing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Life-cycle costing. 433.8 Section 433.8 Energy DEPARTMENT... FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH-RISE RESIDENTIAL BUILDINGS § 433.8 Life-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures set out in subpart A...

  7. 10 CFR 433.8 - Life-cycle costing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Life-cycle costing. 433.8 Section 433.8 Energy DEPARTMENT... FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH-RISE RESIDENTIAL BUILDINGS § 433.8 Life-cycle costing. Each Federal agency shall determine life-cycle cost-effectiveness by using the procedures set out in subpart A...

  8. 10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...) ENERGY STAR qualified and FEMP designated products may be assumed to be life-cycle cost-effective. (b) In...

  9. 10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the life-cycle cost analysis method in part 436, subpart A, of title 10 of the Code of Federal... 10 Energy 3 2011-01-01 2011-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...

  10. HIV-1 Gag as an Antiviral Target: Development of Assembly and Maturation Inhibitors.

    PubMed

    Spearman, Paul

    2016-01-01

    HIV-1 Gag is the master orchestrator of particle assembly. The central role of Gag at multiple stages of the HIV lifecycle has led to efforts to develop drugs that directly target Gag and prevent the formation and release of infectious particles. Until recently, however, only the catalytic site protease inhibitors have been available to inhibit late stages of HIV replication. This review summarizes the current state of development of antivirals that target Gag or disrupt late events in the retrovirus lifecycle such as maturation of the viral capsid. Maturation inhibitors represent an exciting new series of antiviral compounds, including those that specifically target CA-SP1 cleavage and the allosteric integrase inhibitors that inhibit maturation by a completely different mechanism. Numerous small molecules and peptides targeting CA have been studied in attempts to disrupt steps in assembly. Efforts to target CA have recently gained considerable momentum from the development of small molecules that bind CA and alter capsid stability at the post-entry stage of the lifecycle. Efforts to develop antivirals that inhibit incorporation of genomic RNA or to inhibit late budding events remain in preliminary stages of development. Overall, the development of novel antivirals targeting Gag and the late stages in HIV replication appears much closer to success than ever, with the new maturation inhibitors leading the way.

  11. 10 CFR 455.64 - Life-cycle cost methodology.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Life-cycle cost methodology. 455.64 Section 455.64 Energy..., Hospitals, Units of Local Government, and Public Care Institutions § 455.64 Life-cycle cost methodology. (a) The life-cycle cost methodology under § 455.63(b) of this part is a systematic comparison of the...

  12. 10 CFR 455.64 - Life-cycle cost methodology.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Life-cycle cost methodology. 455.64 Section 455.64 Energy..., Hospitals, Units of Local Government, and Public Care Institutions § 455.64 Life-cycle cost methodology. (a) The life-cycle cost methodology under § 455.63(b) of this part is a systematic comparison of the...

  13. UTILITY OF A FULL LIFE-CYCLE COPEPOD BIOASSAY APPROACH FOR ASSESSMENT OF SEDIMENT-ASSOCIATED CONTAMINANT MIXTURES. (R825279)

    EPA Science Inventory

    Abstract

    We compared a 21 day full life-cycle bioassay with an existing 14 day partial life-cycle bioassay for two species of meiobenthic copepods, Microarthridion littorale and Amphiascus tenuiremis. We hypothesized that full life-cycle tests would bette...

  14. [Design of medical devices management system supporting full life-cycle process management].

    PubMed

    Su, Peng; Zhong, Jianping

    2014-03-01

    Based on an analysis of the present status of medical device management, this paper optimized the management process and developed a medical device management system using Web technologies. The system uses information technology to dynamically track the state of use of medical devices across their entire life-cycle. Through closed-loop management, with pre-event budgeting, mid-event control and after-event analysis, it improves the precision of medical device management, optimizes asset allocation and promotes the effective operation of devices.

  15. JPL Innovation Foundry

    NASA Technical Reports Server (NTRS)

    Sherwood, Brent; McCleese, Daniel J.

    2012-01-01

    NASA supports the community of mission principal investigators by helping them ideate, mature, and propose concepts for new missions. As NASA's Federally Funded Research and Development Center (FFRDC), JPL is a primary resource for providing this service. The environmental context for the formulation lifecycle evolves continuously. Contemporary trends include: more competitors; more-complex mission ideas; scarcer formulation resources; and higher standards for technical evaluation. Derived requirements for formulation support include: stable, clear, reliable methods tailored for each stage of the formulation lifecycle; on-demand access to standout technical and programmatic subject-matter experts; optimized, outfitted facilities; smart access to learning embodied in a vast oeuvre of prior formulation work; hands-on method coaching. JPL has retooled its provision of integrated formulation lifecycle support to PIs, teams, and program offices in response to this need. This mission formulation enterprise is the JPL Innovation Foundry.

  16. The United States Geological Survey Science Data Lifecycle Model

    USGS Publications Warehouse

    Faundeen, John L.; Burley, Thomas E.; Carlino, Jennifer A.; Govoni, David L.; Henkel, Heather S.; Holl, Sally L.; Hutchison, Vivian B.; Martín, Elizabeth; Montgomery, Ellyn T.; Ladino, Cassandra; Tessler, Steven; Zolly, Lisa S.

    2014-01-01

    U.S. Geological Survey (USGS) data represent corporate assets with potential value beyond any immediate research use, and therefore need to be accounted for and properly managed throughout their lifecycle. Recognizing these motives, a USGS team developed a Science Data Lifecycle Model (SDLM) as a high-level view of data—from conception through preservation and sharing—to illustrate how data management activities relate to project workflows, and to assist with understanding the expectations of proper data management. In applying the Model to research activities, USGS scientists can ensure that data products will be well-described, preserved, accessible, and fit for re-use. The Model also serves as a structure to help the USGS evaluate and improve policies and practices for managing scientific data, and to identify areas in which new tools and standards are needed.

  17. The TMIS life-cycle process document, revision A

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Technical and Management Information System (TMIS) Life-Cycle Process Document describes the processes that shall be followed in the definition, design, development, test, deployment, and operation of all TMIS products and data base applications. This document is a roll-out of the TMIS Standards Document (SSP 30546). The purpose of this document is to define the life-cycle methodology that developers of all products and data base applications, and of any subsequent modifications, shall follow. Included in this methodology are descriptions of the tasks, deliverables, reviews, and approvals that are required before a product or data base application is accepted in the TMIS environment.

  18. How Metamorphosis Is Different in Plethodontids: Larval Life History Perspectives on Life-Cycle Evolution

    PubMed Central

    Beachy, Christopher K.; Ryan, Travis J.; Bonett, Ronald M.

    2017-01-01

    Plethodontid salamanders exhibit biphasic, larval-form paedomorphic, and direct-developing life cycles. This diversity of developmental strategies exceeds that of any other family of terrestrial vertebrates. Here we compare patterns of larval development among the three divergent lineages of biphasic plethodontids and other salamanders. We discuss how patterns of life-cycle evolution and larval ecology might have produced a wide array of larval life histories. Compared with many other salamanders, most larval plethodontids have relatively slow growth rates and sometimes exceptionally long larval periods (up to 60 mo). Recent phylogenetic analyses of life-cycle evolution indicate that ancestral plethodontids were likely direct developers. If true, then biphasic and paedomorphic lineages might have been independently derived through different developmental mechanisms. Furthermore, biphasic plethodontids largely colonized stream habitats, which tend to have lower productivity than seasonally ephemeral ponds. Consistent with this, plethodontid larvae grow very slowly, and metamorphic timing does not appear to be strongly affected by growth history. On the basis of this, we speculate that feeding schedules and stress hormones might play a comparatively reduced role in governing the timing of metamorphosis of stream-dwelling salamanders, particularly plethodontids. PMID:29269959

  19. Proteomic Analysis of the Schistosoma mansoni Miracidium.

    PubMed

    Wang, Tianfang; Zhao, Min; Rotgans, Bronwyn A; Strong, April; Liang, Di; Ni, Guoying; Limpanont, Yanin; Ramasoota, Pongrama; McManus, Donald P; Cummins, Scott F

    2016-01-01

    Despite extensive control efforts, schistosomiasis continues to be a major public health problem in developing nations in the tropics and sub-tropics. The miracidium, along with the cercaria, both of which are water-borne and free-living, are the only two stages in the life-cycle of Schistosoma mansoni which are involved in host invasion. Miracidia penetrate intermediate host snails and develop into sporocysts, which lead to cercariae that can infect humans. Infection of the snail host by the miracidium represents an ideal point at which to interrupt the parasite's life-cycle. This research focuses on an analysis of the miracidium proteome, including those proteins that are secreted. We have identified a repertoire of proteins in the S. mansoni miracidium at 2 hours post-hatch, including proteases, venom allergen-like proteins, receptors and HSP70, which might play roles in snail-parasite interplay. Proteins involved in energy production and conservation were prevalent, as were proteins predicted to be associated with defence. This study also provides a strong foundation for further understanding the roles that neurohormones play in host-seeking by schistosomes, with the potential for development of novel anthelmintics that interfere with its various life-cycle stages.

  20. Understanding future emissions from low-carbon power systems by integration of life-cycle assessment and integrated energy modelling

    NASA Astrophysics Data System (ADS)

    Pehl, Michaja; Arvesen, Anders; Humpenöder, Florian; Popp, Alexander; Hertwich, Edgar G.; Luderer, Gunnar

    2017-12-01

    Both fossil-fuel and non-fossil-fuel power technologies induce life-cycle greenhouse gas emissions, mainly due to their embodied energy requirements for construction and operation, and upstream CH4 emissions. Here, we integrate prospective life-cycle assessment with global integrated energy-economy-land-use-climate modelling to explore life-cycle emissions of future low-carbon power supply systems and implications for technology choice. Future per-unit life-cycle emissions differ substantially across technologies. For a climate protection scenario, we project life-cycle emissions from fossil fuel carbon capture and sequestration plants of 78-110 gCO2eq/kWh, compared with 3.5-12 gCO2eq/kWh for nuclear, wind and solar power for 2050. Life-cycle emissions from hydropower and bioenergy are substantial (~100 gCO2eq/kWh), but highly uncertain. We find that cumulative emissions attributable to upscaling low-carbon power other than hydropower are small compared with direct sectoral fossil fuel emissions and the total carbon budget. Fully considering life-cycle greenhouse gas emissions has only modest effects on the scale and structure of power production in cost-optimal mitigation scenarios.

  1. Role of neural networks for avionics

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher L.; DeYong, Mark R.; Eskridge, Thomas C.

    1995-08-01

    Neural network (NN) architectures provide a thousand-fold speed-up in computational power per watt, along with the flexibility to learn and adapt so as to reduce software life-cycle costs. Thus NNs are poised to provide a key supporting role in meeting the avionics upgrade challenge for affordable improved mission capability, especially near the hardware, where flexible and powerful smart processing is needed. This paper summarizes the trends for air combat and the resulting avionics needs. A paradigm for information fusion and response management is then described, from which viewpoint the role for NNs as a complementary technology in meeting these avionics challenges is explained, along with the key obstacles for NNs.

  2. Product Lifecycle Management and the Quest for Sustainable Space Exploration Solutions

    NASA Technical Reports Server (NTRS)

    Caruso, Pamela W.; Dumbacher, Daniel L.; Grieves, Michael

    2011-01-01

    Product Lifecycle Management (PLM) is an outcome of lean thinking to eliminate waste and increase productivity. PLM is inextricably tied to the systems engineering business philosophy, coupled with a methodology by which personnel, processes and practices, and information technology combine to form an architecture platform for product design, development, manufacturing, operations, and decommissioning. In this model, which is being implemented by the Marshall Space Flight Center (MSFC) Engineering Directorate, total lifecycle costs are important variables for critical decision-making. With the ultimate goal to deliver quality products that meet or exceed requirements on time and within budget, PLM is a powerful concept to shape everything from engineering trade studies and testing goals, to integrated vehicle operations and retirement scenarios. This briefing will demonstrate how the MSFC Engineering Directorate is implementing PLM as part of an overall strategy to deliver safe, reliable, and affordable space exploration solutions and how that strategy aligns with the Agency and Center systems engineering policies and processes. Sustainable space exploration solutions demand that all lifecycle phases be optimized, and engineering the next generation space transportation system requires a paradigm shift such that digital tools and knowledge management, which are central elements of PLM, are used consistently to maximum effect. Adopting PLM, which has been used by the aerospace and automotive industry for many years, for spacecraft applications provides a foundation for strong, disciplined systems engineering and accountable return on investment. PLM enables better solutions using fewer resources by making lifecycle considerations in an integrative decision-making process.

  3. Consensus-Driven Development of a Terminology for Biobanking, the Duke Experience.

    PubMed

    Ellis, Helena; Joshi, Mary-Beth; Lynn, Aenoch J; Walden, Anita

    2017-04-01

    Biobanking at Duke University has existed for decades and has grown over time in silos and based on specialized needs, as is true with most biomedical research centers. These silos developed informatics systems to support their own individual requirements, with no regard for semantic or syntactic interoperability. Duke undertook an initiative to implement an enterprise-wide biobanking information system to serve its many diverse biobanking entities. A significant part of this initiative was the development of a common terminology for use in the commercial software platform. Common terminology provides the foundation for interoperability across biobanks for data and information sharing. We engaged experts in research, informatics, and biobanking through a consensus-driven process to agree on 361 terms and their definitions that encompass the lifecycle of a biospecimen. Existing standards, common terms, and data elements from published articles provided a foundation on which to build the biobanking terminology; a broader set of stakeholders then provided additional input and feedback in a secondary vetting process. The resulting standardized biobanking terminology is now available for sharing with the biobanking community to serve as a foundation for other institutions who are considering a similar initiative.

  4. Consensus-Driven Development of a Terminology for Biobanking, the Duke Experience

    PubMed Central

    Joshi, Mary-Beth; Lynn, Aenoch J.; Walden, Anita

    2017-01-01

    Biobanking at Duke University has existed for decades and has grown over time in silos and based on specialized needs, as is true with most biomedical research centers. These silos developed informatics systems to support their own individual requirements, with no regard for semantic or syntactic interoperability. Duke undertook an initiative to implement an enterprise-wide biobanking information system to serve its many diverse biobanking entities. A significant part of this initiative was the development of a common terminology for use in the commercial software platform. Common terminology provides the foundation for interoperability across biobanks for data and information sharing. We engaged experts in research, informatics, and biobanking through a consensus-driven process to agree on 361 terms and their definitions that encompass the lifecycle of a biospecimen. Existing standards, common terms, and data elements from published articles provided a foundation on which to build the biobanking terminology; a broader set of stakeholders then provided additional input and feedback in a secondary vetting process. The resulting standardized biobanking terminology is now available for sharing with the biobanking community to serve as a foundation for other institutions who are considering a similar initiative. PMID:28338350

  5. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to respond rapidly to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open-interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as is rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and its simplicity of modification lets it migrate between weapon system variants. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.

  6. Cost-effectiveness Analysis for Technology Acquisition.

    PubMed

    Chakravarty, A; Naware, S S

    2008-01-01

    In a developing country with limited resources, it is important to utilize the total cost visibility approach over the entire life-cycle of the technology and then analyse alternative options for acquiring technology. The present study analysed cost-effectiveness of an "In-house" magnetic resonance imaging (MRI) scan facility of a large service hospital against outsourcing possibilities. Cost per unit scan was calculated by operating costing method and break-even volume was calculated. Then life-cycle cost analysis was performed to enable total cost visibility of the MRI scan in both "In-house" and "outsourcing of facility" configuration. Finally, cost-effectiveness analysis was performed to identify the more acceptable decision option. Total cost for performing unit MRI scan was found to be Rs 3,875 for scans without contrast and Rs 4,129 with contrast. On life-cycle cost analysis, net present value (NPV) of the "In-house" configuration was found to be Rs-(4,09,06,265) while that of "outsourcing of facility" configuration was Rs-(5,70,23,315). Subsequently, cost-effectiveness analysis across eight Figures of Merit showed the "In-house" facility to be the more acceptable option for the system. Every decision for acquiring high-end technology must be subjected to life-cycle cost analysis.
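
    The life-cycle comparison in this study rests on a standard net present value calculation: discount each year's cash outflows and sum them over the equipment's economic life. A sketch with hypothetical figures; the study's actual cost inputs and discount rate are not reproduced here:

        # Net present value of life-cycle costs (all figures hypothetical).
        def npv(cashflows, rate):
            """Discount a list of yearly cash flows (year 0 first)."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        years = 10
        discount_rate = 0.08

        in_house   = [-30_000_000] + [-2_500_000] * years   # capital then O&M, Rs
        outsourced = [0] + [-6_000_000] * years             # per-year contract, Rs

        print(f"in-house NPV:   Rs {npv(in_house, discount_rate):,.0f}")
        print(f"outsourced NPV: Rs {npv(outsourced, discount_rate):,.0f}")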

  7. Providing Data Quality Information for Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Albrecht, F.; Blaschke, T.; Lang, S.; Abdulmutalib, H. M.; Szabó, G.; Barsi, Á.; Batini, C.; Bartsch, A.; Kugler, Zs.; Tiede, D.; Huang, G.

    2018-04-01

    The availability and accessibility of remote sensing (RS) data, cloud processing platforms, and provided information products and services have increased the size and diversity of the RS user community. This development also generates a need for validation approaches to assess data quality. Validation approaches employ quality criteria in their assessment. Data Quality (DQ) dimensions as the basis for quality criteria have been deeply investigated in the database area and in the remote sensing domain. Several standards exist within the RS domain, but a general classification, established for databases, has been adapted only recently. For an easier identification of research opportunities, a better understanding is required of how quality criteria are employed in the RS lifecycle. Therefore, this research investigates how quality criteria support decisions that guide the RS lifecycle and how they relate to the measured DQ dimensions. An overview of the relevant standards in the RS domain follows, matched to the RS lifecycle. Finally, the research needs are identified that would enable a complete understanding of the interrelationships between the RS lifecycle, the data sources and the DQ dimensions, an understanding that would be very valuable for designing validation approaches in RS.

  8. Commercial Building Energy Asset Score

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This software (Asset Scoring Tool) is designed to help building owners and managers gain insight into the as-built efficiency of their buildings. It is a web tool where users can enter their building information and obtain an asset score report. The asset score report consists of modeled building energy use (by end use and by fuel type), building systems (envelope, lighting, heating, cooling, service hot water) evaluations, and recommended energy efficiency measures. The intended users are building owners and operators who have limited knowledge of building energy efficiency. The scoring tool collects minimal building data (~20 data entries) from users and builds a full-scale energy model using the inference functionalities from the Facility Energy Decision System (FEDS). The scoring tool runs real-time building energy simulation using EnergyPlus and performs life-cycle cost analysis using FEDS. An API is also under development to allow third-party applications to exchange data with the web service of the scoring tool.
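
    The pipeline the abstract describes (minimal user inputs, inference to a full model, simulation, then life-cycle costing) can be sketched as a simple chain of stages. Every function below is a hypothetical stand-in that only marks the stages; the real tool's FEDS, EnergyPlus, and web-service interfaces are not shown:

        # Hypothetical sketch of the asset-scoring pipeline; none of these
        # functions exist in the real tool.
        def infer_full_model(user_inputs: dict) -> dict:
            """Fill in unspecified building parameters from ~20 user entries."""
            defaults = {"wall_r_value": 13, "lighting_w_per_m2": 10.0}
            return {**defaults, **user_inputs}

        def simulate_energy_use(model: dict) -> dict:
            """Stand-in for an EnergyPlus run; returns energy by end use."""
            area = model["floor_area_m2"]
            return {"heating": 0.12 * area, "cooling": 0.08 * area,
                    "lighting": model["lighting_w_per_m2"] * area * 3.0 / 1000.0}

        def life_cycle_cost(energy_by_end_use: dict, price_per_kwh: float) -> float:
            """Stand-in for the FEDS life-cycle cost step (one year, undiscounted)."""
            return sum(energy_by_end_use.values()) * price_per_kwh

        model = infer_full_model({"floor_area_m2": 2000.0})
        energy = simulate_energy_use(model)
        print(energy, "annual cost:", life_cycle_cost(energy, 0.1))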

  9. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.
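
    The three-axis structure described (coverage, estimation, timing) maps naturally onto a lookup table keyed by axis coordinates. A sketch of how such an LCC matrix might be represented; the axis values and example entries are assumptions for illustration, not the paper's actual taxonomy:

        # Hypothetical representation of a 3-D LCC estimating matrix keyed
        # by (coverage, estimation method, lifecycle timing). Axis values
        # and entries are invented for illustration.
        coverage   = ["development", "deployment", "operations"]
        estimation = ["parametric", "analogy", "expert opinion"]
        timing     = ["concept", "prototype", "fielded"]

        lcc_matrix = {}   # (coverage, estimation, timing) -> chosen technique

        def set_cell(matrix, cov, est, tim, value):
            assert cov in coverage and est in estimation and tim in timing
            matrix[(cov, est, tim)] = value

        set_cell(lcc_matrix, "development", "parametric", "concept",
                 "COCOMO-style model")
        set_cell(lcc_matrix, "operations", "analogy", "fielded",
                 "scale from a prior DST")

        for key, technique in lcc_matrix.items():
            print(key, "->", technique)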

  10. Assessing Requirements Volatility and Risk Using Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Russell, Michael S.

    2010-01-01

    There are many factors that affect the level of requirements volatility a system experiences over its lifecycle and the risk that volatility imparts. Improper requirements generation, undocumented user expectations, conflicting design decisions, and anticipated / unanticipated world states are representative of these volatility factors. Combined, these volatility factors can increase programmatic risk and adversely affect successful system development. This paper proposes that a Bayesian Network can be used to support reasonable judgments concerning the most likely sources and types of requirements volatility a developing system will experience prior to starting development and by doing so it is possible to predict the level of requirements volatility the system will experience over its lifecycle. This assessment offers valuable insight to the system's developers, particularly by providing a starting point for risk mitigation planning and execution.
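
    As a concrete illustration of the idea, even a two-node Bayesian network can express "probability of high requirements volatility given poor requirements generation" and support the diagnostic queries the abstract describes. The structure and all probabilities below are invented for illustration; the paper's actual network is not reproduced:

        # Tiny Bayesian network: RequirementsQuality -> Volatility.
        # All probabilities are illustrative assumptions.
        p_quality = {"good": 0.7, "poor": 0.3}                 # prior
        p_volatility_given_quality = {                         # CPT
            "good": {"low": 0.8, "high": 0.2},
            "poor": {"low": 0.3, "high": 0.7},
        }

        # Marginal probability of high volatility (sum over parent states).
        p_high = sum(
            p_quality[q] * p_volatility_given_quality[q]["high"]
            for q in p_quality
        )
        print(f"P(volatility=high) = {p_high:.2f}")

        # Posterior of poor requirements generation given observed high
        # volatility (Bayes' rule): the diagnostic direction of reasoning.
        p_poor_given_high = (
            p_quality["poor"] * p_volatility_given_quality["poor"]["high"] / p_high
        )
        print(f"P(quality=poor | volatility=high) = {p_poor_given_high:.2f}")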

  11. Improving Life-Cycle Cost Management of Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Clardy, Dennon

    2010-01-01

    This presentation will explore the results of a recent NASA Life-Cycle Cost study and how project managers can use the findings and recommendations to improve planning and coordination early in the formulation cycle and avoid common pitfalls resulting in cost overruns. The typical NASA space science mission will exceed both the initial estimated and the confirmed life-cycle costs by the end of the mission. In a fixed-budget environment, these overruns translate to delays in starting or launching future missions, or in the worst case can lead to cancelled missions. Some of these overruns are due to issues outside the control of the project; others are due to the unpredictable problems (unknown unknowns) that can affect any development project. However, a recent study of life-cycle cost growth by the Discovery and New Frontiers Program Office identified a number of areas that are within the scope of project management to address. The study also found that the majority of the underlying causes for cost overruns are embedded in the project approach during the formulation and early design phases, but the actual impacts typically are not experienced until late in the project life cycle. Thus, project management focus in key areas such as integrated schedule development, management structure and contractor communications processes, heritage and technology assumptions, and operations planning, can be used to validate initial cost assumptions and set in place management processes to avoid the common pitfalls resulting in cost overruns.

  12. Life of Sugar: Developing Lifecycle Methods to Evaluate the Energy and Environmental Impacts of Sugarcane Biofuels

    NASA Astrophysics Data System (ADS)

    Gopal, Anand Raja

    Lifecycle Assessment (LCA) is undergoing a period of rapid change as it strives to become more policy-relevant. Attributional LCA, the traditional LCA category, is beginning to be seen as particularly ill-equipped to assess the consequences of a policy. This has given birth to a new category of LCA known as Consequential LCA that is designed for use in LCA-based policies but is still largely unknown, even to LCA experts, and suffers from a lack of well developed methods. As a result, many LCA-based policies, like the California Low Carbon Fuel Standard (LCFS), use poor LCA methods that are both scientifically suspect and unable to model many biofuels, especially ones manufactured from byproduct feedstocks. Biofuels made from byproduct feedstocks, primarily molasses ethanol from Asia and the Caribbean, can contribute significantly to LCFS' carbon intensity targets in the near-term at low costs, a desperate need for the policy ever since US corn ethanol was rated as having a worse global warming impact than gasoline. In this dissertation, I develop the first fully consequential lifecycle assessment of a byproduct-based biofuel using a partial equilibrium foundation. I find that the lifecycle carbon content of Indian molasses ethanol is just 5 gCO2/MJ using this method, making it one of the cleanest first generation biofuels in the LCFS. I also show that Indian molasses ethanol remains one of the cleanest first-generation biofuels even when using the flawed methodology ratified for the LCFS, with a lifecycle carbon content of 24 gCO2/MJ. My fully consequential LCA model also shows that India's Ethanol Blending program, which currently subsidizes blending of molasses ethanol and gasoline for domestic consumption, can meet its objective of supporting domestic agriculture more cost-effectively by helping producers export their molasses ethanol to fuel markets that value carbon. However, this objective will be achieved at a significant cost to the poor who will face a 39% increase in the price of sorghum because of the policy.

  13. Using PIDs to Support the Full Research Data Publishing Lifecycle

    NASA Astrophysics Data System (ADS)

    Waard, A. D.

    2016-12-01

    Persistent identifiers can help support scientific research, track scientific impact and let researchers achieve recognition for their work. We discuss a number of ways in which Elsevier utilizes PIDs to support the scholarly lifecycle. To improve the process of storing and sharing data, Mendeley Data (http://data.mendeley.com) makes use of persistent identifiers to support the dynamic nature of data and software by tracking and recording the provenance and versioning of datasets. This system now allows the comparison of different versions of a dataset, to see precisely what was changed during a versioning update. To present research data in context for the reader, we include PIDs in research articles as hyperlinks: https://www.elsevier.com/books-and-journals/content-innovation/data-base-linking. In some cases, PIDs fetch data files from repositories that allow the embedding of visualizations, e.g. with PANGAEA and PubChem: https://www.elsevier.com/books-and-journals/content-innovation/protein-viewer; https://www.elsevier.com/books-and-journals/content-innovation/pubchem. To normalize referenced data elements, the Resource Identification Initiative, which we developed together with members of the Force11 RRID group, introduces a unified standard for resource identifiers (RRIDs) that can easily be interpreted by both humans and text mining tools (https://www.force11.org/group/resource-identification-initiative/update-resource-identification-initiative), as can be seen in our Antibody Data app: https://www.elsevier.com/books-and-journals/content-innovation/antibody-data. To enable better citation practices and support a robust metrics system for sharing research data, we have helped develop, and are early adopters of, the Force11 Data Citation Principles and Implementation groups (https://www.force11.org/group/dcip). Lastly, through our work with the Research Data Alliance Publishing Data Services group, we helped create a set of guidelines (http://www.scholix.org/guidelines) and a demonstrator service (http://dliservice.research-infrastructures.eu/#/) for a linked data network connecting datasets, articles, and individuals, all of which rely on robust PIDs.

  14. Evaluating nanotechnology opportunities and risks through integration of life-cycle and risk assessment.

    PubMed

    Tsang, Michael P; Kikuchi-Uehara, Emi; Sonnemann, Guido W; Aymonier, Cyril; Hirao, Masahiko

    2017-08-04

    It has been some 15 years since the topics of sustainability and nanotechnologies first appeared together in the scientific literature and became a focus of organizations' research and policy developments. On the one hand, this focus is directed towards approaches and tools for risk assessment and management and on the other hand towards life-cycle thinking and assessment. Comparable to their application for regular chemicals, each tool is seen to serve separate objectives as it relates to evaluating nanotechnologies' safety or resource efficiency, respectively. While nanomaterials may provide resource efficient production and consumption, this must balance any potential hazards they pose across their life-cycles. This Perspective advocates for integrating these two tools at the methodological level for achieving this objective, and it explains what advantages and challenges this offers decision-makers while highlighting what research is needed to further enhance integration.

  15. The influence of socializing agents in the female sport-participation process.

    PubMed

    Higginson, D C

    1985-01-01

    This study investigated the influence of socializing agents on the female athletes who participated in the Empire State Games in Syracuse, New York (N = 587). When comparisons were made across life-cycle stages (under-13, junior high, and senior high), statistically significant differences were found (p < 0.0001). The socializing-agent influences changed from mostly parental at the under-13 life-cycle stage to mostly coach/teacher oriented during the junior and senior high school years. No significant differences occurred across the three life-cycle stages when the following items were examined: (a) the number of sports in which the female athletes participated, (b) the number of sports learned by the female athletes, and (c) the amount of sport interest developed. It was concluded that socializing-agent influences do exist, but change during different periods of the life cycle.

  16. Clinical Research Informatics: Supporting the Research Study Lifecycle.

    PubMed

    Johnson, S B

    2017-08-01

    Objectives: The primary goal of this review is to summarize significant developments in the field of Clinical Research Informatics (CRI) over the years 2015-2016. The secondary goal is to contribute to a deeper understanding of CRI as a field, through the development of a strategy for searching and classifying CRI publications. Methods: A search strategy was developed to query the PubMed database, using medical subject headings to both select and exclude articles, and filtering publications by date and other characteristics. A manual review classified publications using stages in the "research study lifecycle", with key stages that include study definition, participant enrollment, data management, data analysis, and results dissemination. Results: The search strategy generated 510 publications. The manual classification identified 125 publications as relevant to CRI, which were classified into seven different stages of the research lifecycle, plus one additional class that pertained to multiple stages, referring to general infrastructure or standards. Important cross-cutting themes included new applications of electronic media (Internet, social media, mobile devices), standardization of data and procedures, and increased automation through the use of data mining and big data methods. Conclusions: The review revealed increased interest and support for CRI in large-scale projects across institutions, regionally, nationally, and internationally. A search strategy based on medical subject headings can find many relevant papers, but a large number of non-relevant papers must be filtered out using text words that pertain to closely related fields such as computational statistics and clinical informatics. The research lifecycle was useful as a classification scheme because it highlights the relevance of clinical research informatics solutions to their users.
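
    A search strategy like the one described is typically executed against PubMed programmatically via NCBI's E-utilities. The sketch below shows the general esearch pattern; the MeSH query string is a simplified stand-in, not the review's actual strategy:

        import json
        from urllib.parse import urlencode
        from urllib.request import urlopen

        # NCBI E-utilities esearch; the query below is a simplified example,
        # not the review's actual search strategy.
        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        params = {
            "db": "pubmed",
            "term": ('"Biomedical Research"[MeSH] AND "Medical Informatics"[MeSH] '
                     'AND ("2015/01/01"[PDAT] : "2016/12/31"[PDAT])'),
            "retmode": "json",
            "retmax": 100,
        }

        with urlopen(f"{BASE}?{urlencode(params)}") as resp:
            result = json.load(resp)["esearchresult"]

        print("hits:", result["count"])
        print("first PMIDs:", result["idlist"][:5])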

  17. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. This paper will also address the testing approach, covering all levels of testing: development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included. The deployment approach will also be addressed, including user involvement in testing and the need for a smooth transition while maintaining real-time support. An important aspect of the paper will be challenges and lessons learned. These include COTS product compatibility, implications of phasing decisions, and tracking of dependencies, particularly non-software dependencies. The paper will also discuss the scheduling challenges of providing real-time flight support during the migration and the requirement to incorporate into the migration changes being made simultaneously for flight support.

  18. Towards a Lifecycle Information Framework and Technology in Manufacturing.

    PubMed

    Hedberg, Thomas; Feeney, Allison Barnard; Helu, Moneer; Camelio, Jaime A

    2017-06-01

    Industry has been chasing the dream of integrating and linking data across the product lifecycle and enterprises for decades. However, industry has been challenged by the fact that the context in which data is used varies based on the function / role in the product lifecycle that is interacting with the data. Holistically, the data across the product lifecycle must be considered an unstructured data-set because multiple data repositories and domain-specific schema exist in each phase of the lifecycle. This paper explores a concept called the Lifecycle Information Framework and Technology (LIFT). LIFT is a conceptual framework for lifecycle information management and the integration of emerging and existing technologies, which together form the basis of a research agenda for dynamic information modeling in support of digital-data curation and reuse in manufacturing. This paper provides a discussion of the existing technologies and activities that the LIFT concept leverages. Also, the paper describes the motivation for applying such work to the domain of manufacturing. Then, the LIFT concept is discussed in detail, while underlying technologies are further examined and a use case is detailed. Lastly, potential impacts are explored.

  19. Strategic balance of drug lifecycle management options differs between domestic and foreign companies in Japan.

    PubMed

    Yamanaka, Takayuki; Kano, Shingo

    2016-01-01

    Drug approvals and patent protections are critical in drug lifecycle management (LCM) in order to maximize drug discovery investment returns. We analyzed drug LCM activities implemented by 10 top companies in Japan, focusing on drug approvals and patent term extensions. Foreign companies acquired numerous drug approvals primarily for new molecular entities (NMEs), while Japanese companies mainly obtained approvals for improved drugs including new indications, and intensively extended patent terms. Furthermore, we discovered three factors likely responsible for differences in drug LCM strategies of Japanese and foreign companies: research and development capacities for drugs, drug lags of foreign-origin NMEs, and cooperation between Research and Development Departments and Intellectual Property Departments.

  20. Comprehensive Lifecycle for Assuring System Safety

    NASA Technical Reports Server (NTRS)

    Knight, John C.; Rowanhill, Jonathan C.

    2017-01-01

    CLASS is a novel approach to the enhancement of system safety in which the system safety case becomes the focus of safety engineering throughout the system lifecycle. CLASS also expands the role of the safety case across all phases of the system's lifetime, from concept formation to decommissioning. As CLASS has been developed, the concept has been generalized to a more comprehensive notion of assurance becoming the driving goal, where safety is an important special case. This report summarizes major aspects of CLASS and contains a bibliography of papers that provide additional details.

  1. Methods Used to Support a Life Cycle of Complex Engineering Products

    NASA Astrophysics Data System (ADS)

    Zakharova, Alexandra A.; Kolegova, Olga A.; Nekrasova, Maria E.; Eremenko, Andrey O.

    2016-08-01

    The management of companies involved in the design, development and operation of complex engineering products recognizes the relevance of creating systems for product lifecycle management. A system of methods is proposed to support the life cycles of complex engineering products, based on fuzzy set theory and hierarchical analysis. The system of methods provides grounds for making strategic decisions in an environment of uncertainty, allows the use of expert knowledge, and provides interconnection of decisions at all phases of strategic management and all stages of a complex engineering product's lifecycle.
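
    One common way to combine fuzzy set theory with hierarchical analysis, consistent with the approach sketched in the abstract, is to rate alternatives with triangular fuzzy numbers and aggregate the ratings with criterion weights. The criteria, weights and ratings below are invented for illustration, not the paper's model:

        # Fuzzy weighted scoring with triangular fuzzy numbers (low, mid, high).
        # Criteria, weights and ratings are illustrative assumptions.
        weights = {"cost": 0.5, "reliability": 0.3, "maintainability": 0.2}

        ratings = {   # expert ratings of one design alternative, per criterion
            "cost":            (0.4, 0.6, 0.8),
            "reliability":     (0.6, 0.8, 0.9),
            "maintainability": (0.3, 0.5, 0.7),
        }

        def weighted_fuzzy_score(weights, ratings):
            """Aggregate triangular fuzzy ratings by the criterion weights."""
            return tuple(
                sum(weights[c] * ratings[c][i] for c in weights)
                for i in range(3)
            )

        def defuzzify(tfn):
            """Centroid of a triangular fuzzy number."""
            return sum(tfn) / 3.0

        score = weighted_fuzzy_score(weights, ratings)
        print(f"fuzzy score {score}, crisp value {defuzzify(score):.3f}")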

  2. Emerging technologies for V&V of ISHM software for space exploration

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Markosian, Lawrence Z.

    2006-01-01

    Systems required to exhibit high operational reliability often rely on some form of fault protection to recognize and respond to faults, preventing faults' escalation to catastrophic failures. Integrated System Health Management (ISHM) extends the functionality of fault protection to both scale to more complex systems (and systems of systems), and to maintain capability rather than just avert catastrophe. Forms of ISHM have been utilized to good effect in the maintenance phase of systems' total lifecycles (often referred to as 'condition-based maintenance'), but less so in a 'fault protection' role during actual operations. One of the impediments to such use lies in the challenges of verification, validation and certification of ISHM systems themselves. This paper makes the case that state-of-the-practice V&V and certification techniques will not suffice for emerging forms of ISHM systems; however, a number of maturing software engineering assurance technologies show particular promise for addressing these ISHM V&V challenges.

  3. The ALICE Software Release Validation cluster

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Krzewicki, M.

    2015-12-01

    One of the most important steps of the software lifecycle is Quality Assurance: this process comprises both automated tests and manual reviews, all of which must pass successfully before the software is approved for production. Some tests, such as source code static analysis, are executed on a single dedicated service; in High Energy Physics, a full simulation and reconstruction chain on a distributed computing environment, backed with a "golden" sample dataset, is also necessary for quality sign-off. The ALICE experiment uses dedicated and virtualized computing infrastructures for the Release Validation in order not to taint the production environment (i.e. CVMFS and the Grid) with non-validated software and validation jobs: the ALICE Release Validation cluster is a disposable virtual cluster appliance based on CernVM and the Virtual Analysis Facility, capable of deploying on demand, with a single command, a dedicated virtual HTCondor cluster with an automatically scalable number of virtual workers on any cloud supporting the standard EC2 interface. Input and output data are stored externally on EOS, and a dedicated CVMFS service is used to provide the software to be validated. We will show how Release Validation cluster deployment and disposal are completely transparent to the Release Manager, who simply triggers the validation from the ALICE build system's web interface. CernVM 3, based entirely on CVMFS, makes it possible to boot any past snapshot of the operating system: we will show how this allows us to certify each ALICE software release for an exact CernVM snapshot, addressing the problem of Long-Term Data Preservation by ensuring a consistent environment for software execution and data reprocessing in the future.
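
    The paper's cluster is built with CernVM and the Virtual Analysis Facility rather than with the code below, but the idea of disposable, on-demand workers on any EC2-compatible cloud can be sketched with boto3 against the standard EC2 API. The image ID, instance type, region and counts are placeholders, not ALICE's configuration:

        import boto3

        # Illustrative on-demand worker provisioning against the standard
        # EC2 API; not the ALICE/CernVM tooling. Identifiers are placeholders.
        ec2 = boto3.client("ec2", region_name="eu-west-1")

        response = ec2.run_instances(
            ImageId="ami-00000000000000000",   # placeholder CernVM-like image
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=10,                       # scale worker count on demand
            UserData="#!/bin/sh\n# join the HTCondor pool here (site-specific)\n",
        )

        worker_ids = [i["InstanceId"] for i in response["Instances"]]
        print("started validation workers:", worker_ids)

        # Dispose of the cluster when validation finishes.
        ec2.terminate_instances(InstanceIds=worker_ids)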

  4. Materials Lifecycle and Environmental Consideration at NASA

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    2010-01-01

    The aerospace community faces tremendous challenges with the continued availability of existing material supply chains during the lifecycle of a program. Many obsolescence drivers affect the availability of materials: environmental safety and health regulations, vendor and supply economics, market sector demands, and natural disasters. Materials selection has become increasingly critical when designing aerospace hardware. NASA and DoD conducted a workshop with subject matter experts to discuss issues and define solutions for materials selection during the lifecycle phases of a product/system/component. The three primary lifecycle phases were: Conceptualization/Design, Production & Sustainment, and End of Life/Reclamation. Materials obsolescence and pollution prevention considerations were explored for the aforementioned lifecycle phases. The recommended solutions from the workshop are presented.

  5. Training Module on the Development of Best Modeling Practices

    EPA Pesticide Factsheets

    This module continues the fundamental concepts outlined in the previous modules. Objectives are to identify the ‘best modeling practices’ and strategies for the Development Stage of the model life-cycle and define the steps of model development.

  6. Bus Lifecycle Cost Model for Federal Land Management Agencies.

    DOT National Transportation Integrated Search

    2011-09-30

    The Bus Lifecycle Cost Model is a spreadsheet-based planning tool that estimates capital, operating, and maintenance costs for various bus types over the full lifecycle of the vehicle. The model is based on a number of operating characteristics, incl...
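
    As a minimal illustration of what such a spreadsheet model computes, the sketch below discounts capital, operating and maintenance costs over a vehicle's service life; all figures are invented, not the model's defaults:

        # Minimal discounted lifecycle-cost sketch in the spirit of the model
        # described above; the actual tool is spreadsheet-based.
        def bus_lifecycle_cost(capital, annual_operating, annual_maintenance,
                               service_years, discount_rate=0.03):
            cost = capital
            for year in range(1, service_years + 1):
                cost += (annual_operating + annual_maintenance) / (1 + discount_rate) ** year
            return cost

        # Hypothetical comparison of two bus types over a 12-year life.
        print(bus_lifecycle_cost(450_000, 60_000, 25_000, 12))   # diesel-like, invented
        print(bus_lifecycle_cost(700_000, 35_000, 20_000, 12))   # electric-like, invented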

  7. Life-cycle greenhouse gas emissions of shale gas, natural gas, coal, and petroleum.

    PubMed

    Burnham, Andrew; Han, Jeongwoo; Clark, Corrie E; Wang, Michael; Dunn, Jennifer B; Palou-Rivera, Ignasi

    2012-01-17

    The technologies and practices that have enabled the recent boom in shale gas production have also brought attention to the environmental impacts of its use. It has been debated whether the fugitive methane emissions during natural gas production and transmission outweigh the lower carbon dioxide emissions during combustion when compared to coal and petroleum. Using the current state of knowledge of methane emissions from shale gas, conventional natural gas, coal, and petroleum, we estimated up-to-date life-cycle greenhouse gas emissions. In addition, we developed distribution functions for key parameters in each pathway to examine uncertainty and identify data gaps, such as methane emissions from shale gas well completions and conventional natural gas liquid unloadings, that need to be further addressed. Our base case results show that shale gas life-cycle emissions are 6% lower than those of conventional natural gas, 23% lower than gasoline, and 33% lower than coal. However, the ranges of values for shale and conventional gas overlap, so it is statistically uncertain whether shale gas emissions are indeed lower than those of conventional gas. Moreover, this life-cycle analysis, among other work in this area, provides insight into critical stages on which the natural gas industry and government agencies can work together to reduce the greenhouse gas footprint of natural gas.
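
    The overlap between the shale and conventional ranges can be illustrated with a toy Monte Carlo in the spirit of the distribution functions mentioned above; the triangular distributions and emission factors below are invented for demonstration, not the paper's inventory data:

        # Toy Monte Carlo: sample an uncertain upstream methane term per pathway
        # and count how often shale comes out lower. All numbers are invented.
        import random

        def lifecycle_ghg(combustion, upstream_low, upstream_mode, upstream_high):
            return combustion + random.triangular(upstream_low, upstream_high, upstream_mode)

        random.seed(1)
        trials = 100_000
        shale_lower = sum(
            lifecycle_ghg(57, 8, 12, 20) < lifecycle_ghg(57, 9, 13, 22)  # g CO2e/MJ, illustrative
            for _ in range(trials)
        )
        print(f"shale < conventional in {shale_lower / trials:.1%} of draws")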

  8. Towards a Lifecycle Information Framework and Technology in Manufacturing

    PubMed Central

    Hedberg, Thomas; Feeney, Allison Barnard; Helu, Moneer; Camelio, Jaime A.

    2016-01-01

    Industry has been chasing the dream of integrating and linking data across the product lifecycle and across enterprises for decades. However, industry has been challenged by the fact that the context in which data is used varies based on the function/role in the product lifecycle that interacts with the data. Holistically, the data across the product lifecycle must be considered an unstructured data set because multiple data repositories and domain-specific schemas exist in each phase of the lifecycle. This paper explores a concept called the Lifecycle Information Framework and Technology (LIFT). LIFT is a conceptual framework for lifecycle information management and the integration of emerging and existing technologies, which together form the basis of a research agenda for dynamic information modeling in support of digital-data curation and reuse in manufacturing. This paper provides a discussion of the existing technologies and activities that the LIFT concept leverages. Also, the paper describes the motivation for applying such work to the domain of manufacturing. Then, the LIFT concept is discussed in detail, while underlying technologies are further examined and a use case is detailed. Lastly, potential impacts are explored. PMID:28265224

  9. [Life-cycles, psychopathology and suicidal behaviour].

    PubMed

    Osváth, Péter

    2012-12-01

    According to modern psychological theories, human life involves continuous development, and the efficient solution of age-specific problems is necessary for the successful transition between age periods. The phases of transition are particularly vulnerable to incidental stressors and negative life events. Problem-solving capacity may thus become exhausted, which impairs the chances of coping successfully with stressful events. This can result in negative consequences such as various psychopathological symptoms (depression, anxiety, psychosis) or even suicidal behaviour. For that reason, special attention must be paid to the symptoms of psychological crisis and the presuicidal syndrome. In certain life-cycle transitions (such as adolescence, middle age or old age), the personality is especially vulnerable to the development of psychological and psychopathological problems. In this article, the most important features of life-cycles and the associated psychopathological symptoms are reviewed. The developmental and age-specific characteristics are especially important for understanding the background of an acute psychological crisis and improving the efficacy of treatment. Using a complex bio-psycho-socio-spiritual approach, not only the actual psychopathological problems but also the individual's psychological features can be recognised. Effective treatment thus relieves not only the presenting symptoms but also increases the chance of solving further crises.

  10. Optimization of monitoring and inspections in the life-cycle of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2016-04-01

    The past decade has witnessed a surge in offshore wind farm developments across the world. Although this form of cleaner and greener energy is beneficial and eco-friendly, the production of wind energy entails high life-cycle costs. The costs associated with inspections, monitoring and repairs of wind turbines are primary contributors to the high cost of electricity produced in this way and are disadvantageous in today's competitive economic environment. Limited research has been done on the probabilistic optimization of the life-cycle costs of offshore wind turbine structures and their components. This paper proposes a framework for assessing the life-cycle cost of wind turbine structures subject to damage and deterioration. The objective of the paper is to develop a mathematical probabilistic cost assessment framework which considers deterioration, inspection, monitoring, repair and maintenance models and their uncertainties. The uncertainties lie in the accuracy and precision of the monitoring and inspection methods and can be accounted for through the probability of damage detection of each method. Schedules for inspection, monitoring and repair actions are demonstrated using a decision tree. Examples of a generalised deterioration process integrated with the cost analysis using a decision tree are shown for a wind turbine foundation structure.
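
    A minimal sketch of the decision-tree idea: each inspection method either detects damage (with some probability of detection) and triggers a repair, or misses it and leaves a residual failure risk. All probabilities and costs are invented:

        # Decision-tree expected cost of one inspection cycle (illustrative only).
        def expected_cost(c_inspect, pod, p_damage, c_repair, p_fail_if_missed, c_failure):
            detect_branch = p_damage * pod * c_repair
            miss_branch = p_damage * (1 - pod) * p_fail_if_missed * c_failure
            return c_inspect + detect_branch + miss_branch

        # Compare a cheap, low-PoD visual inspection to a costly, high-PoD NDT method.
        print(expected_cost(5_000, 0.60, 0.10, 40_000, 0.5, 2_000_000))
        print(expected_cost(25_000, 0.95, 0.10, 40_000, 0.5, 2_000_000))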

  11. Reducing Life-Cycle Costs.

    ERIC Educational Resources Information Center

    Roodvoets, David L.

    2003-01-01

    Presents factors to consider when determining roofing life-cycle costs, explaining that costs do not tell the whole story; discussing components that should go into the decision (cost, maintenance, energy use, and environmental costs); and concluding that important elements in reducing life-cycle costs include energy savings through increased…

  12. Project Morpheus: Lean Development of a Terrestrial Flight Testbed for Maturing NASA Lander Technologies

    NASA Technical Reports Server (NTRS)

    Devolites, Jennifer L.; Olansen, Jon B.

    2015-01-01

    NASA's Morpheus Project has developed and tested a prototype planetary lander capable of vertical takeoff and landing that is designed to serve as a testbed for advanced spacecraft technologies. The lander vehicle, propelled by a Liquid Oxygen (LOX)/Methane engine and sized to carry a 500 kg payload to the lunar surface, provides a platform for bringing technologies from the laboratory into an integrated flight system at relatively low cost. In 2012, Morpheus began integrating the Autonomous Landing and Hazard Avoidance Technology (ALHAT) sensors and software onto the vehicle in order to demonstrate safe, autonomous landing and hazard avoidance. From the beginning, one of the goals for the Morpheus Project was to streamline agency processes and practices. The Morpheus project accepted a challenge to tailor the traditional NASA systems engineering approach in a way that would be appropriate for a lower-cost, rapid prototype engineering effort, but retain the essence of the guiding principles. This paper describes the tailored project life cycle and systems engineering approach for the Morpheus project, including the processes, tools, and amount of rigor employed over the project's multiple lifecycles since the project began in fiscal year (FY) 2011.

  13. Integrated Vehicle Health Management (IVHM) for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Baroth, Edmund C.; Pallix, Joan

    2006-01-01

    To achieve NASA's ambitious Integrated Space Transportation Program objectives, aerospace systems will implement a variety of new concepts in health management. System-level integration of IVHM technologies for real-time control and system maintenance will have a significant impact on system safety and lifecycle costs. IVHM technologies will enhance the safety and success of complex missions despite component failures, degraded performance, operator errors, and environmental uncertainty. IVHM also has the potential to reduce, or even eliminate, many of the costly inspections and operations activities required by current and future aerospace systems. This presentation will describe the array of NASA programs participating in the development of IVHM technologies for NASA missions. Future vehicle systems will use models of the system, its environment, and other intelligent agents with which they may interact. IVHM will be incorporated into future mission planners, reasoning engines, and adaptive control systems that can recommend or execute commands enabling the system to respond intelligently in real time. In the past, software errors and/or faulty sensors have been identified as significant contributors to mission failures. This presentation will also address the development and utilization of highly dependable software and sensor technologies, which are key components in ensuring the reliability of IVHM systems.

  14. Process change evaluation framework for allogeneic cell therapies: impact on drug development and commercialization.

    PubMed

    Hassan, Sally; Huang, Hsini; Warren, Kim; Mahdavi, Behzad; Smith, David; Jong, Simcha; Farid, Suzanne S

    2016-04-01

    Some allogeneic cell therapies requiring a high dose of cells for large indication groups demand a change in cell expansion technology, from planar units to microcarriers in single-use bioreactors for the market phase. The aim was to model the optimal timing for making this change. A development lifecycle cash flow framework was created to examine the implications of process changes to microcarrier cultures at different stages of a cell therapy's lifecycle. The analysis performed under assumptions used in the framework predicted that making this switch earlier in development is optimal from a total expected out-of-pocket cost perspective. From a risk-adjusted net present value view, switching at Phase I is economically competitive but a post-approval switch can offer the highest risk-adjusted net present value as the cost of switching is offset by initial market penetration with planar technologies. The framework can facilitate early decision-making during process development.
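
    The switch-timing comparison rests on risk-adjusted NPV, where each year's cash flow is weighted by the probability of the therapy reaching that stage and discounted. A toy sketch with invented phase probabilities, costs and revenues (not the framework's actual inputs):

        # Risk-adjusted NPV sketch of the switch-timing question.
        def risk_adjusted_npv(cashflows, p_reach, rate=0.10):
            # cashflows[t] and p_reach[t] are aligned per year (illustrative units).
            return sum(p * cf / (1 + rate) ** t
                       for t, (cf, p) in enumerate(zip(cashflows, p_reach)))

        # Switch at Phase I: pay the process-change cost early, while success is unsure.
        early = risk_adjusted_npv([-30, -10, -15, -20, 120, 120], [1.0, 0.6, 0.45, 0.3, 0.25, 0.25])
        # Switch post-approval: defer the cost until the product is on the market.
        late = risk_adjusted_npv([-5, -10, -15, -45, 115, 120], [1.0, 0.6, 0.45, 0.3, 0.25, 0.25])
        print(early, late)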

  15. DISSECTING COLONY DEVELOPMENT OF NEUROSPORA CRASSA USING mRNA PROFILING AND COMPARATIVE GENOMICS APPROACHES

    USDA-ARS?s Scientific Manuscript database

    Colony development, which includes hyphal extension, branching, anastomosis and asexual sporulation, encompasses fundamental aspects of the lifecycle of filamentous fungi; the genetic mechanisms underlying these phenomena are poorly understood. We conducted transcriptional profiling during colony development of...

  16. Federated provenance of oceanographic research cruises: from metadata to data

    NASA Astrophysics Data System (ADS)

    Thomas, Rob; Leadbetter, Adam; Shepherd, Adam

    2016-04-01

    The World Wide Web Consortium's Provenance Data Model and associated Semantic Web ontology (PROV-O) have created much interest in the Earth and Space Science Informatics community (Ma et al., 2014). Indeed, PROV-O has recently been posited as an upper ontology for the alignment of various data models (Cox, 2015). Similarly, PROV-O has been used as the building blocks of a data release lifecycle ontology (Leadbetter & Buck, 2015). In this presentation we show that the alignment between different local data descriptions of an oceanographic research cruise can be achieved through alignment with PROV-O, and that descriptions of the funding bodies, organisations and researchers involved in a cruise and its associated data release lifecycle can be modelled within a PROV-O based environment. We show that, at first order, this approach is scalable by presenting results from three endpoints (the Biological and Chemical Oceanography Data Management Office at Woods Hole Oceanographic Institution, USA; the British Oceanographic Data Centre at the National Oceanography Centre, UK; and the Marine Institute, Ireland). Current advances in ontology engineering provide pathways to resolving reasoning issues arising from varying perspectives on implementing PROV-O. This includes the use of the Information Object design pattern (Arora et al., 2006), where such edge cases as research cruise scheduling efforts are considered. PROV-O describes only things which have happened, but the Information Object design pattern allows for the description of planned research cruises through its statement that the local data description is not the entity itself (in this case the planned research cruise), and therefore the local data description itself can be described using the PROV-O model. In particular, we present the use of the data lifecycle ontology to show the connection between research cruise activities and their associated datasets, and the publication of those datasets online with Digital Object Identifiers and, more formally, in data journals. Use of the SPARQL 1.1 standard allows queries to be federated across these endpoints to create a distributed network of provenance documents. Future research directions will add further nodes to the federated network of oceanographic research cruise provenance to determine the true scalability of this approach, and will involve analysis of, and possible evolution of, the data release lifecycle ontology. References: Nitin Arora et al., 2006. Information object design pattern for modeling domain specific knowledge. 1st ECOOP Workshop on Domain-Specific Program Development. Simon Cox, 2015. Pitfalls in alignment of observation models resolved using PROV as an upper ontology. Abstract IN33F-07 presented at the American Geophysical Union Fall Meeting, 14-18 December, San Francisco. Adam Leadbetter & Justin Buck, 2015. Where did my data layer come from? The semantics of data release. Geophysical Research Abstracts 17, EGU2015-3746-1. Xiaogang Ma et al., 2014. Ontology engineering in provenance enablement for the National Climate Assessment. Environmental Modelling & Software 61, 191-205. http://dx.doi.org/10.1016/j.envsoft.2014.08.002
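
    A federated SPARQL 1.1 query of the kind described, joining PROV-O descriptions across two endpoints with the SERVICE keyword, might look like the sketch below; the endpoint URLs are placeholders, not the actual BCO-DMO, BODC or Marine Institute services:

        # Illustrative federated PROV-O query using SPARQLWrapper.
        from SPARQLWrapper import SPARQLWrapper, JSON

        query = """
        PREFIX prov: <http://www.w3.org/ns/prov#>
        SELECT ?cruise ?dataset WHERE {
          ?cruise a prov:Activity .
          SERVICE <https://endpoint-b.example.org/sparql> {
            ?dataset prov:wasGeneratedBy ?cruise .
          }
        }
        LIMIT 10
        """

        sparql = SPARQLWrapper("https://endpoint-a.example.org/sparql")  # placeholder endpoint
        sparql.setQuery(query)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["cruise"]["value"], row["dataset"]["value"])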

  17. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration.

    PubMed

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-07-08

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) Background: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of efficiently integrating the newly developed product-service ontology model; (2) Methods: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) Results: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails a richer knowledge extraction. The ontology-based PLM model also enables the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) Conclusion: The framework was applied to a fictitious case study with an electric car service for the purpose of demonstration. To demonstrate the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database.
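
    As a toy illustration of the kind of product-service lifecycle triples such a framework manages, the sketch below builds a tiny RDF graph; the paper's implementation uses Sesame APIs, whereas rdflib and the namespace terms here are used purely for illustration:

        # Toy RDF sketch of product-service lifecycle triples (invented vocabulary).
        from rdflib import Graph, Literal, Namespace, RDF

        PS = Namespace("http://example.org/product-service#")  # hypothetical namespace
        g = Graph()

        g.add((PS.car42, RDF.type, PS.Product))
        g.add((PS.batterySwap1, RDF.type, PS.ServiceEvent))
        g.add((PS.batterySwap1, PS.performedOn, PS.car42))
        g.add((PS.batterySwap1, PS.recordedValue, Literal(78.5)))  # e.g. battery health %

        print(g.serialize(format="turtle"))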

  18. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration

    PubMed Central

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-01-01

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) Background: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of efficiently integrating the newly developed product-service ontology model; (2) Methods: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) Results: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails a richer knowledge extraction. The ontology-based PLM model also enables the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) Conclusion: The framework was applied to a fictitious case study with an electric car service for the purpose of demonstration. To demonstrate the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database. PMID:27399717

  19. Major weapon system environmental life-cycle cost estimating for Conservation, Cleanup, Compliance and Pollution Prevention (C3P2)

    NASA Technical Reports Server (NTRS)

    Hammond, Wesley; Thurston, Marland; Hood, Christopher

    1995-01-01

    The Titan IV Space Launch Vehicle Program is one of many major weapon system programs that have modified acquisition plans and operational procedures to meet new, stringent environmental rules and regulations. The Environmental Protection Agency (EPA) and the Department of Defense (DOD) mandate to reduce the use of ozone-depleting chemicals (ODCs) is just one of the regulatory changes that has affected the program. In the last few years, public environmental awareness, coupled with stricter environmental regulations, has created the need for DOD to produce environmental life-cycle cost estimates (ELCCE) for every major weapon system acquisition program. The environmental impact of the weapon system must be assessed and budgeted, considering all costs, from cradle to grave. The Office of the Secretary of Defense (OSD) has proposed that organizations consider Conservation, Cleanup, Compliance and Pollution Prevention (C3P2) issues associated with each acquisition program to assess life-cycle impacts and costs. The Air Force selected the Titan IV system as the pilot program for estimating life-cycle environmental costs. The estimating task required participants to develop an ELCCE methodology, collect data to test the methodology, and produce a credible cost estimate within the DOD C3P2 definition. The estimating methodology included using the Program Office weapon system description and work breakdown structure together with operational site and manufacturing plant visits to identify environmental cost drivers. The results of the Titan IV ELCCE process are discussed and expanded to demonstrate how they can be applied to satisfy any life-cycle environmental cost estimating requirement.

  20. Parking infrastructure: energy, emissions, and automobile life-cycle environmental accounting

    NASA Astrophysics Data System (ADS)

    Chester, Mikhail; Horvath, Arpad; Madanat, Samer

    2010-07-01

    The US parking infrastructure is vast and little is known about its scale and environmental impacts. The few parking space inventories that exist are typically regionalized and no known environmental assessment has been performed to determine the energy and emissions from providing this infrastructure. A better understanding of the scale of US parking is necessary to properly value the total costs of automobile travel. Energy and emissions from constructing and maintaining the parking infrastructure should be considered when assessing the total human health and environmental impacts of vehicle travel. We develop five parking space inventory scenarios and from these estimate the range of infrastructure provided in the US to be between 105 million and 2 billion spaces. Using these estimates, a life-cycle environmental inventory is performed to capture the energy consumption and emissions of greenhouse gases, CO, SO2, NOX, VOC (volatile organic compounds), and PM10 (PM: particulate matter) from raw material extraction, transport, asphalt and concrete production, and placement (including direct, indirect, and supply chain processes) of space construction and maintenance. The environmental assessment is then evaluated within the life-cycle performance of sedans, SUVs (sports utility vehicles), and pickups. Depending on the scenario and vehicle type, the inclusion of parking within the overall life-cycle inventory increases energy consumption from 3.1 to 4.8 MJ by 0.1-0.3 MJ and greenhouse gas emissions from 230 to 380 g CO2e by 6-23 g CO2e per passenger kilometer traveled. Life-cycle automobile SO2 and PM10 emissions show some of the largest increases, by as much as 24% and 89% from the baseline inventory. The environmental consequences of providing the parking spaces are discussed as well as the uncertainty in allocating paved area between parking and roadways.

  1. Lifecycle greenhouse gas emissions of coal, conventional and unconventional natural gas for electricity generation

    EPA Science Inventory

    An analysis of the lifecycle greenhouse gas (GHG) emissions associated with natural gas use recently published by Howarth et al. (2011) stated that use of natural gas produced from shale formations via hydraulic fracturing would generate greater lifecycle GHG emissions than petro...

  2. Enterprise Information Lifecycle Management

    DTIC Science & Technology

    2011-01-01

  3. Evaluation of life-cycle air emission factors of freight transportation.

    PubMed

    Facanha, Cristiano; Horvath, Arpad

    2007-10-15

    Life-cycle air emission factors associated with road, rail, and air transportation of freight in the United States are analyzed. All life-cycle phases of vehicles, infrastructure, and fuels are accounted for in a hybrid life-cycle assessment (LCA). It includes not only fuel combustion, but also emissions from vehicle manufacturing, maintenance, and end of life; infrastructure construction, operation, maintenance, and end of life; and petroleum exploration, refining, and fuel distribution. Results indicate that total life-cycle emissions of freight transportation modes are underestimated if only tailpipe emissions are accounted for. In the case of CO2 and NOx, tailpipe emissions underestimate total emissions by up to 38%, depending on the mode. Total life-cycle emissions of CO and SO2 are up to seven times higher than tailpipe emissions. Sensitivity analysis considers the effects of vehicle type, geography, and mode efficiency on the final results. Policy implications of this analysis are also discussed. For example, while it is widely assumed that currently proposed regulations will result in substantial reductions in emissions, we find that this is true for NOx emissions, because fuel combustion is the main cause, and to a lesser extent for SO2, but not for PM10 emissions, which are significantly affected by the other life-cycle phases.
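
    The underestimation argument is simple arithmetic over life-cycle phases; a worked toy example follows (phase values are invented, not the paper's inventory):

        # Sum the life-cycle phases and compare with combustion alone.
        phases_g_per_tonne_km = {
            "tailpipe_combustion": 62.0,
            "fuel_production": 11.0,
            "vehicle_manufacturing_maintenance": 6.0,
            "infrastructure": 9.0,
        }

        total = sum(phases_g_per_tonne_km.values())
        tailpipe = phases_g_per_tonne_km["tailpipe_combustion"]
        print(f"tailpipe share: {tailpipe / total:.0%}; underestimate: {1 - tailpipe / total:.0%}")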

  4. A case of a facultative life-cycle diversification in the fluke Pleurogenoides sp. (Lecithodendriidae, Plagiorchiida).

    PubMed

    Hassl, Andreas R

    2010-10-01

    Numerous specimens of the native intestinal digenean fluke Pleurogenoides sp. (Lecithodendriidae, Plagiorchiida), a genus known for the simultaneous co-existence of genuine adults and progenetic, adult-like metacercariae, were found by chance parasitizing the oesophagus of a recently imported tropical Bristly Bush Viper (Atheris hispida). The snake had previously been force-fed with native water frogs, the assumed definitive host of these flukes. Hence water frogs act as the second intermediate host or as a paratenic host for Pleurogenoides flukes, as they must house progenetic fluke larvae, which develop into genuine adults when transmitted to an appropriate consecutive host, the ancestral definitive host, a reptile. The European Pleurogenoides fluke species seem to display a facultative life-cycle diversification: they can adjust their life-history strategy according to their immediate transmission opportunities. This phenotypic plasticity allows the parasite to respond quickly to any changes in the abundance of a host; usually this biological oddity results in a life-cycle truncation by the elimination of the definitive host.

  5. Connecting Research and Practice: An Experience Report on Research Infusion with SAVE

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Stratton, William C.; Sibol, Deane E.; Ackermann, Christopher; Reid, W. Mark; Ganesan, Dharmalingam; McComas, David; Bartholomew, Maureen; Godfrey, Sally

    2009-01-01

    NASA systems need to be highly dependable to avoid catastrophic mission failures. This calls for rigorous engineering processes, including meticulous validation and verification. However, NASA systems are often highly distributed and overwhelmingly complex, making the software portion of these systems challenging to understand, maintain, change, reuse, and test. NASA's systems are long-lived, and the software maintenance process typically constitutes 60-80% of the total cost of the entire lifecycle. Thus, in addition to the technical challenges of ensuring high lifetime quality of NASA's systems, the post-development phase also presents a significant financial burden. Some of NASA's software-related challenges could potentially be addressed by the many powerful technologies being developed in software research laboratories. Many of these research technologies seek to facilitate maintenance and evolution by, for example, architecting, designing and modeling for quality, flexibility, and reuse. Other technologies attempt to detect and remove defects and other quality issues through various forms of automated defect detection, architecture analysis, and sophisticated simulation and testing. However promising, most such research technologies nevertheless do not make the transition from the research lab to the software lab. One reason the transition from research to practice seldom occurs is that research infusion and technology transfer are difficult. For example, factors related to the technology are sometimes overshadowed by other factors, such as reluctance to change, which prevents the technology from taking hold. Successful infusion can also take a very long time: one famous study showed that the gap between the conception of an idea and its practical use was 18 years, plus or minus three. Nevertheless, infusing new technology is possible. We have found that it takes special circumstances for such research infusion to succeed: 1) there must be evidence that the technology works in the practitioner's particular domain, 2) there must be a potential for great improvements and enhanced competitive edge for the practitioner, 3) the practitioner has to have strong individual curiosity and a continuous interest in trying out new technologies, 4) the practitioner has to have support on multiple levels (i.e., from the researchers, from management, from sponsors, etc.), and 5) to remain infused, the new technology has to be integrated into the practitioner's processes so that it becomes a natural part of the daily work. NASA IV&V's Research Infusion initiative, sponsored by NASA's Office of Safety & Mission Assurance (OSMA) through the Software Assurance Research Program (SARP), strives to overcome some of the problems related to research infusion.

  6. Life cycle assessment of medium-density fiberboard (MDF) manufacturing process in Brazil.

    PubMed

    Piekarski, Cassiano Moro; de Francisco, Antonio Carlos; da Luz, Leila Mendes; Kovaleski, João Luiz; Silva, Diogo Aparecido Lopes

    2017-01-01

    Brazil is one of the largest producers of medium-density fibreboard (MDF) in the world, and MDF also has the highest domestic consumption and production rate in the country. MDF applications are prominent in residential and commercial furniture design, and the material also sees wide use in the building sector. This study aimed to propose ways of improving the environmental cradle-to-gate life-cycle of one cubic meter of MDF panel by means of a life-cycle assessment (LCA) study. Complying with the requirements of the ISO 14040 and 14044 standards, different MDF manufacturing scenarios were modelled using Umberto® v.5.6 software and the Ecoinvent v.2.2 life-cycle inventory (LCI) database for the Brazilian context. Environmental and human health impacts were assessed using the CML (2001) and USEtox (2008) methods. The evaluated impact categories were: acidification, global warming, ozone layer depletion, abiotic resource depletion, photochemical formation of tropospheric ozone, ecotoxicity, eutrophication and human toxicity. Results identified the following hotspots: gas consumption at the thermal plant, urea-formaldehyde resin, power consumption, wood chip consumption and wood chip transportation to the plant. The improvement scenario proposals comprised the following actions: eliminate natural gas consumption at the thermal plant, reduce electrical power consumption, reduce or replace urea-formaldehyde resin consumption, reduce wood consumption and minimize the distance to wood chip suppliers. The proposed actions were analysed to verify the influence of each action on the set of impact categories. Among the results, it can be noted that joint implementation of the proposed improvements can reduce impacts by up to 38.5% for ozone layer depletion, 34.4% for abiotic depletion, 31.2% for ecotoxicity, and 30.4% for human toxicity. Finally, MDF was compared with particleboard production in Brazil, and additional opportunities to improve the MDF environmental profile were identified. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Next Generation Cloud-based Science Data Systems and Their Implications on Data and Software Stewardship, Preservation, and Provenance

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.

    2017-12-01

    NASA's upcoming missions are expected to generate data volumes at least an order of magnitude larger than those of current missions. A significant increase in data processing, data rates, data volumes, and long-term data archive capabilities is needed. Consequently, new challenges are emerging that impact traditional data and software management approaches. At large scales, next-generation science data systems are exploring the move onto cloud computing paradigms to support these increased needs. New implications, such as costs, data movement, collocation of data systems and archives, and moving processing closer to the data, may result in changes to the stewardship, preservation, and provenance of science data and software. With more science data systems being onboarded onto cloud computing facilities, we can expect more Earth science data records to be both generated and kept in the cloud. But at large scales, the cost of processing and storing global data may impact architectural and system designs. Data systems will trade the cost of keeping data in the cloud against data life-cycle approaches that move "colder" data back to traditional on-premise facilities. How will this impact data citation and processing software stewardship? What are the impacts of cloud-based on-demand processing, and what is its effect on reproducibility and provenance? Similarly, with more science processing software being moved onto cloud, virtual machine, and container-based approaches, more opportunities arise for improved stewardship and preservation. But will the science community trust data reprocessed years or decades later? We will also explore emerging questions about the stewardship of the science data system software that generates the science data records, both during and after the life of a mission.

  8. Competitive Strategies of States: A Life-Cycle Perspective. EQW Working Papers.

    ERIC Educational Resources Information Center

    Flynn, Patricia M.

    This paper demonstrates that production life-cycle models provide a conceptual framework to analyze systematically the interrelationships between industrial and technological change and human resources. Section II presents the life-cycle model, focusing on its implications for the types and level of employment and skill requirements in an area.…

  9. Adult Development and the Workplace.

    ERIC Educational Resources Information Center

    Heffernan, James M.

    Little attention has been given to how adults develop through their lifetimes and what roles their workplace environments play in that development. Research and theory regarding adult psychosocial development have confirmed the developmental life-cycle phases of adulthood. These are: leaving the family (ages 16-22), getting into the adult world…

  10. A perspective on cost-effectiveness of greenhouse gas reduction solutions in water distribution systems

    NASA Astrophysics Data System (ADS)

    Hendrickson, Thomas P.; Horvath, Arpad

    2014-01-01

    Water distribution systems (WDSs) face great challenges as aging infrastructures require significant investments in rehabilitation, replacement, and expansion. Reducing environmental impacts as WDSs develop is essential for utility managers and policy makers. This study quantifies the existing greenhouse gas (GHG) footprint of common WDS elements using life-cycle assessment (LCA) while identifying the greatest opportunities for emission reduction. This study addresses oversights of the related literature, which fails to capture several WDS elements and to provide detailed life-cycle inventories. The life-cycle inventory results for a US case study utility reveal that 81% of GHGs are from pumping energy, and a large portion of these emissions result from distribution leaks, which account for 270 billion liters of water losses daily in the United States. Pipe replacement scheduling is analyzed from an environmental perspective: by incorporating leak impacts, a tool reveals that the optimal replacement interval is no more than 20 years, in contrast to the US average of 200 years. Carbon abatement costs (CACs) are calculated for different leak reduction scenarios for the case utility, ranging from -130 to 35 per tonne of CO2(eq). Including life-cycle modeling in evaluating pipe materials identified polyvinyl chloride (PVC) and cement-lined ductile iron (DICL) as the Pareto-efficient options; however, utilizing PVC presents human health risks. The model developed for the case utility is applied to California and Texas to determine the CACs of reducing leaks to 5% of distributed water. For California, annual GHG savings from reducing leaks alone (3.4 million tons of CO2(eq)) are found to exceed the California Air Resources Board's estimate for energy efficiency improvements in the state's water infrastructure.
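
    Carbon abatement cost is the annualized net cost of a measure divided by the annual emissions it avoids; a negative value means the measure pays for itself. A worked sketch with invented numbers:

        # Carbon abatement cost sketch (all values invented for illustration).
        def carbon_abatement_cost(annualized_cost, annual_savings, tonnes_co2e_avoided):
            return (annualized_cost - annual_savings) / tonnes_co2e_avoided

        # A leak-reduction programme that pays for itself through avoided pumping energy.
        print(carbon_abatement_cost(annualized_cost=1.2e6, annual_savings=1.5e6,
                                    tonnes_co2e_avoided=4_000))  # negative cost per tonne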

  11. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utility by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failures and on-orbit component failures) and the epistemic (the unknown trend of the end-user market price) types. Firstly, lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and their uncertainties are modeled. The dynamic programming method for making optimal decisions in the face of uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
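
    A much-simplified sketch of the mixed-uncertainty treatment: aleatory events (random servicing failures) are sampled by Monte Carlo, while the epistemic price trend is known only as an interval, so the output is a utility envelope rather than a point estimate. Evaluating the interval only at its endpoints is a simplification of the evidence-theory machinery, and all quantities are invented:

        # Mixed aleatory/epistemic sketch: MC over failures, interval over price.
        import random

        def simulate_utility(price, p_fail, n=20_000):
            total = 0.0
            for _ in range(n):
                servicing_ok = random.random() > p_fail       # aleatory: operation failure
                total += price * 10 if servicing_ok else -50  # revenue vs. loss, illustrative
            return total / n

        random.seed(7)
        # Epistemic interval on price: evaluate the envelope at its endpoints.
        envelope = [simulate_utility(price, p_fail=0.05) for price in (8.0, 14.0)]
        print(f"expected utility bounds: [{min(envelope):.1f}, {max(envelope):.1f}]")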

  12. [A Medical Devices Management Information System Supporting Full Life-Cycle Process Management].

    PubMed

    Tang, Guoping; Hu, Liang

    2015-07-01

    Medical equipment is an essential supply for carrying out medical work. How to ensure the safety and reliability of medical equipment in diagnosis while reducing procurement and maintenance costs is a topic of broad concern. In this paper, product lifecycle management (PLM) and enterprise resource planning (ERP) are combined to establish a lifecycle management information system. Through the integration and analysis of data from the various life-cycle stages, the system can help ensure the safe and reliable operation of medical equipment and provide convincing data for meticulous management.

  13. Product-related research: how research can contribute to successful life-cycle management.

    PubMed

    Sandner, Peter; Ziegelbauer, Karl

    2008-05-01

    Declining productivity with decreasing new molecular entity output combined with increased R&D spending is one of the key challenges for the entire pharmaceutical industry. In order to offset decreasing new molecular entity output, life-cycle management activities for established drugs become more and more important to maintain or even expand clinical indication and market opportunities. Life-cycle management covers a whole range of activities from strategic pricing to a next generation product launch. In this communication, we review how research organizations can contribute to successful life-cycle management strategies using phosphodiesterase 5 inhibitors as an example.

  14. Software solutions manage the definition, operation, maintenance and configuration control of the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobson, D; Churby, A; Krieger, E

    2011-07-25

    The National Ignition Facility (NIF) is the world's largest laser, composed of millions of individual parts brought together to form one massive assembly. Maintaining control of the physical definition, status and configuration of this structure is a monumental undertaking, yet critical to the validity of the shot experiment data and the safe operation of the facility. The NIF business application suite of software provides the means to effectively manage the definition, build, operation, maintenance and configuration control of all components of the National Ignition Facility. State-of-the-art computer-aided design software applications are used to generate a virtual model and assemblies. Engineering bills of material are controlled through the Enterprise Configuration Management System. This data structure is passed to the Enterprise Resource Planning system to create a manufacturing bill of material. Specific parts are serialized and then tracked along their entire lifecycle, providing visibility into the location and status of optical, target and diagnostic components that are key to assessing pre-shot machine readiness. Nearly forty thousand items requiring preventive, reactive and calibration maintenance are tracked through the System Maintenance & Reliability Tracking application to ensure proper operation. Radiological tracking applications ensure proper stewardship of radiological and hazardous materials and help provide a safe working environment for NIF personnel.

  15. Energy, Environmental, and Economic Analyses of Design Concepts for the Co-Production of Fuels and Chemicals with Electricity via Co-Gasification of Coal and Biomass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eric Larson; Robert Williams; Thomas Kreutz

    2012-03-11

    The overall objective of this project was to quantify the energy, environmental, and economic performance of industrial facilities that would coproduce electricity and transportation fuels or chemicals from a mixture of coal and biomass via co-gasification in a single pressurized, oxygen-blown, entrained-flow gasifier, with capture and storage of CO2 (CCS). The work sought to identify plant designs with promising (Nth plant) economics, superior environmental footprints, and the potential to be deployed at scale as a means for simultaneously achieving enhanced energy security and deep reductions in U.S. GHG emissions in the coming decades. Designs included systems using primarily already-commercialized component technologies, which may have the potential for near-term deployment at scale, as well as systems incorporating some advanced technologies at various stages of R&D. All of the coproduction designs share the common attributes of producing some electricity and capturing CO2 for storage. For each of the co-product pairs, detailed process mass and energy simulations (using Aspen Plus software) were developed for a set of alternative process configurations, on the basis of which lifecycle greenhouse gas emissions, Nth-plant economic performance, and other characteristics were evaluated for each configuration. In developing each set of process configurations, focused attention was given to understanding the influence of biomass input fraction and electricity output fraction. Self-consistent evaluations were also carried out for gasification-based reference systems producing only electricity from coal, including integrated gasification combined cycle (IGCC) and integrated gasification solid-oxide fuel cell (IGFC) systems. The reason biomass is considered as a co-feed with coal in cases when gasoline or olefins are co-produced with electricity is to help reduce lifecycle greenhouse gas (GHG) emissions for these systems. Storing biomass-derived CO2 underground represents negative CO2 emissions if the biomass is grown sustainably (i.e., if one ton of new biomass growth replaces each ton consumed), and this offsets positive CO2 emissions associated with the coal used in these systems. Different coal:biomass input ratios will produce different net lifecycle GHG emissions, which is why attention in our analysis was given to the impact of the biomass input fraction. In the case of systems that produce only products with no carbon content, namely electricity, ammonia and hydrogen, only coal was considered as a feedstock, because it is possible in theory to essentially fully decarbonize such products by capturing all of the coal-derived CO2 during the production process.
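
    The biomass co-feed logic reduces to simple bookkeeping: stored biogenic CO2 counts as negative emissions that offset the coal-derived share. A back-of-envelope sketch with invented emission factors, not the study's Aspen Plus results:

        # Net lifecycle GHG vs. biomass input fraction (illustrative factors only).
        def net_lifecycle_ghg(biomass_fraction, coal_factor=90.0, biomass_credit=-70.0):
            # g CO2e per MJ of product; capture assumed for both feedstocks.
            return (1 - biomass_fraction) * coal_factor + biomass_fraction * biomass_credit

        for frac in (0.0, 0.25, 0.5):
            print(f"biomass fraction {frac:.0%}: {net_lifecycle_ghg(frac):.0f} g CO2e/MJ")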

  16. Collection to Archival: A Data Management Strategy for the Ocean Acidification Community

    NASA Astrophysics Data System (ADS)

    Burger, E. F.; Smith, K. M.; Parsons, A. R.; Wanninkhof, R. H.; O'Brien, K.; Barbero, L.; Schweitzer, R.; Manke, A.

    2014-12-01

    Recently new data collection platforms, many of them autonomous mobile platforms, have added immensely to the data volume the Ocean Acidification community is dealing with. This is no exception with NOAA's Pacific Marine Environmental Laboratory (PMEL) Ocean Acidification (OA) effort. Collaboration between the PMEL Carbon group and the PMEL Science Data Integration group to manage local data has spawned the development of a data management strategy that covers the data lifecycle from collection to analysis to quality control to archival. The proposed software and workflow will leverage the successful data management framework pioneered by the Surface Ocean CO2 Atlas (SOCAT) project, but customized for Ocean Acidification requirements. This presentation will give a brief overview of the data management framework that will be implemented for Ocean Acidification data that are collected by PMEL scientists. We will also be discussing our plans to leverage this system to build an east coast ocean acidification management system at NOAA's Atlantic Oceanographic and Meteorological Laboratory (AOML), as well as a national OA management system at NOAA's National Oceanographic Data Center (NODC).

  17. Uncertainty estimates of purity measurements based on current information: toward a "live validation" of purity methods.

    PubMed

    Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech

    2012-12-01

    The aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable for dynamically assessing method performance characteristics based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results, utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitate the introduction of more advanced analytical technologies during the method lifecycle.

  18. A Review of the Application of Lifecycle Analysis to Renewable Energy Systems

    ERIC Educational Resources Information Center

    Lund, Chris; Biswas, Wahidul

    2008-01-01

    The lifecycle concept is a "cradle to grave" approach to thinking about products, processes, and services, recognizing that all stages have environmental and economic impacts. Any rigorous and meaningful comparison of energy supply options must be done using a lifecycle analysis approach. It has been applied to an increasing number of conventional…

  19. Solid-state fermentation and composting as alternatives to treat hair waste: A life-cycle assessment comparative approach.

    PubMed

    Catalán, Eva; Komilis, Dimitrios; Sánchez, Antoni

    2017-07-01

    One of the wastes associated with leather production in tannery industries is the hair residue generated during the dehairing process. Hair wastes are mainly dumped or managed through composting, but recent studies propose treating hair wastes through solid-state fermentation (SSF) to obtain proteases and compost. These enzymes are suitable for use in an enzymatic dehairing process, as an alternative to the current chemical dehairing process. In the present work, two different scenarios for the valorization of hair waste are proposed and assessed by means of life-cycle assessment: composting and SSF for protease production. Detailed data on hair waste composting and on SSF protease production are gathered from previous studies performed by our research group and from a literature survey. Background inventory data are mainly based on Ecoinvent version 3 from the SimaPro® 8 software. The main aim of this study was to identify which process results in the higher environmental impact. The SSF process was found to have lower environmental impacts than composting, because using the enzymes in the dehairing process avoids the chemicals traditionally employed there. This makes it possible to reframe an industrial process from the classical waste-management approach into a novel alternative based on the circular economy.

  20. Development and beyond: Strategy for long-term maintenance of an online laser diffraction particle size method in a spray drying manufacturing process.

    PubMed

    Medendorp, Joseph; Bric, John; Connelly, Greg; Tolton, Kelly; Warman, Martin

    2015-08-10

    The purpose of this manuscript is to present the intended use and long-term maintenance strategy of an online laser diffraction particle size method used for process control in a spray drying process. A Malvern Insitec was used for online particle size measurements and a Malvern Mastersizer was used for offline particle size measurements. The two methods were developed in parallel with the Mastersizer serving as the reference method. Despite extensive method development across a range of particle sizes, the two instruments demonstrated different sensitivities to material and process changes over the product lifecycle. This paper will describe the procedure used to ensure consistent alignment of the two methods, thus allowing for continued use of online real-time laser diffraction as a surrogate for the offline system over the product lifecycle. Copyright © 2015 Elsevier B.V. All rights reserved.
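
    One plausible way to keep an online method aligned to its offline reference is a periodic linear correction fitted from paired measurements; the sketch below is illustrative only, with invented data, and is not the manuscript's actual alignment procedure:

        # Fit a linear correction from paired online/offline particle-size readings.
        import numpy as np

        offline_d50 = np.array([21.0, 34.5, 48.2, 63.0])   # reference-style values, um (invented)
        online_d50  = np.array([19.4, 32.1, 45.0, 59.5])   # online-style values, um (invented)

        slope, intercept = np.polyfit(online_d50, offline_d50, 1)
        corrected = slope * online_d50 + intercept
        print(f"correction: offline ~ {slope:.3f} * online + {intercept:.2f}")
        print("max residual (um):", np.max(np.abs(corrected - offline_d50)).round(2))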

  1. A Comparative Analysis of Life-Cycle Assessment Tools for ...

    EPA Pesticide Factsheets

    We identified and evaluated five life-cycle assessment tools that community decision makers can use to assess the environmental and economic impacts of end-of-life (EOL) materials management options. The tools evaluated in this report are the waste reduction model (WARM), municipal solid waste-decision support tool (MSW-DST), solid waste optimization life-cycle framework (SWOLF), environmental assessment system for environmental technologies (EASETECH), and waste and resources assessment for the environment (WRATE). WARM, MSW-DST, and SWOLF were developed for US-specific materials management strategies, while WRATE and EASETECH were developed for European-specific conditions. All of the tools (with the exception of WARM) allow specification of a wide variety of parameters (e.g., materials composition and energy mix) to a varying degree, thus allowing users to model specific EOL materials management methods even outside the geographical domain they are originally intended for. The flexibility to accept user-specified input for a large number of parameters increases the level of complexity and the skill set needed for using these tools. The tools were evaluated and compared based on a series of criteria, including general tool features, the scope of the analysis (e.g., materials and processes included), and the impact categories analyzed (e.g., climate change, acidification). A series of scenarios representing materials management problems currently relevant to c

  2. Updated estimation of energy efficiencies of U.S. petroleum refineries.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palou-Rivera, I.; Wang, M. Q.

    2010-12-08

    Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires the energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determining the energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of the energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels, including gasoline and diesel. Energy use in petroleum refineries is a key component of the well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product-specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.
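
    A toy version of the efficiency and allocation arithmetic: overall efficiency is product energy out over total energy in, and process energy is shared among products in proportion to energy content. Energy-content allocation is one common convention, and the values below are invented, not GREET's:

        # Illustrative refinery efficiency and product-level energy allocation.
        products_mj = {"gasoline": 46.0, "diesel": 30.0, "other": 18.0}   # invented basis
        process_energy_mj = 8.0

        energy_out = sum(products_mj.values())
        overall_efficiency = energy_out / (energy_out + process_energy_mj)
        print(f"overall refinery efficiency: {overall_efficiency:.1%}")

        for name, mj in products_mj.items():
            allocated = process_energy_mj * mj / energy_out
            print(f"{name}: allocated process energy {allocated:.2f} MJ")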

  3. Water conservation implications for decarbonizing non-electric energy supply: A hybrid life-cycle analysis.

    PubMed

    Liu, Shiyuan; Wang, Can; Shi, Lei; Cai, Wenjia; Zhang, Lixiao

    2018-08-01

    Low-carbon transition in the non-electric energy sector, which includes transport and heating energy, is necessary for achieving the 2 °C target. Meanwhile, as non-electric energy accounts for over 60% of total water consumption in the energy supply sector, it is vital to understand future water trends in the context of decarbonization. However, few studies have focused on life-cycle water impacts for non-electric energy; besides, applying conventional LCA methodology to assess non-electric energy has limitations. In this paper, a Multi-Regional Hybrid Life-Cycle Assessment (MRHLCA) model is built to assess total CO2 emissions and water consumption of 6 non-electric energy technologies - transport energy from biofuel and gasoline, heat supply from natural gas, biogas, coal, and residual biomass - within 7 major emitting economies. We find that a shift to natural gas and residual biomass heating can help economies reduce CO2 by 14-65% and save more than 21% of water. However, developed and developing economies should take differentiated technical strategies. We then apply scenarios from the IMAGE model to demonstrate that if economies take cost-effective 2 °C pathways, the water conservation synergy for the whole energy supply sector, including electricity, can also be achieved. Copyright © 2018 Elsevier Ltd. All rights reserved.
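
    A hybrid LCA of this kind couples process data with an environmentally extended input-output core. The toy computation below shows only that core: total output x solves x = (I - A)^-1 y, and direct intensity vectors are applied to x to obtain cradle-to-gate totals. The matrices are invented and stand in for no particular economy or for the MRHLCA model itself:

    ```python
    import numpy as np

    # Inter-sector requirements matrix A for a toy 2-sector economy:
    # A[i, j] = input needed from sector i per unit output of sector j.
    A = np.array([[0.1, 0.2],
                  [0.3, 0.1]])
    f_co2 = np.array([0.5, 1.2])    # direct CO2 per unit output of each sector
    f_h2o = np.array([2.0, 0.4])    # direct water use per unit output

    y = np.array([1.0, 0.0])        # final demand: 1 unit of sector-1 output
    x = np.linalg.solve(np.eye(2) - A, y)   # total (direct + indirect) output

    print("total CO2:  ", f_co2 @ x)
    print("total water:", f_h2o @ x)
    ```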

  4. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  5. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.
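
    In published descriptions of CSES, the stakeholder-loss estimate is typically expressed as a mean failure cost: a stakes matrix chained with dependency and impact matrices and a threat-probability vector. The sketch below reproduces that matrix chain with invented numbers; the shapes and semantics are an assumption drawn from the CSES literature, not from this abstract:

    ```python
    import numpy as np

    # ST: stakeholders x requirements -- loss ($) to each stakeholder if a requirement fails
    ST = np.array([[100.0,  20.0],
                   [ 10.0, 200.0]])
    # DP: requirements x components -- P(requirement fails | component fails)
    DP = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
    # IM: components x threats -- P(component fails | threat materializes)
    IM = np.array([[0.5, 0.0],
                   [0.1, 0.6]])
    # PT: P(threat materializes) over the period of interest
    PT = np.array([0.01, 0.05])

    MFC = ST @ DP @ IM @ PT   # expected ("mean failure") cost per stakeholder
    print(MFC)
    ```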

  6. Vocational Education for Migrant Youth. Information Series No. 238.

    ERIC Educational Resources Information Center

    Picou, J. Steven

    This paper is intended to assist vocational educators in meeting the career development needs and aspirations of migrant youth. It examines the unique characteristics of migrant youth and develops a general life-cycle model of their vocational development. This comparative analysis provides the vocational educator with a basis for identifying…

  7. IN LCA INTERNATIONAL CONFERENCE & EXHIBITION ON LIFE-CYCLE ASSESSMENT: TOOLS FOR SUSTAINABILITY

    EPA Science Inventory

    LCA is being developed and applied internationally by corporations, governments, and environmental groups to incorporate environmental concerns into the decision-making process. It is being widely adopted as a means to evaluate commercial systems and develop sustainable solution...

  8. Beyond Survival: Educational Development and the Maturing of the POD Network

    ERIC Educational Resources Information Center

    Ortquist-Ahrens, Leslie

    2016-01-01

    Scholarship about the growth of educational development has charted major shifts in developers' focuses and roles through time and, especially in recent years, has explored the professionalization of the field around the globe. This essay uses a lifecycle analogy to consider the development of one organization, the POD Network (The Professional…

  9. Life-cycle inventory of hardwood lumber manufacturing in the Northeastern and North Central United States.

    Treesearch

    Richard Bergman; Scott A. Bowe

    2007-01-01

    The goal of this study was to find the environmental impact of hardwood lumber production through a gate-to-gate Life-Cycle Inventory (LCI) on hardwood sawmills in the northeast and northcentral (NE/NC) United States. Primary mill data was collected per CORRIM Research Guidelines (CORRIM 2001). Lifecycle analysis is beyond the scope of the study.

  10. Life-Cycle Thinking in Inquiry-Based Sustainability Education--Effects on Students' Attitudes towards Chemistry and Environmental Literacy

    ERIC Educational Resources Information Center

    Juntunen, Marianne; Aksela, Maija

    2013-01-01

    The aim of the present study is to improve the quality of students' environmental literacy and sustainability education in chemistry teaching by combining the socio-scientific issue of life-cycle thinking with inquiry-based learning approaches. This case study presents results from an inquiry-based life-cycle thinking project: an interdisciplinary…

  11. A program-level management system for the life cycle environmental and economic assessment of complex building projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Chan-Joong; Kim, Jimin; Hong, Taehoon

    Climate change has become one of the most significant environmental issues, and about 40% of the associated emissions come from the building sector. In particular, complex building projects with various functions have increased, and these should be managed from a program-level perspective. Therefore, this study aimed to develop a program-level management system for the life-cycle environmental and economic assessment of complex building projects. The developed system consists of three parts: (i) input part: database server and input data; (ii) analysis part: life cycle assessment and life cycle cost; and (iii) result part: microscopic analysis and macroscopic analysis. To analyze the applicability of the developed system, this study selected ‘U’ University, a complex building project consisting of a research facility and a residential facility. Through value engineering with experts, a total of 137 design alternatives were established. Based on these alternatives, the macroscopic analysis results were as follows: (i) at the program level, the life-cycle environmental and economic costs of ‘U’ University were reduced by 6.22% and 2.11%, respectively; (ii) at the project level, the life-cycle environmental and economic costs of the research facility were reduced by 6.01% and 1.87%, respectively, and those of the residential facility by 12.01% and 3.83%, respectively; and (iii) for the mechanical work at the work-type level, the initial cost increased by 2.9%, but the operation and maintenance cost was reduced by 20.0%. As a result, the developed system can allow facility managers to establish operation and maintenance strategies for the environmental and economic aspects from a program-level perspective. Highlights: • A program-level management system for complex building projects was developed. • Life-cycle environmental and economic assessment can be conducted using the system. • The design alternatives can be analyzed from the microscopic perspective. • The system can be used to establish the optimal O&M strategy at the program-level. • It can be applied to any other country or sector in the global environment.

  12. Development of hybrid lifecycle cost estimating tool (HLCET) for manufacturing influenced design tradeoff

    NASA Astrophysics Data System (ADS)

    Sirirojvisuth, Apinut

    In complex aerospace system design, making an effective design decision requires multidisciplinary knowledge from both product and process perspectives. Integrating manufacturing considerations into the design process is most valuable during the early design stages, since designers have more freedom to integrate new ideas when changes are relatively inexpensive in terms of time and effort. Several metrics related to manufacturability are cost, time, and manufacturing readiness level (MRL). Yet there is a lack of a structured methodology that quantifies how changes in design decisions impact these metrics. As a result, a new set of integrated cost analysis tools is proposed in this study to quantify the impacts. Equally important is the capability to integrate this new cost tool into existing design methodologies without sacrificing the agility and flexibility required during the early design phases. To demonstrate the applicability of this concept, a ModelCenter environment is used to develop a software architecture that represents the Integrated Product and Process Development (IPPD) methodology used in several aerospace system designs. The environment seamlessly integrates product and process analysis tools and makes an effective transition from one design phase to the next while retaining knowledge gained a priori. Then, an advanced cost estimating tool called the Hybrid Lifecycle Cost Estimating Tool (HLCET), a hybrid combination of weight-, process-, and activity-based estimating techniques, is integrated with the design framework. A new weight-based lifecycle cost model is created based on Tailored Cost Model (TCM) equations [3]. This lifecycle cost tool estimates the program cost based on vehicle component weights and programmatic assumptions. Additional high-fidelity cost tools like process-based and activity-based cost analysis methods can be used to modify the baseline TCM result as more knowledge is accumulated over design iterations. Therefore, with this concept, the additional manufacturing knowledge can be used to identify a more accurate lifecycle cost and facilitate higher-fidelity tradeoffs during conceptual and preliminary design. The Advanced Composite Cost Estimating Model (ACCEM) is employed as a process-based cost component to replace the original TCM result for composite part production cost. The reason for the replacement is that TCM estimates production costs from part weights, reflecting subtractive manufacturing of metallic origin such as casting, forging, and machining processes. A complexity factor can sometimes be adjusted to reflect different types of metal and machine settings. The TCM assumption, however, gives erroneous results when applied to additive processes like those of composite manufacturing. Another innovative aspect of this research is the introduction of a work measurement technique called the Maynard Operation Sequence Technique (MOST) to be used, similarly to the Activity-Based Costing (ABC) approach, to estimate the manufacturing time of a part by breaking down the operations that occur during its production. ABC allows a realistic determination of the cost incurred in each activity, as opposed to a traditional method of time estimation by analogy or using response surface equations from historical process data. The MOST concept provides a tailored study of an individual process typically required for a new, innovative design. Nevertheless, the MOST idea has some challenges, one of which is its requirement to build a new process from the ground up.
The process development requires a Subject Matter Expert (SME) in the manufacturing method of the particular design. The SME must also have a comprehensive understanding of the MOST system so that the correct parameters are chosen. In practice, these knowledge requirements may demand people from outside the design discipline and prior training in MOST. To relieve this constraint, this study includes an entirely new sub-system architecture that comprises 1) a knowledge-based system to provide the required knowledge during process selection; and 2) a new user interface to guide parameter selection when building the process using MOST. Also included in this study is a demonstration of how HLCET and its constituents can be integrated with Georgia Tech's Integrated Product and Process Development (IPPD) methodology. The applicability of this work is shown through a complex aerospace design example to gain insights into how manufacturing knowledge helps make better design decisions during the early stages. The setup process is explained with an example of its utility demonstrated in a hypothetical fighter aircraft wing redesign. An evaluation of the system's effectiveness against existing methodologies is presented to conclude the thesis.
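
    To make the weight-based idea concrete: parametric lifecycle cost models of this family estimate cost from component weight through cost estimating relationships (CERs), typically of the power-law form cost = a·W^b, which higher-fidelity process-based results then adjust. The sketch below is a generic illustration with invented coefficients, not TCM's or ACCEM's equations:

    ```python
    # Generic power-law cost estimating relationship: cost = a * W**b.
    # Coefficients are invented; real CERs are calibrated to historical programs.
    def cer_cost(weight_kg: float, a: float = 1200.0, b: float = 0.85) -> float:
        """Estimated production cost ($) from component weight."""
        return a * weight_kg ** b

    baseline = cer_cost(350.0)       # weight-based estimate for a metallic part
    # A process-based tool can then override this figure for, e.g., a composite
    # part whose cost the weight-based CER misjudges.
    process_based = 1.3 * baseline   # placeholder for an ACCEM-style result
    print(round(baseline), round(process_based))
    ```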

  13. A product lifecycle management framework to support the exchange of prototyping and testing information

    NASA Astrophysics Data System (ADS)

    Toche Fumchum, Luc Boris

    2011-12-01

    The modern perspective on the product life cycle and the rapid evolution of Information and Communication Technologies in general have opened a new era in product representation and product information sharing between participants, both inside and outside the enterprise and throughout the product life. In particular, the Product Development Process relies on cross-functional activities involving different domains of expertise that each have their own dedicated tools. This has generated new challenges in terms of collaboration and dissemination of information at large between companies or even within the same organization. Within this context, the work reported herein focuses on a specific stakeholder within product development activities - the prototyping and testing department. Its business is typically related to the planning and building of prototypes in order to perform specific tests on the future product or one of its sub-assemblies. The research project aims at investigating an appropriate framework that leverages configured engineering product information, based on complementary information structures, to share and exchange prototyping and testing information in a Product Lifecycle Management (PLM) perspective. As a first step, a case study based on the retrofit of an aircraft engine is deployed to implement a scenario demonstrating the functionalities to be available within the intended framework. For this purpose, complementary and configurable structures are simulated within the project's PLM system. The second step considers software interoperability issues, which affect not only Design-Testing interactions but also many other interfaces, whether within the company (owing to its silo arrangement) or across consortiums with partners, where entire PLM platforms can simply be incompatible. A study based on an open source initiative and relying on an improved model of communication is described to show how two natively disparate PLM tools can dialogue to merge information in a central environment. The principles applied in both steps are then transposed to introduce the Open Exchange Nest as a generic PLM-driven and web-based concept to support collaborative work in the aforementioned context.

  14. Problem Reporting System

    NASA Technical Reports Server (NTRS)

    Potter, Don; Serian, Charles; Sweet, Robert; Sapir, Babak; Gamez, Enrique; Mays, David

    2008-01-01

    The Problem Reporting System (PRS) is a Web application, running on two Web servers (load-balanced) and two database servers (RAID-5), which establishes a system for submission, editing, and sharing of reports to manage risk assessment of anomalies identified in NASA's flight projects. PRS consolidates diverse anomaly-reporting systems, maintains a rich database set, and incorporates a robust engine, which allows tracking of any hardware, software, or paper process by configuring an appropriate life cycle. Global and specific project administration and setup tools allow lifecycle tailoring, along with customizable controls for users, e-mail, notifications, and more. PRS is accessible via the World Wide Web by authorized users at almost any location. Upon successful log-in, the user receives a customizable window, which displays time-critical 'To Do' items (anomalies requiring the user's input before the system moves the anomaly to the next phase of the lifecycle), anomalies originated by the user, anomalies the user has addressed, and custom queries that can be saved for future use. Access controls exist depending on a user's role as system administrator, project administrator, user, or developer, and then further by association with user, project, subsystem, company, or item, with provisions for business-to-business exclusions and limitations on access according to the covert or overt nature of a given project, all with multiple layers of filtration as needed. Reporting of metrics is built in. There is a provision for proxy access, in which the user may allow one or more other users to view screens and perform actions as though they were the user during any part of a tracking life cycle (especially useful for keeping things moving during tight build schedules and vacations). The system also provides users the ability to have an anomaly link to or notify other systems, including QA Inspection Reports, Safety, GIDEP (Government-Industry Data Exchange Program) Alert, Corrective Actions, and Lessons Learned. The PRS tracking engine was designed as a very extensible and scalable system, able to support additional applications, with future development possibilities already discussed, including Incident Surprise Anomalies (for anomalies occurring during operations phases of NASA flight projects), GIDEP and NASA Alerts, and others.
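
    The configurable life-cycle engine described above can be pictured as a small data-driven state machine: the set of phases and legal transitions is configuration, not code, so each project can tailor its own life cycle. The sketch below is a hypothetical illustration of that pattern; phase names and rules are invented, not PRS's actual schema:

    ```python
    # The lifecycle is configuration (data), so each project can tailor its own.
    LIFECYCLE = {
        "submitted":    ["under_review"],
        "under_review": ["assigned", "rejected"],
        "assigned":     ["resolved"],
        "resolved":     ["closed", "under_review"],  # reopen path
        "rejected":     [],
        "closed":       [],
    }

    class Anomaly:
        def __init__(self, title: str):
            self.title, self.state = title, "submitted"

        def advance(self, new_state: str) -> None:
            if new_state not in LIFECYCLE[self.state]:
                raise ValueError(f"illegal transition {self.state} -> {new_state}")
            self.state = new_state

    a = Anomaly("valve telemetry dropout")
    for step in ("under_review", "assigned", "resolved", "closed"):
        a.advance(step)
    print(a.state)   # closed
    ```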

  15. A 20 Year Lifecycle Study for Launch Facilities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Kolody, Mark R.; Li, Wenyan; Hintze, Paul E.; Calle, Luz-Marina

    2009-01-01

    The lifecycle cost analysis was based on corrosion costs for the Kennedy Space Center's Launch Complexes and Mobile Launch Platforms. The first step in the study involved identifying the relevant assets that would be included. Second, corrosion control cost data for the selected assets were identified and collected. Corrosion control costs were separated into four categories. The sources of cost included NASA labor for civil servant personnel directly involved in overseeing and managing corrosion control of the assets, United Space Alliance (USA) contractual requirements for performing planned corrosion control tasks, USA performance of unplanned corrosion control tasks, and testing and development. Corrosion control operations performed under USA contractual requirements were the most significant contributors to the total cost of corrosion. These operations include inspection of the pad, routine maintenance of the pad, medium- and large-scale blasting and repainting activities, and the repair and replacement of structural metal elements. Cost data were collected for the years 2001 through 2007. These costs were then extrapolated to future years to calculate the 20-year lifecycle costs.

  16. Life-cycle assessment of redwood decking in the United States with a comparison to three other decking materials

    Treesearch

    R. Bergman; H. Sup-Han; E. Oneil; I. Eastin

    2013-01-01

    The goal of the study was to conduct a life-cycle inventory (LCI) of California redwood (Sequoia sempervirens) decking that would quantify the critical environmental impacts of decking from cradle to grave. Using that LCI data, a life-cycle assessment (LCA) was produced for redwood decking. The results were used to compare the environmental footprint...

  17. Illustrative national scale scenarios of environmental and human health impacts of Carbon Capture and Storage.

    PubMed

    Tzanidakis, Konstantinos; Oxley, Tim; Cockerill, Tim; ApSimon, Helen

    2013-06-01

    Integrated Assessment, and the development of strategies to reduce the impacts of air pollution, has tended to focus only upon the direct emissions from different sources, with the indirect emissions associated with the full life-cycle of a technology often overlooked. Carbon Capture and Storage (CCS) reflects a number of new technologies designed to reduce CO2 emissions, but which may have much broader environmental implications than greenhouse gas emissions. This paper considers a wider range of pollutants from a full life-cycle perspective, illustrating a methodology for assessing environmental impacts using source-apportioned, effects-based impact factors calculated by the national-scale UK Integrated Assessment Model (UKIAM). Contrasting illustrative scenarios for the deployment of CCS towards 2050 are presented, comparing the life-cycle effects of air pollutant emissions on human health and ecosystems under business-as-usual, deployment of CCS, and widespread uptake of IGCC for power generation. Together with estimation of the transboundary impacts, we discuss the benefits of an effects-based approach to such assessments relative to emissions-based techniques. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. A Life-Cycle Assessment of Biofuels: Tracing Energy and Carbon through a Fuel-Production System

    ERIC Educational Resources Information Center

    Krauskopf, Sara

    2010-01-01

    A life-cycle assessment (LCA) is a tool used by engineers to make measurements of net energy, greenhouse gas production, water consumption, and other items of concern. This article describes an activity designed to walk students through the qualitative part of an LCA. It asks them to consider the life-cycle costs of ethanol production, in terms of…

  19. Assessing the environmental impacts of freshwater consumption in LCA.

    PubMed

    Pfister, Stephan; Koehler, Annette; Hellweg, Stefanie

    2009-06-01

    A method for assessing the environmental impacts of freshwater consumption was developed. This method considers damages to three areas of protection: human health, ecosystem quality, and resources. The method can be used within most existing life-cycle impact assessment (LCIA) methods. The relative importance of water consumption was analyzed by integrating the method into the Eco-indicator-99 LCIA method. The relative impact of water consumption in LCIA was analyzed with a case study on worldwide cotton production. The importance of regionalized characterization factors for water use was also examined in the case study. In arid regions, water consumption may dominate the aggregated life-cycle impacts of cotton-textile production. Therefore, the consideration of water consumption is crucial in life-cycle assessment (LCA) studies that include water-intensive products, such as agricultural goods. A regionalized assessment is necessary, since the impacts of water use vary greatly as a function of location. The presented method is useful for environmental decision-support in the production of water-intensive products as well as for environmentally responsible value-chain management.
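
    The case for regionalized characterization factors can be shown in two lines: the same volume of water consumption is multiplied by a location-dependent factor before aggregation, so identical inventories yield very different impacts. The factors below are invented placeholders, not the paper's Eco-indicator-99 extension:

    ```python
    # Identical consumption volumes, location-dependent characterization factors.
    # CF values are invented placeholders, not published factors.
    consumption_m3 = {"arid_region": 120.0, "humid_region": 120.0}
    cf = {"arid_region": 0.95, "humid_region": 0.05}  # stress-weighted, dimensionless

    impact = {region: vol * cf[region] for region, vol in consumption_m3.items()}
    print(impact)   # same volume, very different weighted impact
    ```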

  20. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noonan, Nicholas James

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  1. Building Information Modeling (BIM) Primer. Report 1: Facility Life-Cycle Process and Technology Innovation

    DTIC Science & Technology

    2012-08-01

    Building Information Modeling (BIM) Primer, Report 1: Facility Life-cycle Process and Technology Innovation (ERDC/ITL TR-12-2, August 2012; distribution unlimited). The report covers the use of BIM to enhance the quality of projects through the design, construction, and handover phases.

  2. Environmental and economic assessment methods for waste management decision-support: possibilities and limitations.

    PubMed

    Finnveden, Göran; Björklund, Anna; Moberg, Asa; Ekvall, Tomas

    2007-06-01

    A large number of methods and approaches that can be used for supporting waste management decisions at different levels in society have been developed. In this paper an overview of methods is provided and preliminary guidelines for the choice of methods are presented. The methods introduced include: Environmental Impact Assessment, Strategic Environmental Assessment, Life Cycle Assessment, Cost-Benefit Analysis, Cost-effectiveness Analysis, Life-cycle Costing, Risk Assessment, Material Flow Accounting, Substance Flow Analysis, Energy Analysis, Exergy Analysis, Entropy Analysis, Environmental Management Systems, and Environmental Auditing. The characteristics used are the types of impacts included, the objects under study and whether the method is procedural or analytical. The different methods can be described as systems analysis methods. Waste management systems thinking is receiving increasing attention. This is, for example, evidenced by the suggested thematic strategy on waste by the European Commission where life-cycle analysis and life-cycle thinking get prominent positions. Indeed, life-cycle analyses have been shown to provide policy-relevant and consistent results. However, it is also clear that the studies will always be open to criticism since they are simplifications of reality and include uncertainties. This is something all systems analysis methods have in common. Assumptions can be challenged and it may be difficult to generalize from case studies to policies. This suggests that if decisions are going to be made, they are likely to be made on a less than perfect basis.

  3. Cirrus Simulations of CRYSTAL-FACE 23 July 2002 Case

    NASA Technical Reports Server (NTRS)

    Starr, David; Lin, Ruci-Fong; Demoz, Belay; Lare, Andrew

    2004-01-01

    A key objective of the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE) is to understand relationships between the properties of tropical convective cloud systems and the properties and lifecycle of the extended cirrus anvils they produce. We report here on a case study of 23 July 2002 where a sequence of convective storms over central Florida produced an extensive anvil outflow. Our approach is to use a suitably-initialized cloud-system simulation with MM5 to define initial conditions and time-dependent forcing for a simulation of anvil evolution using a two-dimensional fine-resolution (100 m) cirrus cloud model that explicitly accounts for details of cirrus microphysical development (bin or spectra model) and fully interactive radiative processes. The cirrus model follows Lin. Meteorological conditions and observations for the 23 July case are described in this volume. The goals of the present study are to evaluate how well we can simulate a cirrus anvil lifecycle, to evaluate the importance of various physical processes that operate within the anvil, and to evaluate the importance of environmental conditions in regulating anvil lifecycle. CRYSTAL-FACE produced a number of excellent case studies of anvil systems that will allow environmental factors, such as static stability or wind shear in the upper troposphere, to be examined. In the present study, we strive to assess the importance of propagating gravity waves, likely produced by the deep convection itself, and radiative processes, to anvil lifecycle and characteristics.

  4. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues, a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighted more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
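
    The precision-weighted averaging described above has a compact closed form when both the prior and the observations are treated as normal with known data variance: the posterior mean is the variance-weighted blend of prior mean and sample mean. A minimal sketch with invented numbers rather than test-bed data:

    ```python
    # Conjugate normal-normal update: posterior mean is the precision-weighted
    # average of the prior mean and the sample mean. Numbers are illustrative.
    def update(prior_mean, prior_var, sample_mean, data_var, n):
        post_var = 1.0 / (1.0 / prior_var + n / data_var)
        post_mean = post_var * (prior_mean / prior_var + n * sample_mean / data_var)
        return post_mean, post_var

    mean, var = 1000.0, 200.0 ** 2   # prior: fleet-wide mean failure time (hours)
    for sample_mean, n in ((900.0, 1), (850.0, 4), (820.0, 16)):
        mean, var = update(mean, var, sample_mean, 150.0 ** 2, n)
        print(f"posterior mean {mean:7.1f} h, sd {var ** 0.5:6.1f} h")
    # As n grows, the data dominate and the posterior pulls away from the prior.
    ```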

  5. 48 CFR 231.205-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... development of technologies identified as critical under 10 U.S.C. 2522. (6) Increase the development and promotion of efficient and effective applications of dual-use technologies. (7) Provide efficient and... and life-cycle costs of military systems. (3) Strengthen the defense industrial and technology base of...

  6. 48 CFR 231.205-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... development of technologies identified as critical under 10 U.S.C. 2522. (6) Increase the development and promotion of efficient and effective applications of dual-use technologies. (7) Provide efficient and... and life-cycle costs of military systems. (3) Strengthen the defense industrial and technology base of...

  7. 48 CFR 231.205-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... development of technologies identified as critical under 10 U.S.C. 2522. (6) Increase the development and promotion of efficient and effective applications of dual-use technologies. (7) Provide efficient and... and life-cycle costs of military systems. (3) Strengthen the defense industrial and technology base of...

  8. 48 CFR 231.205-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... development of technologies identified as critical under 10 U.S.C. 2522. (6) Increase the development and promotion of efficient and effective applications of dual-use technologies. (7) Provide efficient and... and life-cycle costs of military systems. (3) Strengthen the defense industrial and technology base of...

  9. 48 CFR 231.205-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... development of technologies identified as critical under 10 U.S.C. 2522. (6) Increase the development and promotion of efficient and effective applications of dual-use technologies. (7) Provide efficient and... and life-cycle costs of military systems. (3) Strengthen the defense industrial and technology base of...

  10. Enabling Data-Driven Methodologies Across the Data Lifecycle and Ecosystem

    NASA Astrophysics Data System (ADS)

    Doyle, R. J.; Crichton, D.

    2017-12-01

    NASA has unlocked unprecedented scientific knowledge through exploration of the Earth, our solar system, and the larger universe. NASA is generating enormous amounts of data that are challenging traditional approaches to capturing, managing, analyzing and ultimately gaining scientific understanding from science data. New architectures, capabilities and methodologies are needed to span the entire observing system, from spacecraft to archive, while integrating data-driven discovery and analytic capabilities. NASA data have a definable lifecycle, from remote collection point to validated accessibility in multiple archives. Data challenges must be addressed across this lifecycle, to capture opportunities and avoid decisions that may limit or compromise what is achievable once data arrives at the archive. Data triage may be necessary when the collection capacity of the sensor or instrument overwhelms data transport or storage capacity. By migrating computational and analytic capability to the point of data collection, informed decisions can be made about which data to keep; in some cases, to close observational decision loops onboard, to enable attending to unexpected or transient phenomena. Along a different dimension than the data lifecycle, scientists and other end-users must work across an increasingly complex data ecosystem, where the range of relevant data is rarely owned by a single institution. To operate effectively, scalable data architectures and community-owned information models become essential. NASA's Planetary Data System is having success with this approach. Finally, there is the difficult challenge of reproducibility and trust. While data provenance techniques will be part of the solution, future interactive analytics environments must support an ability to provide a basis for a result: relevant data source and algorithms, uncertainty tracking, etc., to assure scientific integrity and to enable confident decision making. Advances in data science offer opportunities to gain new insights from space missions and their vast data collections. We are working to innovate new architectures, exploit emerging technologies, develop new data-driven methodologies, and transfer them across disciplines, while working across the dual dimensions of the data lifecycle and the data ecosystem.

  11. Final Technical Report on Quantifying Dependability Attributes of Software Based Safety Critical Instrumentation and Control Systems in Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smidts, Carol; Huang, Funqun; Li, Boyuan

    With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software make trust in these systems a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems’ characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes in software-based instrumentation as well as control systems in safety critical applications has proved itself to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered as a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter. Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained. Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute.
The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g., security failure. The extraction was based on the analysis of expert elicitation results obtained in Stage 1. Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter. Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required. Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews. Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. The ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e., the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost-effective selection of the duration of the requirements phase.

  12. Methodology for object-oriented real-time systems analysis and design: Software engineering

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1991-01-01

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly ('seamlessly') from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structurings of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation and the original specification, and perhaps high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects whose operations or methods correspond to processes in the data flow diagrams, and then designing in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced as the basic building block of a series of seamlessly connected models that progress from logical systems-analysis models through physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In software modules, the systems analysis objects are transformed into software objects.
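
    As a hypothetical rendering of the second approach, an analysis object is little more than a named entity plus a state set and transition rules, which can later be carried into design unchanged. Class, state, and event names below are invented for illustration:

    ```python
    # An analysis object: a named entity plus states and transition rules that
    # can be carried unchanged from analysis models into design models.
    class AnalysisObject:
        def __init__(self, name, initial_state, transitions):
            self.name, self.state = name, initial_state
            self.transitions = transitions           # {(state, event): next_state}

        def on_event(self, event):
            self.state = self.transitions.get((self.state, event), self.state)

    valve = AnalysisObject(
        "inlet_valve", "closed",
        {("closed", "open_cmd"): "opening",
         ("opening", "limit_switch"): "open",
         ("open", "close_cmd"): "closed"},
    )
    for event in ("open_cmd", "limit_switch"):
        valve.on_event(event)
    print(valve.state)   # open
    ```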

  13. Policy implications of uncertainty in modeled life-cycle greenhouse gas emissions of biofuels.

    PubMed

    Mullins, Kimberley A; Griffin, W Michael; Matthews, H Scott

    2011-01-01

    Biofuels have received legislative support recently in California's Low-Carbon Fuel Standard and the Federal Energy Independence and Security Act. Both present new fuel types, but neither provides methodological guidelines for dealing with the inherent uncertainty in evaluating their potential life-cycle greenhouse gas emissions. Emissions reductions are based on point estimates only. This work demonstrates the use of Monte Carlo simulation to estimate life-cycle emissions distributions from ethanol and butanol from corn or switchgrass. Life-cycle emissions distributions for each feedstock and fuel pairing modeled span an order of magnitude or more. Using a streamlined life-cycle assessment, corn ethanol emissions range from 50 to 250 g CO2e/MJ, for example, and each feedstock-fuel pathway studied shows some probability of greater emissions than a distribution for gasoline. Potential GHG emissions reductions from displacing fossil fuels with biofuels are difficult to forecast given this high degree of uncertainty in life-cycle emissions. This uncertainty is driven by the importance and uncertainty of indirect land use change emissions. Incorporating uncertainty in the decision making process can illuminate the risks of policy failure (e.g., increased emissions), and a calculated risk of failure due to uncertainty can be used to inform more appropriate reduction targets in future biofuel policies.
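
    The Monte Carlo approach demonstrated here amounts to sampling each life-cycle stage from a distribution, summing, and reading probabilities off the resulting emissions distribution. The sketch below follows that spirit with invented distributions (including a skewed indirect land use change term); none of the parameters are the paper's:

    ```python
    import random

    def corn_ethanol_gco2e_per_mj() -> float:
        farming = random.gauss(30.0, 5.0)          # farming and inputs
        conversion = random.gauss(35.0, 4.0)       # biorefinery energy
        iluc = random.lognormvariate(3.0, 0.8)     # skewed indirect land use change
        return farming + conversion + iluc

    samples = sorted(corn_ethanol_gco2e_per_mj() for _ in range(10_000))
    gasoline = 94.0                                # comparison point, g CO2e/MJ
    p_worse = sum(s > gasoline for s in samples) / len(samples)
    print(f"median {samples[len(samples) // 2]:.0f} g CO2e/MJ; "
          f"P(emissions exceed gasoline) = {p_worse:.1%}")
    ```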

  14. Energy and life-cycle cost analysis of a six-story office building

    NASA Astrophysics Data System (ADS)

    Turiel, I.

    1981-10-01

    An energy analysis computer program, DOE-2, was used to compute annual energy use for a typical office building as originally designed and with several energy conserving design modifications. The largest energy use reductions were obtained with the incorporation of daylighting techniques, the use of double pane windows, night temperature setback, and the reduction of artificial lighting levels. A life-cycle cost model was developed to assess the cost-effectiveness of the design modifications discussed. The model incorporates such features as inclusion of taxes, depreciation, and financing of conservation investments. The energy conserving strategies are ranked according to economic criteria such as net present benefit, discounted payback period, and benefit to cost ratio.
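
    The ranking criteria named above reduce to a pair of small calculations: net present benefit is the discounted sum of cash flows, and discounted payback is the first year in which cumulative discounted savings cover the investment. A sketch with invented cash flows, not the DOE-2 study's figures:

    ```python
    # cashflows[0] is the year-0 investment (negative); later entries are savings.
    def npv(rate: float, cashflows) -> float:
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def discounted_payback(rate: float, cashflows):
        total = 0.0
        for t, cf in enumerate(cashflows):
            total += cf / (1 + rate) ** t
            if total >= 0:
                return t
        return None   # never pays back within the horizon

    flows = [-50_000.0] + [9_000.0] * 15   # e.g., a daylighting retrofit, 15-year life
    print(f"net present benefit: {npv(0.07, flows):,.0f}")
    print(f"discounted payback:  year {discounted_payback(0.07, flows)}")
    ```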

  15. Comprehensive Environmental Informatics System (CEIS) Integrating Crew and Vehicle Environmental Health

    NASA Technical Reports Server (NTRS)

    Nall, Mark E.

    2006-01-01

    Integrated Vehicle Health Management (IVHM) systems have been pursued as highly integrated systems that include smart sensors and diagnostic and prognostic software for assessments of real-time and life-cycle vehicle health information. Inclusive to such a system is the requirement to monitor the environmental health within the vehicle and the occupants of the vehicle. In this regard, an enterprise approach to informatics is used to develop a methodology entitled the Comprehensive Environmental Informatics System (CEIS). The hardware and software technologies integrated into this system will be embedded in the vehicle subsystems, and in maintenance operations, to provide both real-time and life-cycle health information on the environment within the vehicle cabin and on its occupants. This comprehensive information database will enable informed decision making and logistics management. One key element of the CEIS is interoperability for data acquisition and archiving between environment and human-system monitoring. With comprehensive components, the data acquired in this system will feed model-based reasoning systems for subsystem- and system-level managers, and advanced on-board and ground-based mission and maintenance planners, to assess system functionality. Knowledge databases of the vehicle health state will be continuously updated and reported for critical failure modes, and routinely updated and reported for life-cycle condition trending. Sufficient intelligence, including evidence-based engineering practices analogous to evidence-based medicine practices, will be included in the CEIS to enable more rapid recognition of off-nominal operation and quicker corrective actions. This will result from better information (rather than just data) for improved crew/operator situational awareness, which will produce significant vehicle and crew safety improvements, as well as increasing the chance of mission success and improving future mission planning and training. Other benefits include improved reliability, increased safety in operations, and reduced cost of operations. The cost benefits stem from significantly reduced processing and operations manpower and from predictive maintenance for systems and subjects. The improvements in vehicle functionality and cost will result from increased prognostic and diagnostic capability due to the detailed total human exploration system health knowledge from CEIS. A collateral benefit is that there will be closer observation of the vehicle occupants, as wrist-watch-sized devices are worn for continuous health monitoring. Additional database acquisition will stem from activities in countermeasure practices to ensure peak performance capability by occupants of the vehicle. The CEIS will provide data from advanced sensing technologies and informatics modeling which will be useful in problem troubleshooting and in improving NASA's awareness of systems during operation.

  16. Towards more sustainable management of European food waste: Methodological approach and numerical application.

    PubMed

    Manfredi, Simone; Cristobal, Jorge

    2016-09-01

    Trying to respond to the latest policy needs, the work presented in this article aims at developing a life-cycle based framework methodology to quantitatively evaluate the environmental and economic sustainability of European food waste management options. The methodology is structured into six steps aimed at defining boundaries and scope of the evaluation, evaluating environmental and economic impacts and identifying best performing options. The methodology is able to accommodate additional assessment criteria, for example the social dimension of sustainability, thus moving towards a comprehensive sustainability assessment framework. A numerical case study is also developed to provide an example of application of the proposed methodology to an average European context. Different options for food waste treatment are compared, including landfilling, composting, anaerobic digestion and incineration. The environmental dimension is evaluated with the software EASETECH, while the economic assessment is conducted based on different indicators expressing the costs associated with food waste management. Results show that the proposed methodology allows for a straightforward identification of the most sustainable options for food waste, thus can provide factual support to decision/policy making. However, it was also observed that results markedly depend on a number of user-defined assumptions, for example on the choice of the indicators to express the environmental and economic performance. © The Author(s) 2016.

  17. High-performance concrete : applying life-cycle cost analysis and developing specifications.

    DOT National Transportation Integrated Search

    2016-12-01

    Numerous studies and transportation agency experience across the nation have established that high-performance concrete (HPC) technology improves concrete quality and extends the service life of concrete structures at risk of chloride-induced cor...

  18. Partnering With Patients in the Development and Lifecycle of Medicines

    PubMed Central

    Anderson, James; Boutin, Marc; Dewulf, Lode; Geissler, Jan; Johnston, Graeme; Joos, Angelika; Metcalf, Marilyn; Regnante, Jeanne; Sargeant, Ifeanyi; Schneider, Roslyn F.; Todaro, Veronica; Tougas, Gervais

    2015-01-01

    The purpose of medicines is to improve patients' lives. Stakeholders involved in the development and lifecycle management of medicines agree that more effective patient involvement is needed to ensure that patient needs and priorities are identified and met. Despite the increasing number and scope of patient involvement initiatives, there is no accepted master framework for systematic patient involvement in industry-led medicines research and development, regulatory review, or market access decisions. Patient engagement is very productive in some indications, but inconsistent and fragmentary on a broader level. This often results in inefficient drug development, increasing evidence requirements, lack of patient-centered outcomes that address unmet medical needs and facilitate adherence, and consequently, lack of required therapeutic options and high costs to society and involved parties. Improved patient involvement can drive the development of innovative medicines that deliver more relevant and impactful patient outcomes and make medicine development faster, more efficient, and more productive. It can lead to better prioritization of early research; improved resource allocation; improved trial protocol designs that better reflect patient needs; and, by addressing potential barriers to patient participation, enhanced recruitment and retention. It may also improve trial conduct and lead to more focused, economically viable clinical trials. At launch and beyond, systematic patient involvement can also improve the ongoing benefit-risk assessment, ensure that public funds prioritize medicines of value to patients, and further the development of the medicine. Progress toward a universal framework for patient involvement requires a joint, precompetitive, and international approach by all stakeholders, working in true partnership to consolidate outputs from existing initiatives, identify gaps, and develop a comprehensive framework. It is essential that all stakeholders participate to drive adoption and implementation of the framework and to ensure that patients and their needs are embedded at the heart of medicines development and lifecycle management. PMID:26539338

  19. Influence of corn oil recovery on life-cycle greenhouse gas emissions of corn ethanol and corn oil biodiesel

    DOE PAGES

    Wang, Zhichao; Dunn, Jennifer B.; Han, Jeongwoo; ...

    2015-11-04

    Corn oil recovery and conversion to biodiesel has been widely adopted at corn ethanol plants recently. The US EPA has projected 2.6 billion liters of biodiesel will be produced from corn oil in 2022. Corn oil biodiesel may qualify for federal renewable identification number (RIN) credits under the Renewable Fuel Standard, as well as for low greenhouse gas (GHG) emission intensity credits under California’s Low Carbon Fuel Standard. Because multiple products [ethanol, biodiesel, and distiller’s grain with solubles (DGS)] are produced from one feedstock (corn), however, a careful co-product treatment approach is required to accurately estimate GHG intensities of both ethanol and corn oil biodiesel and to avoid double counting of benefits associated with corn oil biodiesel production. This study develops four co-product treatment methods: (1) displacement, (2) marginal, (3) hybrid allocation, and (4) process-level energy allocation. Life-cycle GHG emissions for corn oil biodiesel were more sensitive to the choice of co-product allocation method because significantly less corn oil biodiesel is produced than corn ethanol at a dry mill. Corn ethanol life-cycle GHG emissions with the displacement, marginal, and hybrid allocation approaches are similar (61, 62, and 59 g CO2e/MJ, respectively). Although corn ethanol and DGS share upstream farming and conversion burdens in both the hybrid and process-level energy allocation methods, DGS bears a higher burden in the latter because it has lower energy content per selling price as compared to corn ethanol. As a result, with the process-level allocation approach, ethanol’s life-cycle GHG emissions are lower at 46 g CO2e/MJ. Corn oil biodiesel life-cycle GHG emissions from the marginal, hybrid allocation, and process-level energy allocation methods were 14, 59, and 45 g CO2e/MJ, respectively. Sensitivity analyses were conducted to investigate the influence of corn oil yield, soy biodiesel and defatted DGS displacement credits, and energy consumption for corn oil production and corn oil biodiesel production. Furthermore, this study’s results demonstrate that co-product treatment methodology strongly influences corn oil biodiesel life-cycle GHG emissions and can affect how this fuel is treated under the Renewable Fuel and Low Carbon Fuel Standards.
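
    To see why the allocation choice matters so much, consider a single shared upstream burden split between the two fuels on different bases: energy allocation necessarily equalizes the per-MJ intensity, while market-value allocation does not. All figures below are invented for illustration, not the study's GREET values:

    ```python
    total_gco2e = 1_000_000.0   # shared upstream burden for one production batch

    energy_mj = {"ethanol": 800_000.0, "corn_oil_biodiesel": 40_000.0}
    value_usd = {"ethanol": 500_000.0, "corn_oil_biodiesel": 45_000.0}

    def allocate(basis: dict) -> dict:
        total = sum(basis.values())
        return {k: total_gco2e * v / total for k, v in basis.items()}

    for name, basis in (("energy", energy_mj), ("market value", value_usd)):
        intensity = {k: round(share / energy_mj[k], 2)   # g CO2e per MJ of fuel
                     for k, share in allocate(basis).items()}
        print(f"{name} allocation: {intensity}")
    ```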

  2. Developing a theory of the societal lifecycle of cigarette smoking : explaining and anticipating trends using information feedback.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Nancy S.; Glass, Robert John, Jr.; Zagonel, Aldo A.

    Cigarette smoking presented the most significant public health challenge in the United States in the 20th Century and remains the single most preventable cause of morbidity and mortality in this country. A number of System Dynamics models exist that inform tobacco control policies. We reviewed them and discuss their contributions. We developed a theory of the societal lifecycle of smoking, using a parsimonious set of feedback loops to capture historical trends and explore future scenarios. Previous work did not explain the long-term historical patterns of smoking behaviors. Much of it used stock-and-flow structures to represent the decline in prevalence in the recent past. With noted exceptions, information feedbacks were not embedded in these models. We present and discuss our feedback-rich conceptual model and illustrate the results of a series of simulations. A formal analysis shows phenomena composed of different phases of behavior, with specific dominant feedbacks associated with each phase. We discuss the implications of our society's current phase, and conclude with simulations of what-if scenarios. Because System Dynamics models must contain information feedback to be able to anticipate tipping points and to help identify policies that exploit leverage in a complex system, we expanded this body of work to provide an endogenous representation of the century-long societal lifecycle of smoking.
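
    As a rough illustration of the kind of feedback-rich structure the authors argue for, the toy model below couples a single prevalence stock to a reinforcing social-imitation loop on uptake and a balancing risk-awareness loop on quitting. All parameters are invented; the sketch only shows how endogenous feedback can reproduce a rise-and-decline lifecycle rather than a one-way trend.

      # Toy System Dynamics sketch: one stock (smoking prevalence) with an
      # imitation (reinforcing) feedback on uptake and a risk-perception
      # (balancing) feedback on quitting. Parameters are invented.

      def simulate(years=100, dt=0.25):
          prevalence = 0.05        # fraction of population smoking
          risk_awareness = 0.0     # accumulates as harm becomes visible
          history = []
          for _ in range(int(years / dt)):
              uptake = 0.08 * prevalence * (1 - prevalence)    # social imitation
              quitting = (0.01 + 0.10 * risk_awareness) * prevalence
              risk_awareness += 0.02 * prevalence * dt
              prevalence += (uptake - quitting) * dt
              history.append(prevalence)
          return history

      traj = simulate()
      print(f"peak prevalence: {max(traj):.2f}")

    With these loops in place, prevalence rises while imitation dominates and then tips into decline as accumulated risk awareness strengthens the quitting flow, which is the qualitative behavior a stock-and-flow model without information feedback cannot generate endogenously.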

  3. Next generation control system for reflexive aerostructures

    NASA Astrophysics Data System (ADS)

    Maddux, Michael R.; Meents, Elizabeth P.; Barnell, Thomas J.; Cable, Kristin M.; Hemmelgarn, Christopher; Margraf, Thomas W.; Havens, Ernie

    2010-04-01

    Cornerstone Research Group Inc. (CRG) has developed and demonstrated a composite structural solution called reflexive composites for aerospace applications, featuring CRG's healable shape memory polymer (SMP) matrix. In reflexive composites, an integrated structural health monitoring (SHM) system autonomously monitors the structural health of composite aerospace structures, while integrated intelligent controls monitor data from the SHM system to characterize damage and initiate healing when damage is detected. Development of next-generation intelligent controls for reflexive composites was initiated for the purpose of integrating prognostic health monitoring capabilities into the reflexive composite structural solution. Initial efforts involved data generation through physical inspections and mechanical testing. Compression after impact (CAI) testing was conducted on composite-reinforced shape memory polymer samples to induce damage and investigate the effectiveness of matrix healing on mechanical performance. Non-destructive evaluation (NDE) techniques were employed to observe and characterize material damage. Restoration of mechanical performance was demonstrated through healing, while NDE data showed the location and size of damage and verified mitigation of damage post-healing. The data generated were used in the development of next-generation reflexive controls software. Data output from the intelligent controls could serve as input to Integrated Vehicle Health Management (IVHM) systems and Integrated Resilient Aircraft Controls (IRAC). Reflexive composite technology has the ability to reduce the maintenance required on composite structures through healing, offering the potential to significantly extend the service life of aerospace vehicles and reduce operating and lifecycle costs.
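
    The abstract does not give the control algorithms, but a reflexive control loop of the kind described could be organized along these lines. This is a hypothetical Python skeleton; the damage metric, threshold, and heater interface are all invented for illustration.

      # Hypothetical skeleton of a reflexive-composite control cycle:
      # read SHM sensor data, characterize damage, trigger SMP healing.

      DAMAGE_THRESHOLD = 0.3  # normalized damage index that triggers healing

      class HealingHeater:
          """Stand-in for hardware that warms a zone to the SMP transition
          temperature so the matrix can heal."""
          def activate(self, zone):
              print(f"heating zone {zone} for matrix healing")

      def damage_index(strains, baseline):
          """Crude damage metric: worst deviation from the baseline strain."""
          return max(abs(s - b) for s, b in zip(strains, baseline))

      def control_cycle(strains, baseline, zone, heater):
          if damage_index(strains, baseline) > DAMAGE_THRESHOLD:
              heater.activate(zone)   # initiate healing when damage is found
              return "healing"
          return "nominal"

      state = control_cycle([0.10, 0.52, 0.12], [0.11, 0.10, 0.12],
                            zone=7, heater=HealingHeater())
      print(state)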

  4. Hot forming and quenching pilot process development for low cost and low environmental impact manufacturing.

    NASA Astrophysics Data System (ADS)

    Hall, Roger W.; Foster, Alistair; Herrmann Praturlon, Anja

    2017-09-01

    The Hot Forming and in-tool Quenching (HFQ®) process is a proven technique for manufacturing complex-shaped stampings from high-strength aluminium. Its widespread uptake for high-volume production will be maximised if the additional investment cost of this process, compared to conventional deep-drawing techniques, can be wholly amortised. This paper discusses the use of three techniques to guide development decisions taken during upscaling of the HFQ® process. Models of process timing, cost, and life-cycle impact were found to be effective tools for identifying where the development budget should be focused in order to manufacture low-cost panels of different sizes from many different alloys in a sustainable way. The results confirm that raw material cost, panel trimming, and artificial ageing were among the highest contributors to final component cost. Additionally, the heat treatment and lubricant removal stages played a significant role in the overall life-cycle assessment of the final products. These findings confirmed novel furnace design, fast artificial ageing, and low-cost alloy development as development priorities.

  5. Integrating risk minimization planning throughout the clinical development and commercialization lifecycle: an opinion on how drug development could be improved

    PubMed Central

    Morrato, Elaine H; Smith, Meredith Y

    2015-01-01

    Pharmaceutical risk minimization programs are now an established requirement in the regulatory landscape. However, pharmaceutical companies have been slow to recognize and embrace the significant potential these programs offer in terms of enhancing trust with health care professionals and patients, and for providing a mechanism for bringing products to market that might not otherwise have been approved. Pitfalls of the current drug development process include risk minimization programs that are not data driven; missed opportunities to incorporate pragmatic methods and market-based insights; outmoded tools and data sources; lack of rapid evaluative learning to support timely adaptation; lack of systematic approaches to patient engagement; and questions about staffing and organizational infrastructure. We propose better integration of risk minimization with clinical drug development and commercialization work streams throughout the product lifecycle. We articulate a vision and propose broad adoption of organizational models for incorporating risk minimization expertise into the drug development process. Three organizational models are discussed and compared: the outsource/external vendor model, the embedded risk management specialist model, and the Center of Excellence. PMID:25750537

  6. Knowledge management in the engineering design environment

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2006-01-01

    The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.

  7. Point clouds in BIM

    NASA Astrophysics Data System (ADS)

    Antova, Gergana; Kunchev, Ivan; Mickrenska-Cherneva, Christina

    2016-10-01

    The representation of physical buildings in Building Information Models (BIM) has been a subject of research for four decades in the fields of Construction Informatics and GeoInformatics. Early digital representations of buildings mainly appeared as 3D drawings constructed with CAD software; the 3D representation of the buildings was only geometric, while semantics and topology were out of modelling focus. On the other hand, less detailed building representations, often focused on ‘outside’ representations, were also found in the form of 2D/2.5D GeoInformation models. Point clouds from 3D laser scanning data give a full and exact representation of the building geometry. The article presents different aspects and benefits of using point clouds in BIM at the different stages of a building's lifecycle.

  8. NREL to Assist in Development and Evaluation of Class 6 Plug-in Hybrid

    Science.gov Websites

    NREL will assist in the development and evaluation of a Class 6 plug-in hybrid, evaluating emissions as well as the potential impacts on life-cycle costs and barriers to implementation, with the goal of "maximizing potential energy efficiency, emissions, economic, and performance impacts."

  9. Stochastic airspace simulation tool development

    DOT National Transportation Integrated Search

    2009-10-01

    Modeling and simulation is often used to study the physical world when observation may not be practical. The overall goal of a recent and ongoing simulation tool project has been to provide a documented, lifecycle-managed, multi-processor c...

  10. Biofuels Research at EPA

    EPA Science Inventory

    The development of sustainable and clean biofuels is a national priority. To do so requires a life-cycle approach that includes consideration of feedstock production and logistics, and biofuel production, distribution, and end use. The US Environmental Protection Agency is suppor...

  11. Space Transportation Operations: Assessment of Methodologies and Models

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla

    2001-01-01

    The systems design process for future space transportation involves understanding multiple variables and their effect on lifecycle metrics. Variables such as technology readiness or potential environmental impact are qualitative, while variables such as reliability, operations costs or flight rates are quantitative. In deciding what new design concepts to fund, NASA needs a methodology that would assess the sum total of all relevant qualitative and quantitative lifecycle metrics resulting from each proposed concept. The objective of this research was to review the state of operations assessment methodologies and models used to evaluate proposed space transportation systems and to develop recommendations for improving them. It was found that, compared to the models available from other sources, the operations assessment methodology recently developed at Kennedy Space Center has the potential to produce a decision support tool that will serve as the industry standard. Towards that goal, a number of areas of improvement in the Kennedy Space Center's methodology are identified.

  13. Nanotechnology for environmentally sustainable electromobility

    NASA Astrophysics Data System (ADS)

    Ellingsen, Linda Ager-Wick; Hung, Christine Roxanne; Majeau-Bettez, Guillaume; Singh, Bhawna; Chen, Zhongwei; Whittingham, M. Stanley; Strømman, Anders Hammer

    2016-12-01

    Electric vehicles (EVs) powered by lithium-ion batteries (LIBs) or proton exchange membrane hydrogen fuel cells (PEMFCs) offer important potential climate change mitigation effects when combined with clean energy sources. The development of novel nanomaterials may bring about the next wave of technical improvements for LIBs and PEMFCs. If the next generation of EVs is to lead to not only reduced emissions during use but also environmentally sustainable production chains, the research on nanomaterials for LIBs and PEMFCs should be guided by a life-cycle perspective. In this Analysis, we describe an environmental life-cycle screening framework tailored to assess nanomaterials for electromobility. By applying this framework, we offer an early evaluation of the most promising nanomaterials for LIBs and PEMFCs and their potential contributions to the environmental sustainability of EV life cycles. Potential environmental trade-offs and gaps in nanomaterials research are identified to provide guidance for future nanomaterial developments for electromobility.

  14. Reusable Rocket Engine Advanced Health Management System. Architecture and Technology Evaluation: Summary

    NASA Technical Reports Server (NTRS)

    Pettit, C. D.; Barkhoudarian, S.; Daumann, A. G., Jr.; Provan, G. M.; ElFattah, Y. M.; Glover, D. E.

    1999-01-01

    In this study, we proposed an Advanced Health Management System (AHMS) functional architecture and conducted a technology assessment for liquid propellant rocket engine lifecycle health management. The purpose of the AHMS is to improve reusable rocket engine safety and to reduce between-flight maintenance. During the study, past and current reusable rocket engine health management-related projects were reviewed, data structures and health management processes of current rocket engine programs were assessed, and in-depth interviews with rocket engine lifecycle and system experts were conducted. A generic AHMS functional architecture, with primary focus on real-time health monitoring, was developed. Fourteen categories of technology tasks and development needs for implementation of the AHMS were identified, based on the functional architecture and our assessment of current rocket engine programs. Five key technology areas were recommended for immediate development, which (1) would provide immediate benefits to current engine programs, and (2) could be implemented with minimal impact on the current Space Shuttle Main Engine (SSME) and Reusable Launch Vehicle (RLV) engine controllers.

  15. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small-molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; a process capability and quality dashboard (PCQd); and an enhanced control strategy. US FDA guidance on Process Validation: General Principles and Practices (January 2011) encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.
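
    As one concrete example of the statistical elements listed above, a process capability check for a critical quality attribute can be computed directly from batch data. The snippet below is a minimal sketch with invented assay values; it is not the paper's PCQd tooling.

      # Process capability (Cpk) for a critical quality attribute,
      # computed from hypothetical batch assay results (% label claim).
      import statistics

      def cpk(samples, lsl, usl):
          """Capability index from sample mean and standard deviation."""
          mu = statistics.mean(samples)
          sigma = statistics.stdev(samples)
          return min(usl - mu, mu - lsl) / (3 * sigma)

      assay = [99.1, 100.2, 99.8, 100.5, 99.6, 100.1, 99.9, 100.3]
      print(f"Cpk = {cpk(assay, lsl=95.0, usl=105.0):.2f}")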

  16. The lifecycle of e-learning course in the adaptive educational environment

    NASA Astrophysics Data System (ADS)

    Gustun, O. N.; Budaragin, N. V.

    2017-01-01

    This article considers a lifecycle model for e-learning courses in an adaptive electronic educational environment. The model consists of three stages and nine phases. To implement adaptive control of the learning process, we determine the actions that must be undertaken at the different phases of the e-learning course lifecycle. A general characterization of the SPACEL technology for creating next-generation adaptive educational environments is also given.

  17. Life-cycle assessment of corn-based butanol as a potential transportation fuel.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.; Wang, M.; Liu, J.

    2007-12-31

    Butanol produced from bio-sources (such as corn) could have attractive properties as a transportation fuel. Production of butanol through a fermentation process called acetone-butanol-ethanol (ABE) has been the focus of increasing research and development efforts. Advances in ABE process development in recent years have led to drastic increases in ABE productivity and yields, making butanol production worthy of evaluation for use in motor vehicles. Consequently, chemical/fuel industries have announced their intention to produce butanol from bio-based materials. The purpose of this study is to estimate the potential life-cycle energy and emission effects associated with using bio-butanol as a transportation fuel. The study employs a well-to-wheels analysis tool--the Greenhouse Gases, Regulated Emissions and Energy Use in Transportation (GREET) model developed at Argonne National Laboratory--and the Aspen Plus® model developed by AspenTech. The study describes the butanol production from corn, including grain processing, fermentation, gas stripping, distillation, and adsorption for products separation. The Aspen® results that we obtained for the corn-to-butanol production process provide the basis for GREET modeling to estimate life-cycle energy use and greenhouse gas emissions. The GREET model was expanded to simulate the bio-butanol life cycle, from agricultural chemical production to butanol use in motor vehicles. We then compared the results for bio-butanol with those of conventional gasoline. We also analyzed the bio-acetone that is coproduced with bio-butanol as an alternative to petroleum-based acetone. Our study shows that, while the use of corn-based butanol achieves energy benefits and reduces greenhouse gas emissions, the results are affected by the methods used to treat the acetone that is co-produced in butanol plants.
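
    The well-to-wheels bookkeeping itself is simple even though the underlying models are not: the life-cycle GHG intensity of a fuel is the sum of stage burdens per MJ delivered. The sketch below shows only the accounting pattern; the stage values are placeholders, not GREET or Aspen Plus outputs.

      # Well-to-wheels GHG intensity as a sum of stage burdens (g CO2e/MJ).
      # All stage values are invented placeholders.

      def wtw_intensity(stages_g_per_mj):
          """Total life-cycle GHG intensity as the sum of stage burdens."""
          return sum(stages_g_per_mj.values())

      bio_butanol = {
          "corn farming": 25.0, "fertilizer production": 12.0,
          "ABE conversion": 30.0, "transport/distribution": 2.0,
          "combustion (biogenic CO2)": 0.0,
      }
      gasoline = {
          "crude recovery": 8.0, "refining": 13.0,
          "transport/distribution": 1.0, "combustion": 72.0,
      }
      for fuel, stages in (("bio-butanol", bio_butanol), ("gasoline", gasoline)):
          print(f"{fuel}: {wtw_intensity(stages):.0f} g CO2e/MJ")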

  18. 77 FR 65665 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-30

    ...: International Trade Administration. Title: International Client Life-cycle Multi-Purpose Forms. OMB Control... of an international client's life-cycle with CS, involves merging with other information collections...

  19. MODULES FOR EXPERIMENTS IN STELLAR ASTROPHYSICS (MESA): BINARIES, PULSATIONS, AND EXPLOSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paxton, Bill; Bildsten, Lars; Cantiello, Matteo

    We substantially update the capabilities of the open-source software instrument Modules for Experiments in Stellar Astrophysics (MESA). MESA can now simultaneously evolve an interacting pair of differentially rotating stars undergoing transfer and loss of mass and angular momentum, greatly enhancing the prior ability to model binary evolution. New MESA capabilities in fully coupled calculation of nuclear networks with hundreds of isotopes now allow MESA to accurately simulate the advanced burning stages needed to construct supernova progenitor models. Implicit hydrodynamics with shocks can now be treated with MESA, enabling modeling of the entire massive star lifecycle, from pre-main-sequence evolution to the onset of core collapse and nucleosynthesis from the resulting explosion. Coupling of the GYRE non-adiabatic pulsation instrument with MESA allows for new explorations of the instability strips for massive stars while also accelerating the astrophysical use of asteroseismology data. We improve the treatment of mass accretion, giving more accurate and robust near-surface profiles. A new MESA capability to calculate weak reaction rates “on-the-fly” from input nuclear data allows better simulation of accretion induced collapse of massive white dwarfs and the fate of some massive stars. We discuss the ongoing challenge of chemical diffusion in the strongly coupled plasma regime, and exhibit improvements in MESA that now allow for the simulation of radiative levitation of heavy elements in hot stars. We close by noting that the MESA software infrastructure provides bit-for-bit consistency for all results across all the supported platforms, a profound enabling capability for accelerating MESA's development.

  20. The development and operation of the international solar-terrestrial physics central data handling facility

    NASA Technical Reports Server (NTRS)

    Lehtonen, Kenneth

    1994-01-01

    The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) International Solar-Terrestrial Physics (ISTP) Program is committed to the development of a comprehensive, multi-mission ground data system which will support a variety of national and international scientific missions in an effort to study the flow of energy from the sun through the Earth-space environment, known as the geospace. A major component of the ISTP ground data system is an ISTP-dedicated Central Data Handling Facility (CDHF). Acquisition, development, and operation of the ISTP CDHF were delegated by the ISTP Project Office within the Flight Projects Directorate to the Information Processing Division (IPD) within the Mission Operations and Data Systems Directorate (MO&DSD). The ISTP CDHF supports the receipt, storage, and electronic access of the full complement of ISTP Level-zero science data; serves as the linchpin for the centralized processing and long-term storage of all key parameters generated either by the ISTP CDHF itself or received from external, ISTP Program approved sources; and provides the required networking and 'science-friendly' interfaces for the ISTP investigators. Once connected to the ISTP CDHF, the online catalog of key parameters can be browsed from their remote processing facilities for the immediate electronic receipt of selected key parameters using the NASA Science Internet (NSI), managed by NASA's Ames Research Center. The purpose of this paper is twofold: (1) to describe how the ISTP CDHF was successfully implemented and operated to support initially the Japanese Geomagnetic Tail (GEOTAIL) mission and correlative science investigations, and (2) to describe how the ISTP CDHF has been enhanced to support ongoing as well as future ISTP missions. Emphasis will be placed on how various project management approaches were undertaken that proved to be highly effective in delivering an operational ISTP CDHF to the Project on schedule and within budget. Examples to be discussed include: the development of superior teams; the use of Defect Causal Analysis (DCA) concepts to improve the software development process in a pilot Total Quality Management (TQM) initiative; and the implementation of a robust architecture that will be able to support the anticipated growth in the ISTP Program science requirements with only incremental upgrades to the baseline system. Further examples include the use of automated data management software and the implementation of Government and/or industry standards, whenever possible, into the hardware and software development life-cycle. Finally, the paper will also report on several new technologies (for example, the installation of a Fiber Data Distribution Interface network) that were successfully employed.

  1. A PLM-based automated inspection planning system for coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Zhao, Haibin; Wang, Junying; Wang, Boxiong; Wang, Jianmei; Chen, Huacheng

    2006-11-01

    With the rapid progress of Product Lifecycle Management (PLM) in the manufacturing industry, automatic generation of product inspection plans and their integration with other activities in the product lifecycle play important roles in quality control, but the techniques for these purposes lag behind those of CAD/CAM. An automatic inspection planning system for the Coordinate Measuring Machine (CMM) was therefore developed to improve the automation of measurement, based on integrating the inspection system into PLM. Feature information representation is achieved through a central PLM database; the measuring strategy is optimized through the integration of multiple sensors; a reasonable number and distribution of inspection points are calculated and designed with the guidance of statistical theory and a synthesis distribution algorithm; and a collision avoidance method is proposed to generate collision-free inspection paths with high efficiency. Information mapping is performed between Neutral Interchange Files (NIFs), such as STEP, DML, DMIS, and XML, to realize information integration with other activities in the product lifecycle, such as design, manufacturing, and inspection execution. A simulation was carried out to demonstrate the feasibility of the proposed system. As a result, the inspection process becomes simpler, and good results were obtained from the integration in PLM.

  2. Comparison of various staining methods for the detection of Cryptosporidium in cell-free culture.

    PubMed

    Boxell, Annika; Hijjawi, Nawal; Monis, Paul; Ryan, Una

    2008-09-01

    The complete development of Cryptosporidium in host cell-free medium, first described in 2004, represented a significant advance that can facilitate many aspects of Cryptosporidium research. A current limitation of host cell-free cultivation is the difficulty involved in visualising the life-cycle stages, as they are very small, morphologically difficult to identify, and dispersed throughout the medium. This is in contrast to conventional cell culture methods for Cryptosporidium, where it is possible to focus on the host cells and view the foci of infection on them. In the present study, we compared three specific and three non-specific techniques for visualising Cryptosporidium parvum life-cycle stages in cell-free culture: antibody staining using anti-sporozoite and anti-oocyst wall antibodies (Sporo-Glo and Crypto Cel); fluorescent in situ hybridization (FISH) using a Cryptosporidium-specific rRNA oligonucleotide probe; and the non-specific dyes Texas Red, carboxyfluorescein diacetate succinimidyl ester (CFSE), and 4',6-diamidino-2-phenylindole dihydrochloride (DAPI). The results revealed that a combination of Sporo-Glo and Crypto Cel staining enabled easy and reliable identification of all life-cycle stages.

  3. Design for life-cycle profit with simultaneous consideration of initial manufacturing and end-of-life remanufacturing

    NASA Astrophysics Data System (ADS)

    Kwak, Minjung; Kim, Harrison

    2015-01-01

    Remanufacturing is emerging as a promising solution for achieving green, profitable businesses. This article considers a manufacturer that produces new products and also remanufactured versions of the new products that become available at the end of their life cycle. For such a manufacturer, design decisions at the initial design stage determine both the current profit from manufacturing and future profit from remanufacturing. To maximize the total profit, design decisions must carefully consider both ends of product life cycle, i.e. manufacturing and end-of-life stages. This article proposes a decision-support model for the life-cycle design using mixed-integer nonlinear programming. With an aim to maximize the total life-cycle profit, the proposed model searches for an (at least locally) optimal product design (i.e. design specifications and the selling price) for the new and remanufactured products. It optimizes both the initial design and design upgrades at the end-of-life stage and also provides corresponding production strategies, including production quantities and take-back rate. The model is extended to a multi-objective model that maximizes both economic profit and environmental-impact saving. To illustrate, the developed model is demonstrated with an example of a desktop computer.
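
    A drastically simplified version of the underlying trade-off can be written down directly: pick the selling price and take-back rate that maximize manufacturing profit plus end-of-life remanufacturing profit. The demand and cost models below are invented placeholders standing in for the paper's mixed-integer nonlinear program.

      # Toy life-cycle profit search over price and take-back rate.
      # All demand, cost, and price figures are invented.

      def lifecycle_profit(price, take_back_rate):
          demand = max(0.0, 1000 - 2.0 * price)          # linear demand model
          mfg_profit = (price - 300) * demand            # unit cost of 300
          returns = take_back_rate * demand              # cores collected
          reman_profit = (0.6 * price - 120) * returns   # reman sells at 60%
          reman_profit -= 5000 * take_back_rate          # collection program
          return mfg_profit + reman_profit

      best = max((lifecycle_profit(p, r), p, r)
                 for p in range(300, 501, 5)
                 for r in (0.0, 0.25, 0.5, 0.75, 1.0))
      print(f"profit={best[0]:.0f} at price={best[1]}, take-back={best[2]}")

    The point of the full model is the same coupling visible here: a design or price choice that maximizes initial profit alone can leave end-of-life profit on the table.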

  4. A comprehensive methodology for intelligent systems life-cycle cost modelling

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David J.; Lum, Henry, Jr.

    1993-01-01

    As NASA moves into the last part of the twentieth century, the desire to do 'business as usual' has been replaced with the mantra 'faster, cheaper, better'. Recently, new work has been done to show how the implementation of advanced technologies, such as intelligent systems, will affect the cost of a system design or the operational cost of a spacecraft mission. The impact of the degree of autonomy or intelligence in systems, and of human participation, on a given program is manifested most significantly during the program's operational phases, while the decisions of who performs what tasks and how much automation is incorporated into the system are all made during the design and development phases. Employing intelligent systems and automation is not an either/or question, but one of degree. The question is what level of automation and autonomy will provide the optimal trade-off between performance and cost. Conventional costing methodologies, however, are unable to show the significance of technologies like these in terms of traceable cost benefits and reductions in the various phases of the spacecraft's lifecycle. The proposed comprehensive life-cycle methodology can address intelligent system technologies as well as others that affect human-machine operational modes.

  5. SCRL-Model for Human Space Flight Operations Enterprise Supply Chain

    NASA Technical Reports Server (NTRS)

    Tucker, Brian

    2010-01-01

    The SCRL model is a standard approach for evaluating and configuring adaptable and sustainable program and mission supply chains at an enterprise level, taking an end-to-end view across the total lifecycle. It evaluates the readiness of the supply chain during the supply chain development phase.

  6. The Development of Lifecycle Data for Hydrogen Fuel Production and Delivery

    DOT National Transportation Integrated Search

    2017-10-01

    An evaluation of renewable hydrogen production technologies anticipated to be available in the short, mid- and long-term timeframes was conducted. Renewable conversion pathways often rely on a combination of renewable and fossil energy sources, with ...

  7. Green Infrastructure Models and Tools

    EPA Science Inventory

    The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...

  8. Abrasion-resistant concrete mix designs for precast bridge deck panels.

    DOT National Transportation Integrated Search

    2010-08-01

    The report documents laboratory investigations undertaken to develop high performance concrete (HPC) for precast and pre-stressed bridge deck components that would reduce the life-cycle cost of bridges by improving the studded tire wear (abrasion) re...

  9. An XML-Based Manipulation and Query Language for Rule-Based Information

    NASA Astrophysics Data System (ADS)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring processes required in activities such as disease management and customer relationship management, and are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents a manipulation and query language developed to facilitate the maintenance of this object during its life-cycle and to query the information it contains. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.
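
    The paper's language itself is not reproduced in the abstract, but the general idea of treating a rule set as one queryable XML object can be shown with the Python standard library alone. The rule schema below is hypothetical, invented only for illustration.

      # Rules held as one XML object, queried with ElementTree's
      # XPath-style predicates. The schema is a made-up example.
      import xml.etree.ElementTree as ET

      doc = ET.fromstring("""
      <rules>
        <rule id="r1" status="active">
          <condition>glucose &gt; 180</condition>
          <action>notify physician</action>
        </rule>
        <rule id="r2" status="retired">
          <condition>bp &gt; 140/90</condition>
          <action>schedule follow-up</action>
        </rule>
      </rules>""")

      # Query the rule object for all active monitoring rules.
      for rule in doc.findall(".//rule[@status='active']"):
          print(rule.get("id"), "->", rule.findtext("action"))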

  10. A simplified life-cycle cost comparison of various engines for small helicopter use

    NASA Technical Reports Server (NTRS)

    Civinskas, K. C.; Fishbach, L. M.

    1974-01-01

    A ten-year, life-cycle cost comparison is made of the following engines for small helicopter use: (1) simple turboshaft; (2) regenerative turboshaft; (3) compression-ignition reciprocator; (4) spark-ignited rotary; and (5) spark-ignited reciprocator. Based on a simplified analysis and somewhat approximate data, the simple turboshaft engine apparently has the lowest costs for mission times up to just under 2 hours. At 2 hours and above, the regenerative turboshaft appears promising. The reciprocating and rotary engines are less attractive, requiring from 10 percent to 80 percent more aircraft to have the same total payload capability as a given number of turbine powered craft. A nomogram was developed for estimating total costs of engines not covered in this study.
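
    The structure of such a comparison is straightforward: acquisition cost plus operating costs accumulated over the ten-year horizon. The figures below are invented placeholders rather than the report's 1974 data; only the accounting pattern is the point.

      # Back-of-envelope ten-year life-cycle cost comparison.
      # Every figure is an invented placeholder.

      def ten_year_cost(acquisition, fuel_per_hr, maint_per_hr, hrs_per_year):
          return acquisition + 10 * hrs_per_year * (fuel_per_hr + maint_per_hr)

      engines = {
          "simple turboshaft":       ten_year_cost(120_000, 60, 25, 400),
          "regenerative turboshaft": ten_year_cost(150_000, 45, 30, 400),
      }
      for name, cost in engines.items():
          print(f"{name}: ${cost:,.0f}")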

  11. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    NASA Astrophysics Data System (ADS)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    This article considers the support of scientific projects throughout their lifecycle in a computer center, in every aspect of that support. The Configuration Management system plays a connecting role in the processes related to the provision and support of computer center services. Given the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which means higher requirements for the Configuration Management system. For each aspect of research project support, the paper reviews the influence of the Configuration Management system and describes the development of the corresponding elements of the system.

  12. Land processes distributed active archive center product lifecycle plan

    USGS Publications Warehouse

    Daucsavage, John C.; Bennett, Stacie D.

    2014-01-01

    The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center and the National Aeronautics and Space Administration (NASA) Earth Science Data System Program worked together to establish, develop, and operate the Land Processes (LP) Distributed Active Archive Center (DAAC) to provide stewardship for NASA’s land processes science data. These data are critical science assets that serve the land processes science community with potential value beyond any immediate research use, and therefore need to be accounted for and properly managed throughout their lifecycle. A fundamental LP DAAC objective is to enable permanent preservation of these data and information products. The LP DAAC accomplishes this by bridging data producers and permanent archival resources while providing intermediate archive services for data and information products.

  13. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities, techniques that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI®) models as de facto process improvement frameworks for improving business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, together with process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gaps between the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, the gaps that exist in the literature, and a comparison of the SEI's "healthy ingredients" of a process performance model against the courses taught in those degree programs. The research also heightens awareness that academicians have conducted little research on the applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research further includes a Monte Carlo simulation optimization model and dashboard that demonstrate the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
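
    A Monte Carlo model of the kind described can be sketched compactly: sample the uncertain ACSI drivers, propagate them through a weighted index, and summarize the predicted score distribution. The weights and distributions below are illustrative assumptions, not the ACSI specification.

      # Monte Carlo sketch: predict a customer satisfaction index from
      # uncertain drivers. Weights and distributions are invented.
      import random

      def simulate_acsi(n=10_000):
          scores = []
          for _ in range(n):
              quality = random.gauss(82, 4)        # perceived quality
              expectations = random.gauss(78, 5)   # customer expectations
              value = random.gauss(75, 6)          # perceived value
              scores.append(0.5 * quality + 0.2 * expectations + 0.3 * value)
          scores.sort()
          n = len(scores)
          return scores[n // 2], scores[int(0.05 * n)], scores[int(0.95 * n)]

      median, p5, p95 = simulate_acsi()
      print(f"predicted ACSI: median {median:.1f} "
            f"(90% interval {p5:.1f}-{p95:.1f})")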

  14. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  15. Life-cycle environmental performance of renewable building materials in the context of residential construction : phase II research report : an extension to the 2005 phase I research report. Module N, Life-cycle inventory of manufacturing prefinished engineered wood flooring in the eastern United States

    Treesearch

    Richard D. Bergman; Scott A. Bowe

    2011-01-01

    This study summarizes the environmental performance of prefinished engineered wood flooring using life-cycle inventory (LCI) analysis. Using primary mill data gathered from manufacturers in the eastern United States and applying the methods found in Consortium for Research on Renewable Industrial Materials (CORRIM) Research Guidelines and International Organization of...

  16. Parametric CERs (Cost Estimate Relationships) for Replenishment Repair Parts (Selected U.S. Army Helicopters and Combat Vehicles)

    DTIC Science & Technology

    1989-07-31

    Information System (OSMIS). The long-range objective is to develop methods to determine total operating and support (O&S) costs within life-cycle cost...objective was to assess the feasibility of developing cost estimating relationships (CERs) based on data from the Army Operating and Support Management

  17. Key Decision Record Creation and Approval Module

    NASA Technical Reports Server (NTRS)

    Hebert, Barrt; Messer, Elizabeth A.; Albasini, Colby; Le, Thang; ORourke, William, Sr.; Stiglets, Tim; Strain, Ted

    2012-01-01

    Retaining good key decision records is critical to ensuring the success of a project or operation. Having adequately documented decisions with supporting documents and rationale can greatly reduce the amount of rework or reinvention over a project's, vehicle's, or facility's lifecycle. Stennis Space Center developed and uses a software tool that automates the Key Decision Record (KDR) process for its engineering and test projects. It provides the ability for a user to log key decisions that are made during the course of a project. By customizing Parametric Technology Corporation's (PTC) Windchill product, the team was able to log all information about a decision and electronically route that information for approval. Customizing the Windchill product allowed the team to directly connect these decisions to the engineering data that they might affect and to notify data owners of the decision. The user interface was created in JSP and JavaScript, within the OOTB (Out of the Box) Windchill product, allowing users to create KDRs. Not only does this interface allow users to create and track KDRs, but it also plugs directly into the OOTB ability to associate these decision records with other relevant engineering data such as drawings, designs, models, requirements, or specifications.

  18. POD evaluation using simulation: A phased array UT case on a complex geometry part

    NASA Astrophysics Data System (ADS)

    Dominguez, Nicolas; Reverdy, Frederic; Jenson, Frederic

    2014-02-01

    The use of Probability of Detection (POD) for demonstrating NDT performance is a key link in product lifecycle management. The POD approach is to apply the given NDT procedure to a series of known flaws in order to estimate the probability of detection as a function of flaw size. A POD is relevant if and only if the NDT operations are carried out within the range of variability authorized by the procedure. Such experimental campaigns require datasets large enough to cover the range of variability with sufficient occurrences to build reliable POD statistics, making POD curves expensive to obtain. In the last decade, research activities in the USA (the MAPOD group) and later in Europe (the SISTAE and PICASSO projects) have pursued the idea of using models and simulation tools to feed POD estimation. This paper presents an example of simulation-supported POD applied to the inspection procedure of a complex, fully 3D geometry part using phased array ultrasonic testing. It illustrates the methodology and the associated tools developed in the CIVA software, and finally provides elements of further progress in the domain.
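
    A typical building block of such a POD study, whether the detection data come from experiments or from simulation, is fitting a POD curve to hit/miss results with a logistic model in log flaw size. The sketch below uses invented data and plain maximum likelihood by gradient ascent; CIVA's actual POD module is more sophisticated.

      # Hit/miss POD fit: POD(a) = logistic(b0 + b1*ln(a)), maximum
      # likelihood by gradient ascent. Flaw data are invented.
      import math

      flaws = [(0.5, 0), (0.8, 0), (1.0, 0), (1.2, 1), (1.5, 0),
               (1.8, 1), (2.0, 1), (2.5, 1), (3.0, 1), (4.0, 1)]
      # (flaw size in mm, detected? 1/0)

      def pod(a, b0, b1):
          return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

      b0, b1, lr = 0.0, 0.0, 0.1
      for _ in range(5000):              # ascend the Bernoulli log-likelihood
          g0 = sum(y - pod(a, b0, b1) for a, y in flaws)
          g1 = sum((y - pod(a, b0, b1)) * math.log(a) for a, y in flaws)
          b0, b1 = b0 + lr * g0, b1 + lr * g1

      a90 = math.exp((math.log(9.0) - b0) / b1)   # size where POD = 0.90
      print(f"a90 = {a90:.2f} mm")

    The a90 value read off the fitted curve, the flaw size detected with 90% probability, is the usual headline output of a POD campaign.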

  19. Creating the Infrastructure for Rapid Application Development and Processing Response to the HIRDLS Radiance Anomaly

    NASA Astrophysics Data System (ADS)

    Cavanaugh, C.; Gille, J.; Francis, G.; Nardi, B.; Hannigan, J.; McInerney, J.; Krinsky, C.; Barnett, J.; Dean, V.; Craig, C.

    2005-12-01

    The High Resolution Dynamics Limb Sounder (HIRDLS) instrument onboard the NASA Aura spacecraft experienced a rupture of the thermal blanketing material (Kapton) during the rapid depressurization of launch. The Kapton draped over the HIRDLS scan mirror, severely limiting the aperture through which HIRDLS views space and Earth's atmospheric limb. In order for HIRDLS to achieve its intended measurement goals, rapid characterization of the anomaly and rapid recovery from it were required. The recovery centered on a new processing module inserted into the standard HIRDLS processing scheme, with a goal of minimizing the effect of the anomaly on the already existing processing modules. We describe the software infrastructure on which the new processing module was built, and how that infrastructure allows for rapid application development and processing response. The scope of the infrastructure spans three distinct anomaly recovery steps and the means for their intercommunication. Each of the three recovery steps (removing the Kapton-induced oscillation in the radiometric signal, removing the Kapton signal contamination upon the radiometric signal, and correcting for the partially-obscured atmospheric view) is completely modularized and insulated from the other steps, allowing focused and rapid application development towards a specific step, and neutralizing unintended inter-step influences, thus greatly shortening the design-development-test lifecycle. The intercommunication is also completely modularized and has a simple interface to which the three recovery steps adhere, allowing easy modification and replacement of specific recovery scenarios, thereby heightening the processing response.

  20. 49 CFR 236.917 - Retention of records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-Based Signal and Train Control Systems § 236.917 Retention of records. (a) What life-cycle and...: (i) For the life-cycle of the product, adequate documentation to demonstrate that the PSP meets the...
