Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes for Digital... Software Life Cycle Processes for Digital Computer Software used in Safety Systems of Nuclear Power Plants... clarifications, the enhanced consensus practices for developing software life-cycle processes for digital...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Developing Software Life Cycle Processes Used in... revised regulatory guide (RG), revision 1 of RG 1.173, "Developing Software Life Cycle Processes for... Developing a Software Project Life Cycle Process," issued 2006, with the clarifications and exceptions as...
Increase Return on Investment of Software Development Life Cycle by Managing the Risk - A Case Study
2015-04-01
for increasing the return on investment during the Software Development Life Cycle (SDLC) through selected quantitative analyses employing both the...defect rate, return on investment (ROI), software development life cycle (SDLC)...becomes comfortable due to its intricacies and learning cycle. The same may be said with respect to software development life cycle (SDLC) management
Software security checklist for the software life cycle
NASA Technical Reports Server (NTRS)
Gilliam, D. P.; Wolfe, T. L.; Sherif, J. S.
2002-01-01
A formal approach to security in the software life cycle is essential to protect corporate resources. However, little thought has been given to this aspect of software development. Due to its criticality, security should be integrated as a formal approach in the software life cycle.
The embedded software life cycle - An expanded view
NASA Technical Reports Server (NTRS)
Larman, Brian T.; Loesh, Robert E.
1989-01-01
Six common issues that are encountered in the development of software for embedded computer systems are discussed from the perspective of their interrelationships with the development process and/or the system itself. Particular attention is given to concurrent hardware/software development, prototyping, the inaccessibility of the operational system, fault tolerance, the long life cycle, and inheritance. It is noted that the life cycle for embedded software must include elements beyond simply the specification and implementation of the target software.
ERIC Educational Resources Information Center
Kramer, Aleksey
2013-01-01
The topic of software security has become paramount in information technology (IT) related scholarly research. Researchers have addressed numerous software security topics touching on all phases of the Software Development Life Cycle (SDLC): requirements gathering phase, design phase, development phase, testing phase, and maintenance phase.…
Security Risks: Management and Mitigation in the Software Life Cycle
NASA Technical Reports Server (NTRS)
Gilliam, David P.
2004-01-01
A formal approach to managing and mitigating security risks in the software life cycle is requisite to developing software that has a higher degree of assurance that it is free of security defects which pose risk to the computing environment and the organization. Due to its criticality, security should be integrated as a formal approach in the software life cycle. Both a software security checklist and assessment tools should be incorporated into this life cycle process and integrated with a security risk assessment and mitigation tool. The current research at JPL addresses these areas through the development of a Software Security Assessment Instrument (SSAI) and its integration with a Defect Detection and Prevention (DDP) risk management tool.
Addressing software security risk mitigations in the life cycle
NASA Technical Reports Server (NTRS)
Gilliam, David; Powell, John; Haugh, Eric; Bishop, Matt
2003-01-01
The NASA Office of Safety and Mission Assurance (OSMA) has funded the Jet Propulsion Laboratory (JPL) with a Center Initiative, 'Reducing Software Security Risk through an Integrated Approach' (RSSR), to address this need. The Initiative is a formal approach to addressing software security in the life cycle through the instantiation of a Software Security Assessment Instrument (SSAI) for the development and maintenance life cycles.
Development of a Communications Front End Processor (FEP) for the VAX-11/780 Using an LSI-11/23.
1983-12-01
Approach... Software Development Life Cycle... Requirements Analysis...proven to be useful [25] during the Software Development Life Cycle of a project. Development tools and documentation aids used throughout this effort...include "Structure Charts" (ref Appendix B), a "Data Dictionary" (ref Appendix C), and Program Design Language (PDL). 1.5.1 Software Development Life...
Software Program: Software Management Guidebook
NASA Technical Reports Server (NTRS)
1996-01-01
The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.
Recommended approach to software development, revision 3
NASA Technical Reports Server (NTRS)
Landis, Linda; Waligora, Sharon; Mcgarry, Frank; Pajerski, Rose; Stark, Mike; Johnson, Kevin Orlin; Cover, Donna
1992-01-01
Guidelines are presented for an organized, disciplined approach to software development, based on studies conducted by the Software Engineering Laboratory (SEL) since 1976. The document describes methods and practices for each phase of a software development life cycle that starts with requirements definition and ends with acceptance testing. For each defined life cycle phase, guidelines are presented for the development process and its management, and for the products produced and their reviews.
Addressing software security and mitigations in the life cycle
NASA Technical Reports Server (NTRS)
Gilliam, David; Powell, John; Haugh, Eric; Bishop, Matt
2003-01-01
Traditionally, security is viewed as an organizational and Information Technology (IT) systems function comprising firewalls, intrusion detection systems (IDS), system security settings, and patches to the operating system (OS) and the applications running on it. Until recently, little thought has been given to the importance of security as a formal approach in the software life cycle. The Jet Propulsion Laboratory has approached the problem through the development of an integrated formal Software Security Assessment Instrument (SSAI) with six foci for the software life cycle.
Addressing software security and mitigations in the life cycle
NASA Technical Reports Server (NTRS)
Gilliam, David; Powell, John; Haugh, Eric; Bishop, Matt
2004-01-01
Traditionally, security is viewed as an organizational and Information Technology (IT) systems function comprising firewalls, intrusion detection systems (IDS), system security settings, and patches to the operating system (OS) and the applications running on it. Until recently, little thought has been given to the importance of security as a formal approach in the software life cycle. The Jet Propulsion Laboratory has approached the problem through the development of an integrated formal Software Security Assessment Instrument (SSAI) with six foci for the software life cycle.
Software Development Life Cycle Security Issues
NASA Astrophysics Data System (ADS)
Kaur, Daljit; Kaur, Parminder
2011-12-01
Security is nowadays one of the major problems in software for many reasons. The main cause is that software cannot withstand security attacks because of vulnerabilities introduced by defective specifications, design, and implementation. We conducted a survey asking software developers, project managers, and other people involved in software development about their security awareness and its implementation in the Software Development Life Cycle (SDLC). The survey was open to participation for three weeks, and this paper explains the survey results.
Recommended approach to software development
NASA Technical Reports Server (NTRS)
Mcgarry, F. E.; Page, J.; Eslinger, S.; Church, V.; Merwarth, P.
1983-01-01
A set of guidelines is presented for an organized, disciplined approach to software development, based on data collected and studied from 46 flight dynamics software development projects. Methods and practices for each phase of a software development life cycle that starts with requirements analysis and ends with acceptance testing are described; maintenance and operation are not addressed. For each defined life cycle phase, guidelines are presented for the development process and its management, and for the products produced and their reviews.
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software Life-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
Information system life-cycle and documentation standards, volume 1
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
The Software Management and Assurance Program (SMAP) Information System Life-Cycle and Documentation Standards Document describes the Version 4 standard information system life-cycle in terms of processes, products, and reviews. The description of the products includes detailed documentation standards. The standards in this document set can be applied to the life-cycle, i.e., to each phase in the system's development, and to the documentation of all NASA information systems. This provides consistency across the agency as well as visibility into the completeness of the information recorded. An information system is software-intensive, but consists of any combination of software, hardware, and operational procedures required to process, store, or transmit data. This document defines a standard life-cycle model and content for associated documentation.
Standard practices for the implementation of computer software
NASA Technical Reports Server (NTRS)
Irvine, A. P. (Editor)
1978-01-01
A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.
Metrinome: Continuous Monitoring and Security Validation of Distributed Systems
2014-03-01
Integration into the SDLC (Software Development Life Cycle), Retrieved Nov 06 2013, https://www.owasp.org/images/f/f6/Integration_into_the_SDLC.ppt [2...assessment as part of the software development life cycle, current approaches suffer from a number of shortcomings that limit their application in...with assessing security and correct functionality. Second, integrated and end-to-end testing and experimentation is often postponed until software
Development of computer software for pavement life cycle cost analysis.
DOT National Transportation Integrated Search
1988-01-01
The life cycle cost analysis program (LCCA) is designed to automate and standardize life cycle costing in Virginia. It allows the user to input information necessary for the analysis, and it then completes the calculations and produces a printed copy...
Resource utilization during software development
NASA Technical Reports Server (NTRS)
Zelkowitz, Marvin V.
1988-01-01
This paper discusses resource utilization over the life cycle of software development and the role that the current 'waterfall' model plays in the actual software life cycle. Software production in the NASA environment was analyzed to measure these differences. The data from 13 different projects were collected by the Software Engineering Laboratory at NASA Goddard Space Flight Center and analyzed for similarities and differences. The results indicate that the waterfall model is not very realistic in practice, and that as technology introduces further perturbations to this model with concepts like executable specifications, rapid prototyping, and wide-spectrum languages, we need to modify our model of this process.
A Recommended Framework for the Network-Centric Acquisition Process
2009-09-01
ISO/IEC 12207, Systems and Software Engineering - Software Life-Cycle Processes; ANSI/EIA 632, Processes for Engineering a System. There are...engineering [46]. Some of the process models presented in the DAG are: ISO/IEC 15288, Systems and Software Engineering - System Life-Cycle Processes...(e.g., ISO, IA, Security, etc.). Vetting developers helps ensure that they are using industry best practices and maximizes IA compliance
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and achieving a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any can be implemented in their software assurance life cycle process.
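As a minimal, purely illustrative sketch (not drawn from the report above), the following Python snippet shows how a few commonly cited SQA metrics can be computed from basic project counts; the metric names and formulas are standard textbook definitions, and the sample numbers are invented.

    # Illustrative sketch: computing a few common SQA metrics from raw project
    # counts. Formulas are standard definitions; the input values are made up.

    def defect_density(defects_found: int, ksloc: float) -> float:
        """Defects per thousand source lines of code."""
        return defects_found / ksloc

    def test_pass_rate(tests_passed: int, tests_run: int) -> float:
        """Fraction of executed test cases that passed."""
        return tests_passed / tests_run

    def requirements_volatility(changed: int, baselined: int) -> float:
        """Changed (added/modified/deleted) requirements relative to the baseline."""
        return changed / baselined

    if __name__ == "__main__":
        print(f"Defect density: {defect_density(42, 12.5):.2f} defects/KSLOC")
        print(f"Test pass rate: {test_pass_rate(930, 1000):.1%}")
        print(f"Requirements volatility: {requirements_volatility(18, 240):.1%}")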
Ontology for Life-Cycle Modeling of Water Distribution Systems: Model View Definition
2013-06-01
Research and Development Center, Construction Engineering Research Laboratory (ERDC-CERL) to develop a life-cycle building model have resulted in the definition of a "core" building information model that contains...developed experimental BIM models using commercial off-the-shelf (COTS) software. Those models represent three types of typical low-rise Army
Software engineering standards and practices
NASA Technical Reports Server (NTRS)
Durachka, R. W.
1981-01-01
Guidelines are presented for the preparation of a software development plan. The various phases of a software development project are discussed throughout its life cycle including a general description of the software engineering standards and practices to be followed during each phase.
Learning & Personality Types: A Case Study of a Software Design Course
ERIC Educational Resources Information Center
Ahmed, Faheem; Campbell, Piers; Jaffar, Ahmad; Alkobaisi, Shayma; Campbell, Julie
2010-01-01
The software industry has continued to grow over the past decade and there is now a need to provide education and hands-on training to students in the various phases of the software life cycle. Software design is one of the vital phases of the software development cycle. Psychological theories assert that not everybody is fit for all kinds of tasks as…
A "Rainmaker" Process for Developing Internet-Based Retail Businesses
ERIC Educational Resources Information Center
Abrahams, Alan S.; Singh, Tirna
2011-01-01
Various systems development life cycles and business development models have been popularized by information systems researchers and practitioners over a number of decades. In the case of systems development life cycles, these have been targeted at software development projects within an organization, typically involving analysis, design,…
Impact of Agile Software Development Model on Software Maintainability
ERIC Educational Resources Information Center
Gawali, Ajay R.
2012-01-01
Software maintenance and support costs account for up to 60% of the overall software life cycle cost and often burdens tightly budgeted information technology (IT) organizations. Agile software development approach delivers business value early, but implications on software maintainability are still unknown. The purpose of this quantitative study…
Using Modified Fagan Inspections to Control Rapid System Development
NASA Technical Reports Server (NTRS)
Griesel, M. A.; Welz, L. L.
1994-01-01
The Jet Propulsion Laboratory (JPL) has been developing new approaches to software and system development to shorten life cycle time and reduce total life-cycle cost, while maintaining product quality. One such approach has been taken by the Just-In-Time (JIT) Materiel Acquisition System Development Project.
The dynamics of software development project management: An integrative systems dynamic perspective
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.; Abdel-Hamid, T.
1984-01-01
Rather than continuing to focus on software development projects per se, the system dynamics modeling approach outlined is extended to investigate a broader set of issues pertaining to the software development organization. Rather than trace the life cycle(s) of one or more software projects, the focus is on the operations of a software development department as a continuous stream of software products is developed, placed into operation, and maintained. A number of research questions are "ripe" for investigation, including: (1) the efficacy of different organizational structures in different software development environments, (2) personnel turnover, (3) the impact of management approaches such as management by objectives, and (4) the organizational/environmental determinants of productivity.
1982-03-01
pilot systems. Magnitude of the mutant error is classified as: o Program does not compute. o Program computes but does not run test data. o Program...Test and Integration...The Mapping of SQM to the SDLC...ADS Development...and funds. While the test phase concludes the normal development cycle, one should realize that with software the development continues in the
1986-05-07
Cycle? Moderator: Christine M. Anderson. Dennis D. Doe, Manager of Engineering Software and Artificial Intelligence, Boeing Aerospace Company. In... intelligence systems development process affect the life cycle? Artificial intelligence developers seem to be the last haven for people who don’t...of Engineering Software and Artificial Intelligence at the Boeing Aerospace Company. In this capacity, Mr. Doe is the focal point for software
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
Formal assessment instrument for ensuring the security of NASA's networks, systems and software
NASA Technical Reports Server (NTRS)
Gilliam, D. P.; Powell, J. D.; Sherif, J.
2002-01-01
To address the problem of security for NASA's networks, systems and software, NASA has funded the Jet Propulsion Lab in conjunction with UC Davis to begin work on developing a software security assessment instrument for use in the software development and maintenance life cycle.
Towards Model-Driven End-User Development in CALL
ERIC Educational Resources Information Center
Farmer, Rod; Gruba, Paul
2006-01-01
The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…
A software engineering approach to expert system design and verification
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.; Goodwin, Mary Ann
1988-01-01
Software engineering design and verification methods for developing expert systems are not yet well defined. Integration of expert system technology into software production environments will require effective software engineering methodologies to support the entire life cycle of expert systems. The software engineering methods used to design and verify an expert system, RENEX, are discussed. RENEX demonstrates autonomous rendezvous and proximity operations, including replanning of trajectory events and subsystem fault detection, onboard a space vehicle during flight. The RENEX designers utilized a number of software engineering methodologies to deal with the complex problems inherent in this system. An overview is presented of the methods utilized. Details of the verification process receive special emphasis. The benefits and weaknesses of the methods for supporting the development life cycle of expert systems are evaluated, and recommendations are made based on the overall experiences with the methods.
Artificial intelligence approaches to software engineering
NASA Technical Reports Server (NTRS)
Johannes, James D.; Macdonald, James R.
1988-01-01
Artificial intelligence approaches to software engineering are examined. The software development life cycle is a sequence of not-so-well-defined phases. Improved techniques for developing systems have been formulated over the past 15 years, but pressure to reduce costs continues. Software development technology seems to be standing still. The primary objective of the knowledge-based approach to software development presented in this paper is to avoid problem areas that lead to schedule slippages, cost overruns, or software products that fall short of their desired goals. Identifying and resolving software problems early, often in the phase in which they first occur, has been shown to contribute significantly to reducing risks in software development. Software development is not a mechanical process but a basic human activity. It requires clear thinking, work, and rework to be successful. The artificial intelligence approaches to software engineering presented here support the software development life cycle by changing current practices and methods: these should be replaced by better techniques that improve the process of software development and the quality of the resulting products. The software development process can be structured into well-defined steps whose interfaces are standardized, supported, and checked by automated procedures that provide error detection, produce the documentation, and ultimately support the actual design of complex programs.
Software Assurance Curriculum Project Volume 2: Undergraduate Course Outlines
2010-08-01
Contents: Acknowledgments; Abstract; An Undergraduate Curriculum Focus on Software Assurance; Computer Science I; Computer Science II...confidence that can be integrated into traditional software development and acquisition process models. Thus, in addition to a technology focus...testing throughout the software development life cycle (SDLC); security and complexity—system development challenges: security failures
Product assurance policies and procedures for flight dynamics software development
NASA Technical Reports Server (NTRS)
Perry, Sandra; Jordan, Leon; Decker, William; Page, Gerald; Mcgarry, Frank E.; Valett, Jon
1987-01-01
The product assurance policies and procedures necessary to support flight dynamics software development projects for Goddard Space Flight Center are presented. The quality assurance and configuration management methods and tools for each phase of the software development life cycle are described, from requirements analysis through acceptance testing; maintenance and operation are not addressed.
Support for life-cycle product reuse in NASA's SSE
NASA Technical Reports Server (NTRS)
Shotton, Charles
1989-01-01
The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is an automated software-development estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
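The record above does not give COSTMODL's internal equations. As a hedged illustration of the kind of parametric estimate such a tool reports (effort, schedule, and a phase breakdown), the sketch below uses the basic COCOMO organic-mode formulas; the phase split is an invented placeholder, not COSTMODL's actual model.

    # Hedged sketch of a COCOMO-style estimate (effort, schedule, phase split).
    # Organic-mode constants are standard basic-COCOMO values; the phase
    # distribution below is illustrative only, not COSTMODL's internal model.

    def basic_cocomo_organic(kloc: float):
        effort_pm = 2.4 * kloc ** 1.05         # effort in person-months
        schedule_mo = 2.5 * effort_pm ** 0.38  # calendar schedule in months
        return effort_pm, schedule_mo

    # Assumed (illustrative) distribution of effort across life-cycle phases.
    PHASE_SPLIT = {"requirements": 0.06, "design": 0.16, "code": 0.42, "test": 0.36}

    if __name__ == "__main__":
        effort, schedule = basic_cocomo_organic(32.0)  # e.g., a 32 KLOC product
        print(f"Effort: {effort:.1f} person-months, schedule: {schedule:.1f} months")
        for phase, frac in PHASE_SPLIT.items():
            print(f"  {phase:<12} {effort * frac:5.1f} person-months")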
Information System Life-Cycle And Documentation Standards (SMAP DIDS)
NASA Technical Reports Server (NTRS)
1990-01-01
Although not a computer program, SMAP DIDS was written to provide a systematic, NASA-wide structure for documenting information system development projects. Each DID (data item description) outlines a document required for top-quality software development. When combined with management, assurance, and life-cycle standards, these standards protect all parties who participate in the design and operation of a new information system.
ERIC Educational Resources Information Center
Kocherla, Showry
2012-01-01
Information technology (IT) projects are considered successful if they are completed on time, within budget, and within scope. Even though the required tools and methodologies are in place, IT projects continue to fail at a higher rate. Current literature lacks explanation for success within the stages of the system development life-cycle (SDLC) such…
RT-Syn: A real-time software system generator
NASA Technical Reports Server (NTRS)
Setliff, Dorothy E.
1992-01-01
This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs, not only by reducing development time but, more importantly, by facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which currently dominate life-cycle costs.
Programming support environment issues in the Byron programming environment
NASA Technical Reports Server (NTRS)
Larsen, Matthew J.
1986-01-01
Issues are discussed which programming support environments need to address in order to successfully support software engineering. These concerns are divided into two categories. The first category, issues of how software development is supported by an environment, includes support of the full life cycle, methodology flexibility, and support of software reusability. The second category contains issues of how environments should operate, such as tool reusability and integration, user friendliness, networking, and use of a central data base. This discussion is followed by an examination of Byron, an Ada based programming support environment developed at Intermetrics, focusing on the solutions Byron offers to these problems, including the support provided for software reusability and the test and maintenance phases of the life cycle. The use of Byron in project development is described briefly, and some suggestions for future Byron tools and user written tools are presented.
NASA Technical Reports Server (NTRS)
Shull, Forrest; Feldmann, Raimund; Haingaertner, Ralf; Regardie, Myrna; Seaman, Carolyn
2007-01-01
It is often the case in software projects that when schedule and budget resources are limited, the Verification and Validation (V&V) activities suffer. Fewer V&V activities can be afforded and, moreover, short-term challenges can result in V&V activities being scaled back or dropped altogether. As a result, too often the default solution is to save activities for improving software quality until too late in the life-cycle, relying on late-term code inspections followed by thorough testing activities to reduce defect counts to acceptable levels. As many project managers realize, however, this is a resource-intensive way of achieving the required quality for software. The Full Life-cycle Defect Management Assessment Initiative, funded by NASA's Office of Safety and Mission Assurance under the Software Assurance Research Program, aims to address these problems by: improving the effectiveness of early life-cycle V&V activities to make their benefits more attractive to team leads (specifically, we focus on software inspection, a proven method that can be applied to any software work product, long before executable code has been developed); better communicating this effectiveness to software development teams, along with suggestions for parameters to improve in the future to increase effectiveness; and analyzing the impact of early life-cycle V&V on the effectiveness and cost required for late life-cycle V&V activities, such as testing, in order to make the tradeoffs more apparent. This white paper reports on an initial milestone in this work, the development of a preliminary model of inspection effectiveness across multiple NASA Centers. This model contributes toward reaching our project goals by: allowing an examination of inspection parameters, across different types of projects and different work products, for an analysis of factors that impact defect detection effectiveness; allowing a comparison of this NASA-specific model to existing recommendations in the literature regarding how to plan effective inspections; and forming a baseline model which can be extended to incorporate factors describing the numbers and types of defects that are missed by inspections, how such defects flow downstream through software development phases, and how effectively they can be caught by testing activities in the late stages of development. The model has been implemented in a prototype web-enabled decision-support tool which allows developers to enter their inspection data and receive feedback based on a comparison against the model. The tool also allows users to access reusable materials (such as checklists) from projects included in the baseline. Both the tool itself and the model underlying it will continue to be extended throughout the remainder of this initiative. As results of analyzing inspection effectiveness for defect containment are determined, they can be shared via the tool and also via updates to existing training courses on metrics and software inspections. Moreover, the tool will help satisfy key CMMI requirements for the NASA Centers, as it will enable NASA to take a global view across peer review results for various types of projects to identify systemic problems. This analysis can result in continuous improvements to the approach to verification.
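The sketch below is a hedged, generic illustration of the kind of feedback such a decision-support tool is described as providing: one inspection's defect-detection rate is compared against a baseline of prior inspections. The baseline numbers, thresholds, and interpretation are invented for illustration and are not the NASA model.

    # Hedged sketch (not the NASA model): compare one inspection against an
    # assumed baseline of prior inspections. All numbers are invented.

    from statistics import mean, pstdev

    # Assumed baseline: defects found per page of inspected work product.
    baseline_defects_per_page = [0.8, 1.1, 0.9, 1.4, 1.0, 1.2, 0.7, 1.3]

    def assess(defects_found: int, pages_inspected: float) -> str:
        rate = defects_found / pages_inspected
        mu = mean(baseline_defects_per_page)
        sigma = pstdev(baseline_defects_per_page)
        if rate < mu - sigma:
            return (f"{rate:.2f}/page is below baseline ({mu:.2f}±{sigma:.2f}); "
                    "the inspection may have missed defects.")
        if rate > mu + sigma:
            return (f"{rate:.2f}/page is above baseline ({mu:.2f}±{sigma:.2f}); "
                    "the work product may be unusually defect-prone.")
        return f"{rate:.2f}/page is within the baseline range ({mu:.2f}±{sigma:.2f})."

    if __name__ == "__main__":
        print(assess(defects_found=4, pages_inspected=12.0))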
Software Engineering Guidebook
NASA Technical Reports Server (NTRS)
Connell, John; Wenneson, Greg
1993-01-01
The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.
Life Cycle Assessment Software for Product and Process Sustainability Analysis
ERIC Educational Resources Information Center
Vervaeke, Marina
2012-01-01
In recent years, life cycle assessment (LCA), a methodology for assessment of environmental impacts of products and services, has become increasingly important. This methodology is applied by decision makers in industry and policy, product developers, environmental managers, and other non-LCA specialists working on environmental issues in a wide…
National Cycle Program (NCP) Common Analysis Tool for Aeropropulsion
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; Evans, A.
1999-01-01
Through the NASA/Industry Cooperative Effort (NICE) agreement, NASA Lewis and industry partners are developing a new engine simulation, called the National Cycle Program (NCP), which is the initial framework of NPSS. NCP is the first phase toward achieving the goal of NPSS. This new software supports the aerothermodynamic system simulation process for the full life cycle of an engine. The National Cycle Program (NCP) was written following the Object Oriented Paradigm (C++, CORBA). The software development process used was also based on the Object Oriented paradigm. Software reviews, configuration management, test plans, requirements, and design were all a part of the process used in developing NCP. Due to the many contributors to NCP, the stated software process was mandatory for building a common tool intended for use by so many organizations. The U.S. aircraft and airframe companies recognize NCP as the future industry standard for propulsion system modeling.
DOT National Transportation Integrated Search
2009-06-30
The project has been focused on National Transportation Communications for ITS Protocol : (NTCIP) research and testing across the entire life cycle of traffic operations, ITS, and statewide : communications deployments. This life cycle includes desig...
Analysis and specification tools in relation to the APSE
NASA Technical Reports Server (NTRS)
Hendricks, John W.
1986-01-01
Ada and the Ada Programming Support Environment (APSE) specifically address the phases of the system/software life cycle which follow after the user's problem has been translated into system and software development specifications. The waterfall model of the life cycle identifies the analysis and requirements definition phases as preceding program design and coding. Since Ada is a programming language and the APSE is a programming support environment, they are primarily targeted to support program (code) development, testing, and maintenance. The use of Ada-based or Ada-related specification languages (SLs) and program design languages (PDLs) can extend the use of Ada back into the software design phases of the life cycle. Recall that the standardization of the APSE as a programming support environment is only now happening after many years of evolutionary experience with diverse sets of programming support tools. Restricting consideration to one, or even a few, chosen specification and design tools could be a real mistake for an organization or a major project such as the Space Station, which will need to deal with an increasingly complex level of system problems. To require that everything be Ada-like, be implemented in Ada, run directly under the APSE, and fit into a rigid waterfall model of the life cycle would turn a promising support environment into a straitjacket for progress.
Industry best practices for the software development life cycle
DOT National Transportation Integrated Search
2007-11-01
In the area of software development, there are many different views of what constitutes a best practice. The goal of this project was to identify a set of industry best practice techniques that fit the needs of the Montana Department of Transportatio...
A conceptual model for megaprogramming
NASA Technical Reports Server (NTRS)
Tracz, Will
1990-01-01
Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain-specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified, and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.
IEEE Computer Society/Software Engineering Institute Software Process Achievement (SPA) Award 2009
2011-03-01
capabilities to our GDM. We also introduced software as a service (SaaS) as part of our technology solutions and have further enhanced our ability to...model PROSPER Infosys production support methodology; Q&P quality and productivity; R&D research and development; SaaS software as a service ... Software Development Life Cycle (SDLC); Table 10: Scientific Estimation Coverage by Service Line
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
This is the third of five volumes on Information System Life-Cycle and Documentation Standards which present a well organized, easily used standard for providing technical information needed for developing information systems, components, and related processes. This volume states the Software Management and Assurance Program documentation standard for a product specification document and for data item descriptions. The framework can be applied to any NASA information system, software, hardware, operational procedures components, and related processes.
Software engineering and the role of Ada: Executive seminar
NASA Technical Reports Server (NTRS)
Freedman, Glenn B.
1987-01-01
The objective was to introduce the basic terminology and concepts of software engineering and Ada. The life cycle model is reviewed. The goals and principles of software engineering are applied. An introductory understanding of the features of the Ada language is gained. Topics addressed include: the software crisis; the mandate of the Space Station Program; the software life cycle model; software engineering; and Ada under the software engineering umbrella.
The Knowledge-Based Software Assistant: Beyond CASE
NASA Technical Reports Server (NTRS)
Carozzoni, Joseph A.
1993-01-01
This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.
1990-02-01
inspections are performed before each formal review of each software life cycle phase. * Required software audits are performed. * The software is acceptable... Audits: Software audits are performed by SQA consistent with the general audit rules, and an audit report is prepared. Software Quality Inspection (SQI...DSD Software Development Method... DEFINITION OF ACRONYMS: MACH Methode d’Analyse et de Conception Hierarchisee
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
This is the second of five volumes of the Information System Life-Cycle and Documentation Standards. This volume provides a well-organized, easily used standard for management plans used in acquiring, assuring, and developing information systems and software, hardware, and operational procedures components, and related processes.
DDP - a tool for life-cycle risk management
NASA Technical Reports Server (NTRS)
Cornford, S. L.; Feather, M. S.; Hicks, K. A.
2001-01-01
At JPL we have developed, and implemented, a process for achieving life-cycle risk management. This process has been embodied in a software tool and is called Defect Detection and Prevention (DDP). The DDP process can be succinctly stated as: determine where we want to be, what could get in the way and how we will get there.
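As a hedged sketch of that style of risk arithmetic ("where we want to be" as weighted objectives, "what could get in the way" as failure modes, "how we will get there" as mitigations), the snippet below computes a residual-risk score. The objective names, weights, and effectiveness values are invented for illustration and do not reflect the DDP tool's actual data model.

    # Hedged sketch of DDP-style risk arithmetic. All names and numbers are
    # invented placeholders, not the DDP tool's real model or data.

    objectives = {"pointing_accuracy": 0.6, "downlink_rate": 0.4}  # relative weights

    # impact: fraction of each objective lost if the failure mode occurs
    failure_modes = {
        "thermal_drift":   {"likelihood": 0.30, "impact": {"pointing_accuracy": 0.5}},
        "buffer_overflow": {"likelihood": 0.20, "impact": {"downlink_rate": 0.8}},
    }

    # mitigation effectiveness: fractional reduction in a failure mode's likelihood
    mitigations = {"thermal_drift": 0.6, "buffer_overflow": 0.9}

    def residual_risk() -> float:
        total = 0.0
        for name, fm in failure_modes.items():
            likelihood = fm["likelihood"] * (1.0 - mitigations.get(name, 0.0))
            loss = sum(objectives[o] * frac for o, frac in fm["impact"].items())
            total += likelihood * loss
        return total

    if __name__ == "__main__":
        print(f"Expected weighted objective loss (residual risk): {residual_risk():.3f}")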
Reuse-Driven Software Processes Guidebook. Version 02.00.03
1993-11-01
a required system without unduly constraining the details of the solution. The Naval Research Laboratory Software Cost Reduction project developed...conventional manner. The emphasis is still on the development of "one-of-a-kind" systems and the phased completion and review of corresponding...Application Engineering to improve the life-cycle productivity of the total software development enterprise. The
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) during the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) select the information products which match or support the accepted processes and products of step 4; and (6) select the design methodology which produces the information products selected in step 5.
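A minimal sketch of steps 3 through 5 of that tailoring method follows: a quality-needs profile is translated into quality criteria, then into candidate processes and the information products that support them. The table contents are invented placeholders, not the mappings defined in the report.

    # Hedged sketch of the needs -> criteria -> processes -> products mapping.
    # All table entries are invented placeholders for illustration only.

    needs_to_criteria = {
        "high_reliability": ["fault_tolerance", "testability"],
        "long_lifetime":    ["maintainability", "portability"],
    }
    criteria_to_processes = {
        "fault_tolerance": ["failure-modes analysis", "independent V&V"],
        "testability":     ["unit test planning"],
        "maintainability": ["design inspections", "coding standards audit"],
        "portability":     ["interface definition reviews"],
    }
    process_to_products = {
        "failure-modes analysis":       ["FMEA report"],
        "independent V&V":              ["IV&V plan"],
        "unit test planning":           ["unit test plan"],
        "design inspections":           ["inspection records"],
        "coding standards audit":       ["audit checklist"],
        "interface definition reviews": ["interface control document"],
    }

    def tailor(quality_needs):
        criteria = {c for n in quality_needs for c in needs_to_criteria.get(n, [])}
        processes = {p for c in criteria for p in criteria_to_processes.get(c, [])}
        products = {d for p in processes for d in process_to_products.get(p, [])}
        return criteria, processes, products

    if __name__ == "__main__":
        for label, items in zip(("criteria", "processes", "products"),
                                tailor(["high_reliability", "long_lifetime"])):
            print(f"{label}: {sorted(items)}")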
Toward full life cycle control: Adding maintenance measurement to the SEL
NASA Technical Reports Server (NTRS)
Rombach, H. Dieter; Ulery, Bradford T.; Valett, Jon D.
1992-01-01
Organization-wide measurement of software products and processes is needed to establish full life cycle control over software products. The Software Engineering Laboratory (SEL)--a joint venture between NASA GSFC, the University of Maryland, and Computer Sciences Corporation--started measurement of software development more than 15 years ago. Recently, the measurement of maintenance was added to the scope of the SEL. In this article, the maintenance measurement program is presented as an addition to the already existing and well-established SEL development measurement program and evaluated in terms of its immediate benefits and long-term improvement potential. Immediate benefits of this program for the SEL include an increased understanding of the maintenance domain, the differences and commonalities between development and maintenance, and the cause-effect relationships between development and maintenance. Initial results from a sample maintenance study are presented to substantiate these benefits. The long-term potential of this program includes the use of maintenance baselines to better plan and manage future projects and to improve development and maintenance practices for future projects wherever warranted.
Software Assurance: Five Essential Considerations for Acquisition Officials
2007-05-01
• address security concerns in the software development life cycle (SDLC)? • Are there formal software quality...What threat modeling process, if any, is used when designing the software? What analysis, design, and construction tools are used by your software design...the-shelf (COTS), government off-the-shelf (GOTS), open-source, embedded, and legacy software. Attackers exploit unintentional vulnerabilities or
Software-Engineering Process Simulation (SEPS) model
NASA Technical Reports Server (NTRS)
Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.
1992-01-01
The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform postmortem assessments.
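The following is a hedged, minimal sketch of a system-dynamics-style project loop of the general kind SEPS simulates (workforce capacity, error generation, and rework feedback). Its structure and parameters are invented for illustration; they are not SEPS's actual equations.

    # Hedged sketch of a tiny system-dynamics-style project loop (workforce,
    # productivity, error generation, rework). Parameters are invented; this
    # is not the SEPS model.

    def simulate(tasks=1000.0, staff=8.0, productivity=1.2, error_fraction=0.15,
                 rework_share=0.25, dt=1.0, max_weeks=400):
        """Advance the project one time step at a time until all tasks are done."""
        done, rework_backlog, week = 0.0, 0.0, 0
        while done < tasks and week < max_weeks:
            capacity = staff * productivity * dt
            rework_effort = min(rework_backlog, rework_share * capacity)
            new_work = capacity - rework_effort
            new_errors = new_work * error_fraction      # defective portion of new work
            rework_backlog += new_errors - rework_effort
            done += (new_work - new_errors) + rework_effort
            week += 1
        return week, rework_backlog

    if __name__ == "__main__":
        weeks, backlog = simulate()
        print(f"Finished in ~{weeks} weeks with {backlog:.0f} task-equivalents of rework pending")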
Computer Software for Life Cycle Cost.
1987-04-01
AIR COMMAND AND STAFF COLLEGE STUDENT REPORT: COMPUTER SOFTWARE FOR LIFE CYCLE COST...obsolete), physical life (utility before physically wearing out), or application life (utility in a given function)." (7:5) The costs are usually
Development and Evaluation of the Effectiveness of Computer-Assisted Physics Instruction
ERIC Educational Resources Information Center
Rahman, Mohd. Jasmy Abd; Ismail, Mohd. Arif. Hj.; Nasir, Muhammad
2014-01-01
This study aims to design and develop interactive software for teaching and learning physics, covering motion and vector analysis. The study also assesses its effectiveness in the classroom and the learning motivation of SMA Pekanbaru's students. The software is developed using the ADDIE Model design and Life Cycle Model and built using the…
Concepts associated with a unified life cycle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; Peffers, Melissa S.; Tolle, Duane A.
There is a risk associated with most things in the world, and all things have a life cycle unto themselves, even brownfields. Many components can be described by a "cycle of life." For example, five such components are life-form, chemical, process, activity, and idea, although many more may exist. Brownfields may touch upon several of these life cycles. Each life cycle can be represented as independent software; therefore, a software technology structure is being formulated to allow for the seamless linkage of software products, representing various life-cycle aspects. Because classes of these life cycles tend to be independent of each other, the current research programs and efforts do not have to be revamped; therefore, this unified life-cycle paradigm builds upon current technology and is backward compatible while embracing future technology. Only when two of these life cycles coincide and one impacts the other is there connectivity and a transfer of information at the interface. The current framework approaches (e.g., FRAMES, 3MRA, etc.) have a design that is amenable to capturing (1) many of these underlying philosophical concepts to assure backward compatibility of diverse independent assessment frameworks and (2) linkage communication to help transfer the needed information at the points of intersection. The key effort will be to identify (1) linkage points (i.e., portals) between life cycles, (2) the type and form of data passing between life cycles, and (3) conditions when life cycles interact and communicate. This paper discusses design aspects associated with a unified life-cycle analysis, which can support not only brownfields but also other types of assessments.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
2010-06-01
cannot make a distinction between software maintenance and development" (Sharma, 2004). ISO/IEC 12207 Software Lifecycle Processes offers a guide to...synopsis of ISO/IEC 12207, Raghu Singh of the Federal Aviation Administration states "Whenever a software product needs modifications, the development...Corporation. Singh, R. (1998). International Standard ISO/IEC 12207 Software Life Cycle Processes. Washington: Federal Aviation Administration. The Joint
CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 1
2008-01-01
project management and the individual components of the software life-cycle model; it will be awarded for...software professionals that had been formally educated in software project management. The study indicated that our industry is lacking in program managers...software developments get bigger, more complicated, and more dependent on senior software professionals to get the project on the right path
Automation of the Environmental Control and Life Support System
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, J. Ray
1990-01-01
The objective of the Environmental Control and Life Support System (ECLSS) Advanced Automation Project is to recommend and develop advanced software for the initial and evolutionary Space Station Freedom (SSF) ECLS system which will minimize the crew and ground manpower needed for operations. Another objective is capturing ECLSS design and development knowledge for future missions. This report summarizes our results from Phase I, the ECLSS domain analysis phase, which we broke down into three steps: 1) analyze and document the baselined ECLS system, 2) envision as our goal an evolution to a fully automated regenerative life support system, built upon an augmented baseline, and 3) document the augmentations (hooks and scars) and advanced software systems which we see as necessary in achieving minimal manpower support for ECLSS operations. In addition, Phase I included development of an advanced software life cycle which will be used in the development of the software, in preparation for Phases II and III, the development and integration phases, respectively. Automated knowledge acquisition, engineering, verification, and testing tools can capture ECLSS development knowledge for future use, help develop more robust and complex software, provide feedback to the KBS tool community, and ensure proper visibility of our efforts.
An application generator for rapid prototyping of Ada real-time control software
NASA Technical Reports Server (NTRS)
Johnson, Jim; Biglari, Haik; Lehman, Larry
1990-01-01
The need to increase engineering productivity and decrease software life cycle costs in real-time system development establishes a motivation for a method of rapid prototyping. The design by iterative rapid prototyping technique is described. A tool which facilitates such a design methodology for the generation of embedded control software is described.
Software Development: A Product Life-Cycle Perspective
1990-05-01
management came from these magazines and journals: Journal of Advertising Research, Business Marketing, Journal of Systems Management, Journal of Marketing...Johanna. "Price is More Sensitive." Software Magazine, March 1988, 44. Andrews, Kirby. "Communications Imperatives for New Products." Journal of Advertising Research, October
Software engineering and simulation
NASA Technical Reports Server (NTRS)
Zhang, Shou X.; Schroer, Bernard J.; Messimer, Sherri L.; Tseng, Fan T.
1990-01-01
This paper summarizes the development of several automatic programming systems for discrete event simulation. Emphasis is given to the model development (problem definition) and model writing phases of the modeling life cycle.
ENCOMPASS: A SAGA based environment for the composition of programs and specifications, appendix A
NASA Technical Reports Server (NTRS)
Terwilliger, Robert B.; Campbell, Roy H.
1985-01-01
ENCOMPASS is an example integrated software engineering environment being constructed by the SAGA project. ENCOMPASS supports the specification, design, construction, and maintenance of efficient, validated, and verified programs in a modular programming language. The life cycle paradigm, schema of software configurations, and hierarchical library structure used by ENCOMPASS are presented. In ENCOMPASS, the software life cycle is viewed as a sequence of developments, each of which reuses components from the previous ones. Each development proceeds through the phases of planning, requirements definition, validation, design, implementation, and system integration. The components in a software system are modeled as entities which have relationships between them. An entity may have different versions, and different views of the same project are allowed. The simple entities supported by ENCOMPASS may be combined into modules which may be collected into projects. ENCOMPASS supports multiple programmers and projects using a hierarchical library system containing a workspace for each programmer, a project library for each project, and a global library common to all projects.
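As an illustration of the kind of structure described (versioned entities collected into modules and projects, with per-programmer workspaces, a project library, and a global library), the following sketch uses invented names and is not ENCOMPASS's actual schema.

```python
# Minimal sketch of versioned entities collected into modules and projects,
# with a per-programmer workspace, a per-project library, and a global library.
# Class and field names are illustrative, not ENCOMPASS's actual schema.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    versions: list = field(default_factory=list)   # e.g. source revisions

@dataclass
class Module:
    name: str
    entities: list = field(default_factory=list)

@dataclass
class Project:
    name: str
    modules: list = field(default_factory=list)
    project_library: dict = field(default_factory=dict)   # shared within the project
    workspaces: dict = field(default_factory=dict)         # one per programmer

global_library = {"list_utils": "v3"}        # components common to all projects

nav = Project("flight_nav")
nav.workspaces["alice"] = {"wip": ["kalman_filter.ada"]}
nav.modules.append(Module("estimation", [Entity("kalman_filter", ["1.0", "1.1"])]))

# A later development reuses a component from the previous one via the libraries.
nav.project_library["kalman_filter"] = nav.modules[0].entities[0].versions[-1]
print(nav.project_library, global_library)
```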
A Methodology for Cybercraft Requirement Definition and Initial System Design
2008-06-01
the software development concepts of the SDLC, requirements, use cases and domain modeling. It ...collectively as Software Development Life Cycle (SDLC) models. While there are numerous models that fit under the SDLC definition, all are based on... developed that provided expanded understanding of the domain, it is necessary to either update an existing domain model or create another domain
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
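The SAGE specification language and SAVAnT internals are not reproduced above, but the underlying idea of checking an executing program against prespecified requirements constraints can be illustrated with a minimal sketch; the constraint names and monitored states below are hypothetical.

```python
# Minimal sketch of runtime constraint monitoring: evaluate an executing
# program's state against prespecified requirements constraints and record
# violations. Constraint names and states are hypothetical.

class ConstraintMonitor:
    def __init__(self):
        self.constraints = []          # list of (name, predicate) pairs
        self.violations = []           # recorded violations

    def add(self, name, predicate):
        """Register a requirement constraint as a predicate over program state."""
        self.constraints.append((name, predicate))

    def check(self, state):
        """Evaluate every constraint against the current program state."""
        for name, predicate in self.constraints:
            if not predicate(state):
                self.violations.append((name, dict(state)))

monitor = ConstraintMonitor()
monitor.add("tank_pressure_in_range", lambda s: 0.0 <= s["pressure"] <= 150.0)
monitor.add("valve_closed_when_idle", lambda s: s["mode"] != "idle" or not s["valve_open"])

# Simulated execution trace of the monitored program (hypothetical states).
for state in [{"pressure": 75.0, "mode": "run", "valve_open": True},
              {"pressure": 162.0, "mode": "idle", "valve_open": True}]:
    monitor.check(state)

print(monitor.violations)   # both constraints fail on the second state
```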
General object-oriented software development
NASA Technical Reports Server (NTRS)
Seidewitz, Edwin V.; Stark, Mike
1986-01-01
Object-oriented design techniques are gaining increasing popularity for use with the Ada programming language. A general approach to object-oriented design is presented which synthesizes the principles of previous object-oriented methods into the overall software life-cycle, providing transitions from specification to design and from design to code. It therefore provides the basis for a general object-oriented development methodology.
Real-time software failure characterization
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Finelli, George B.
1990-01-01
A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on the modeling of real-time aerospace vehicle software reliability. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Jet Propulsion Laboratory (JPL) Deep Space Network (DSN) Data System implementation tasks is described. The resource estimation model modifies and combines a number of existing models. The model calibrates the task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics.
A Strategy for Improved System Assurance
2007-06-20
Quality (Measurements Life Cycle Safety, Security & Others) ISO/IEC 12207 * Software Life Cycle Processes ISO 9001 Quality Management System...14598 Software Product Evaluation Related ISO/IEC 90003 Guidelines for the Application of ISO 9001:2000 to Computer Software IEEE 12207 Industry...Implementation of International Standard ISO/IEC 12207 IEEE 1220 Standard for Application and Management of the System Engineering Process Use in
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
An OpenStudio Measure is a script that can manipulate an OpenStudio model and associated data to apply energy conservation measures (ECMs), run supplemental simulations, or visualize simulation results. The OpenStudio software development kit (SDK) and the accessibility of the Ruby scripting language make measure authorship accessible to both software developers and energy modelers. This paper discusses the life cycle of an OpenStudio Measure from development, testing, and distribution to application.
Software life cycle dynamic simulation model: The organizational performance submodel
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1985-01-01
The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
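The submodel's actual parameterized differential equations are not given in the abstract; the following is a minimal sketch, assuming a Rayleigh-type staffing profile and product accumulation proportional to applied effort, with purely illustrative parameter values.

```python
# Minimal sketch of a dynamic life-cycle submodel: staffing follows a
# Rayleigh-type curve and completed product accumulates in proportion to
# applied effort. Parameters (K, a, productivity, cost rate) are illustrative;
# the actual submodel equations are not reproduced here.
import math

K = 100.0            # total effort in staff-months (assumed)
a = 0.02             # shape parameter of the staffing curve (assumed)
productivity = 0.8   # product units produced per staff-month (assumed)
cost_per_staff_month = 10.0  # funding rate (assumed)

dt = 0.1
t, product, funding_spent = 0.0, 0.0, 0.0

while t < 36.0:                                   # simulate 36 months
    staff = K * 2 * a * t * math.exp(-a * t * t)  # Rayleigh staffing profile
    product += productivity * staff * dt          # product flow
    funding_spent += cost_per_staff_month * staff * dt
    t += dt

print(f"product completed: {product:.1f} units, funding spent: {funding_spent:.0f}")
```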
Ada education in a software life-cycle context
NASA Technical Reports Server (NTRS)
Clough, Anne J.
1986-01-01
Some of the experience gained from a comprehensive educational program undertaken at The Charles Stark Draper Lab. to introduce the Ada language and to transition modern software engineering technology into the development of Ada and non-Ada applications is described. Initially, a core group, which included managers, engineers, and programmers, received training in Ada. An Ada Office was established to assume the major responsibility for training, evaluation, acquisition and benchmarking of tools, and consultation on Ada projects. As a first step in this process, an in-house educational program was undertaken to introduce Ada to the Laboratory. Later, a software engineering course was added to the educational program as the need to address issues spanning the entire software life cycle became evident. Educational efforts to date are summarized, with an emphasis on the educational approach adopted. Finally, lessons learned in administering this program are addressed.
Software engineering from a Langley perspective
NASA Technical Reports Server (NTRS)
Voigt, Susan
1994-01-01
A brief introduction to software engineering is presented. The talk is divided into four sections beginning with the question 'What is software engineering', followed by a brief history of the progression of software engineering at the Langley Research Center in the context of an expanding computing environment. Several basic concepts and terms are introduced, including software development life cycles and maturity levels. Finally, comments are offered on what software engineering means for the Langley Research Center and where to find more information on the subject.
Artificial Intelligence Software Engineering (AISE) model
NASA Technical Reports Server (NTRS)
Kiss, Peter A.
1990-01-01
The American Institute of Aeronautics and Astronautics has initiated a committee on standards for Artificial Intelligence. Presented are the initial efforts of one of the working groups of that committee. A candidate model is presented for the development life cycle of knowledge based systems (KBSs). The intent is for the model to be used by the aerospace community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are shown and detailed as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
This is the fourth of five volumes on Information System Life-Cycle and Documentation Standards. This volume provides a well organized, easily used standard for assurance documentation for information systems and software, hardware, and operational procedures components, and related processes. The specifications are developed in conjunction with the corresponding management plans specifying the assurance activities to be performed.
Air Force Systems Engineering Assessment Model (AF SEAM) Management Guide, Version 2
2010-09-21
gleaned from experienced professionals who assisted with the model’s development. Examples of the references used include the following: • ISO/IEC...Defense Acquisition Guidebook, Chapter 4 • AFI 63-1201, Life Cycle Systems Engineering • IEEE/EIA 12207, Software Life Cycle Processes • Air...Selection criteria Reference Material: IEEE/EIA 12207, MIL-HDBK-514 Other Considerations: Modeling, simulation and analysis techniques can be
NASA Technical Reports Server (NTRS)
Callender, E. David; Steinbacher, Jody
1989-01-01
This is the fifth of five volumes on Information System Life-Cycle and Documentation Standards. This volume provides a well organized, easily used standard for management control and status reports used in monitoring and controlling the management, development, and assurance of information systems and software, hardware, and operational procedures components, and related processes.
Ada developers' supplement to the recommended approach
NASA Technical Reports Server (NTRS)
Kester, Rush; Landis, Linda
1993-01-01
This document is a collection of guidelines for programmers and managers who are responsible for the development of flight dynamics applications in Ada. It is intended to be used in conjunction with the Recommended Approach to Software Development (SEL-81-305), which describes the software development life cycle, its products, reviews, methods, tools, and measures. The Ada Developers' Supplement provides additional detail on such topics as reuse, object-oriented analysis, and object-oriented design.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
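The paper's calibrated network is not reproduced above; the sketch below only illustrates the general idea of predicting product quality from team skill, process maturity, and problem complexity with a small Bayesian network, using invented probabilities and exact enumeration.

```python
# Minimal sketch of a Bayesian belief network for software suitability.
# Structure follows the abstract (team skill, process maturity, problem
# complexity -> product quality), but all probabilities are invented for
# illustration; they are not the paper's calibrated values.
from itertools import product

# Priors over binary parent variables: True = favorable level.
priors = {"skill": 0.7, "maturity": 0.6, "complexity_low": 0.5}

# P(quality = high | skill, maturity, complexity_low), an assumed CPT.
cpt_quality = {
    (True, True, True): 0.95, (True, True, False): 0.80,
    (True, False, True): 0.70, (True, False, False): 0.50,
    (False, True, True): 0.60, (False, True, False): 0.40,
    (False, False, True): 0.35, (False, False, False): 0.15,
}

def p_quality_high(evidence=None):
    """Exact inference by enumeration over the three parent variables."""
    evidence = evidence or {}
    total = 0.0
    for skill, maturity, low in product([True, False], repeat=3):
        assignment = {"skill": skill, "maturity": maturity, "complexity_low": low}
        if any(evidence.get(k, v) != v for k, v in assignment.items()):
            continue  # inconsistent with observed evidence
        weight = 1.0
        for name, value in assignment.items():
            if name in evidence:
                continue  # observed variables contribute no prior weight
            weight *= priors[name] if value else 1.0 - priors[name]
        total += weight * cpt_quality[(skill, maturity, low)]
    return total

print(p_quality_high())                                          # prior prediction
print(p_quality_high({"skill": True, "complexity_low": False}))  # with evidence
```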
NASA Technical Reports Server (NTRS)
Broderick, Ron
1997-01-01
The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network development. The changes were to include evaluation tools that can be applied to neural networks at each phase of the software engineering life cycle. The result was a formal evaluation approach to increase the product quality of systems that use neural networks for their implementation.
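The report's actual matrix entries are not reproduced above; the sketch below only illustrates how such a methods-by-applications matrix could be represented and queried. The six AI methods and five application categories come from the abstract, while the cell values are invented.

```python
# Minimal sketch of a methods-by-applications matrix like the one described
# above. The six AI methods and five aircraft-automation categories come from
# the abstract; which pairings are marked promising is invented for
# illustration and is not the report's actual guidance.
methods = ["logical", "object_representation", "distributed",
           "uncertainty_management", "temporal", "neurocomputing"]
applications = ["pilot_vehicle_interface", "system_status_and_diagnosis",
                "situation_assessment", "automatic_flight_planning",
                "aircraft_flight_control"]

# matrix[application][method] = True if the pairing looks promising (assumed).
matrix = {app: {m: False for m in methods} for app in applications}
matrix["system_status_and_diagnosis"]["uncertainty_management"] = True
matrix["situation_assessment"]["neurocomputing"] = True
matrix["automatic_flight_planning"]["temporal"] = True

def candidate_methods(application):
    """List AI methods flagged as promising for a given application area."""
    return [m for m, ok in matrix[application].items() if ok]

print(candidate_methods("system_status_and_diagnosis"))
```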
Carbon footprint estimator, phase II : volume II - technical appendices.
DOT National Transportation Integrated Search
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...
Carbon footprint estimator, phase II : volume I - GASCAP model.
DOT National Transportation Integrated Search
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...
Process Improvement Should Link to Security: SEPG 2007 Security Track Recap
2007-09-01
the Systems Security Engineering Capability Maturity Model (SSE-CMM / ISO 21827) and its use in system software developments ...software development life cycle (SDLC)? 6. In what ways should process improvement support security in the SDLC? 1.2 PANEL RESOURCES For each... project management, and support practices through the use of the capability maturity models including the CMMI and the Systems Security
Product-oriented Software Certification Process for Software Synthesis
NASA Technical Reports Server (NTRS)
Nelson, Stacy; Fischer, Bernd; Denney, Ewen; Schumann, Johann; Richardson, Julian; Oh, Phil
2004-01-01
The purpose of this document is to propose a product-oriented software certification process to facilitate use of software synthesis and formal methods. Why is such a process needed? Currently, software is tested until deemed bug-free rather than proving that certain software properties exist. This approach has worked well in most cases, but unfortunately, deaths still occur due to software failure. Using formal methods (techniques from logic and discrete mathematics like set theory, automata theory and formal logic as opposed to continuous mathematics like calculus) and software synthesis, it is possible to reduce this risk by proving certain software properties. Additionally, software synthesis makes it possible to automate some phases of the traditional software development life cycle resulting in a more streamlined and accurate development process.
Cyber security best practices for the nuclear industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badr, I.
2012-07-01
When deploying software-based systems, such as digital instrumentation and controls for the nuclear industry, it is vital to include cyber security assessment as part of the architecture and development process. When integrating and delivering software-intensive systems for the nuclear industry, engineering teams should make use of a secure, requirements-driven software development life cycle, ensuring security compliance and optimum return on investment. Reliability protections, data loss prevention, and privacy enforcement provide a strong case for installing strict cyber security policies. (authors)
Integrating automated support for a software management cycle into the TAME system
NASA Technical Reports Server (NTRS)
Sunazuka, Toshihiko; Basili, Victor R.
1989-01-01
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed-forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control, and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
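As a minimal illustration of how a goal/question/metric hierarchy can be represented, the sketch below uses hypothetical goals, questions, and metrics; it is not the TAME or SQMAR implementation.

```python
# Minimal sketch of a Goal/Question/Metric (GQM) structure of the kind the
# methodology above builds on. The example goal, questions, and metrics are
# hypothetical; they are not taken from TAME or SQMAR.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Metric:
    name: str
    value: Optional[float] = None        # filled in as measurements arrive

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

    def unanswered(self):
        """Questions that still have at least one uncollected metric."""
        return [q for q in self.questions
                if any(m.value is None for m in q.metrics)]

goal = Goal(
    purpose="Improve reliability of the flight dynamics subsystem",
    questions=[
        Question("What is the current defect density?",
                 [Metric("defects_per_kloc"), Metric("kloc", 42.0)]),
        Question("Is rework effort decreasing?",
                 [Metric("rework_hours_per_release", 120.0)]),
    ],
)

print([q.text for q in goal.unanswered()])  # first question still needs data
```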
A Case Study of IV&V Cost Effectiveness
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; McCaugherty, Dan; Joshi, Tulasi; Callahan, John
1997-01-01
This paper looks at the Independent Verification and Validation (IV&V) of NASA's Space Shuttle Day of Launch I-Load Update (DoLILU) project. IV&V is defined. The system's development life cycle is explained. Data collection and analysis are described. DoLILU Issue Tracking Reports (DITRs) authored by IV&V personnel are analyzed to determine the effectiveness of IV&V in finding errors before the code, testing, and integration phase of the software development life cycle. The study's findings are reported along with the limitations of the study and planned future research.
A model for a knowledge-based system's life cycle
NASA Technical Reports Server (NTRS)
Kiss, Peter A.
1990-01-01
The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the Aerospace Community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.
NASA Technical Reports Server (NTRS)
1992-01-01
This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.
DOT National Transportation Integrated Search
2004-05-01
This manual provides basic instruction for using RealCost, software that was developed by the Federal Highway Administration (FHWA) to support the application of life-cycle cost analysis (LCCA) in the pavement project-level decisionmaking process. Th...
1984-01-01
between projects and between host development systems, and between projects, using an integrated Programming Support Environment. The discussion assumes...the availability of some of the facilities that were proposed for inclusion in the UK CHAPSE (CHILL Ada Programming Support Environment). Accession...life cycle of a product. In a programming support environment (PSE) with an underlying database, the software can be stored in the database and
Functional description of the ISIS system
NASA Technical Reports Server (NTRS)
Berman, W. J.
1979-01-01
Development of software for avionic and aerospace applications (flight software) is influenced by a unique combination of factors which includes: (1) length of the life cycle of each project; (2) necessity for cooperation between the aerospace industry and NASA; (3) the need for flight software that is highly reliable; (4) the increasing complexity and size of flight software; and (5) the high quality of the programmers and the tightening of project budgets. The interactive software invocation system (ISIS) described here is designed to overcome the problems created by this combination of factors.
NASA Technical Reports Server (NTRS)
Rosenberg, Linda
1997-01-01
If software is a critical element in a safety-critical system, it is imperative to implement a systematic approach to software safety as an integral part of the overall system safety program. The NASA-STD-8719.13A, "NASA Software Safety Standard", describes the activities necessary to ensure that safety is designed into software that is acquired or developed by NASA, and that safety is maintained throughout the software life cycle. A PDF version is available on the WWW from Lewis. A Guidebook that will assist in the implementation of the requirements in the Safety Standard is under development at the Lewis Research Center (LeRC). After completion, it will also be available on the WWW from Lewis.
Adopting Open Source Software to Address Software Risks during the Scientific Data Life Cycle
NASA Astrophysics Data System (ADS)
Vinay, S.; Downs, R. R.
2012-12-01
Software enables the creation, management, storage, distribution, discovery, and use of scientific data throughout the data lifecycle. However, the capabilities offered by software also present risks for the stewardship of scientific data, since future access to digital data is dependent on the use of software. From operating systems to applications for analyzing data, the dependence of data on software presents challenges for the stewardship of scientific data. Adopting open source software provides opportunities to address some of the proprietary risks of data dependence on software. For example, in some cases, open source software can be deployed to avoid licensing restrictions for using, modifying, and transferring proprietary software. The availability of the source code of open source software also enables the inclusion of modifications, which may be contributed by various community members who are addressing similar issues. Likewise, an active community that is maintaining open source software can be a valuable source of help, providing an opportunity to collaborate to address common issues facing adopters. As part of the effort to meet the challenges of software dependence for scientific data stewardship, risks from software dependence that exist at various times in the data lifecycle have been identified. The identification of these risks should enable the development of plans for mitigating software dependencies, where applicable, using open source software, and should improve understanding of software dependency risks for scientific data and how they can be reduced during the data life cycle.
Modernization of software quality assurance
NASA Technical Reports Server (NTRS)
Bhaumik, Gokul
1988-01-01
The customer's satisfaction depends not only on functional performance but also on the quality characteristics of the software products. An examination of this quality aspect of software products will provide a clear, well-defined framework for quality assurance functions, which improve the life-cycle activities of software development. Software developers must be aware of the following aspects which have been expressed by many quality experts: quality cannot be added on; the level of quality built into a program is a function of the quality attributes employed during the development process; and finally, quality must be managed. These concepts have guided our development of the following definition for a Software Quality Assurance function: Software Quality Assurance is a formal, planned approach of actions designed to evaluate the degree of an identifiable set of quality attributes present in all software systems and their products. This paper is an explanation of how this definition was developed and how it is used.
NASA Technical Reports Server (NTRS)
Mckay, C. W.; Bown, R. L.
1985-01-01
The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.
FUNDAMENTALS OF LIFE CYCLE ASSESSMENT AND OFF-THE-SHELF SOFTWARE DEMONSTRATION
As the name implies, Life Cycle Assessment (LCA) evaluates the entire life cycle of a product, process, activity, or service, not just simple economics at the time of delivery. This course on LCA covers the following issues:
Basic principles of LCA for use in producing, des...
Information systems analysis approach in hospitals: a national survey.
Wong, B K; Sellaro, C L; Monaco, J A
1995-03-01
A survey of 216 hospitals reveals that some hospitals do not conduct cost-benefit analyses or analyze possible adverse effects in feasibility studies. In determining and analyzing system requirements, external factors that initiate the transaction are not examined, and computer-aided software engineering (CASE) tools are seldom used. Some hospitals do not investigate the advantages and disadvantages of using in-house-developed software versus purchased software packages in the evaluation of alternatives. The survey finds that, overall, most hospitals follow the traditional systems development life cycle (SDLC) approach in analyzing information systems.
An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency
NASA Astrophysics Data System (ADS)
Phillips, Dewanne Marie
Software intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in the life cycle development to contribute a secure and dependable space system. Those who develop, implement, and operate software intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation will identify knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, various threats, defects, and vulnerabilities that impact space systems from hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrates that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, which is a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". 
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.
Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)
1999-01-01
Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.
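The CONFIG tool itself is not shown here; the toy sketch below merely illustrates combining discrete component modes with simple continuous dynamics in a stepped simulation, using invented component names, modes, and rates.

```python
# Toy sketch of mixing discrete component modes with simple continuous dynamics
# in a stepped simulation, in the spirit of the hybrid modeling described above.
# Component names, modes, and rates are invented; this is not the CONFIG tool.
class GasProcessor:
    def __init__(self):
        self.mode = "standby"          # discrete operational mode
        self.co2 = 1.0                 # continuous state: CO2 level (arbitrary units)

    def set_mode(self, mode):
        if mode not in ("standby", "scrubbing", "failed"):
            raise ValueError(mode)
        self.mode = mode

    def step(self, dt, co2_inflow):
        """Advance the continuous state; behavior depends on the discrete mode."""
        removal_rate = {"standby": 0.0, "scrubbing": 0.5, "failed": 0.0}[self.mode]
        self.co2 += (co2_inflow - removal_rate * self.co2) * dt

unit = GasProcessor()
for t in range(40):
    if unit.co2 > 1.5 and unit.mode == "standby":
        unit.set_mode("scrubbing")     # simple reactive control decision
    unit.step(dt=0.25, co2_inflow=0.2)

print(f"mode={unit.mode}, co2={unit.co2:.2f}")
```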
NASA Technical Reports Server (NTRS)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
Means of storage and automated monitoring of versions of text technical documentation
NASA Astrophysics Data System (ADS)
Leonovets, S. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The paper considers automation of the preparation, storage, and version-control monitoring of text design and program documentation by means of specialized software. Automation of the preparation of documentation is based on processing of the engineering data contained in the specifications and technical documentation. Data handling assumes the existence of strictly structured electronic documents prepared in widespread formats according to templates based on industry standards, and the automated generation of the program or design text document. The subsequent life cycle of the document and of the engineering data it contains is then controlled. At each stage of the life cycle, archival data storage is carried out. Performance studies of the use of different widespread document formats for automated monitoring and storage are given. The newly developed software and the workbenches available to the developer of instrumentation equipment are described.
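As a minimal sketch of version storage and change monitoring for text documents, the example below archives a new version only when a content hash changes; the identifiers and fields are illustrative, and this is not the specialized software described in the paper.

```python
# Minimal sketch of archival storage and change monitoring for text documents,
# using content hashes and timestamps. Document IDs and record fields are
# illustrative only.
import hashlib
from datetime import datetime, timezone

class DocumentArchive:
    def __init__(self):
        self.versions = {}   # doc_id -> list of version records

    def store(self, doc_id, text):
        """Archive a new version only if the content actually changed."""
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        history = self.versions.setdefault(doc_id, [])
        if history and history[-1]["sha256"] == digest:
            return history[-1]                       # unchanged, nothing to archive
        record = {"sha256": digest,
                  "stored_at": datetime.now(timezone.utc).isoformat(),
                  "text": text}
        history.append(record)
        return record

archive = DocumentArchive()
archive.store("spec-001", "Section 1. Requirements ...")
archive.store("spec-001", "Section 1. Requirements ...")          # no new version
archive.store("spec-001", "Section 1. Requirements (revised) ...")
print(len(archive.versions["spec-001"]))   # 2 archived versions
```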
A Language Translator for a Computer Aided Rapid Prototyping System.
1988-03-01
PROBLEM ... B. THE TRADITIONAL "WATERFALL LIFE CYCLE" ... C. RAPID PROTOTYPING...feature of everyday life for almost the entire industrialized world. Few governments or businesses function without the aid of computer systems. Com...engineering. B. THE TRADITIONAL "WATERFALL LIFE CYCLE" 1. Characteristics: The traditional method of software engineering is the "waterfall life cycle
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for environments such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an Intelligent User Interface for cost avoidance in setting up operational computer runs, a Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada languages for developing expert systems; and for methodologies such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.
Application of industry-standard guidelines for the validation of avionics software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shagnea, Anita M.
1990-01-01
The application of industry standards to the development of avionics software is discussed, focusing on verification and validation activities. It is pointed out that the procedures that guide the avionics software development and testing process are under increased scrutiny. The DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, are used by the FAA for certifying avionics software. To investigate the effectiveness of the DO-178A guidelines for improving the quality of avionics software, guidance and control software (GCS) is being developed according to the DO-178A development method. It is noted that, due to the extent of the data collection and configuration management procedures, any phase in the life cycle of a GCS implementation can be reconstructed. Hence, a fundamental development and testing platform has been established that is suitable for investigating the adequacy of various software development processes. In particular, the overall effectiveness and efficiency of the development method recommended by the DO-178A guidelines are being closely examined.
Product Definition Data Interface (PDDI) Product Specification
1991-07-01
syntax of the language gives a precise specification of the data without interpretation of it. M - Constituent Read Block. CSECT - Control Section, the...to conform to the PDDI Access Software’s internal data representation so that it may be further processed. JCL - Job Control Language - IBM language...software development and life cycle phases. QUALITY CONTROL - The planned and systematic application of all actions (management/technical) necessary to
NASA Technical Reports Server (NTRS)
Mahmot, Ron; Koslosky, John T.; Beach, Edward; Schwarz, Barbara
1994-01-01
The Mission Operations Division (MOD) at Goddard Space Flight Center builds Mission Operations Centers which are used by Flight Operations Teams to monitor and control satellites. Reducing system life cycle costs through software reuse has always been a priority of the MOD. The MOD's Transportable Payload Operations Control Center (TPOCC) development team established an extensive library of 14 subsystems with over 100,000 delivered source instructions of reusable, generic software components. Nine TPOCC-based control centers to date support 11 satellites and have achieved an average software reuse level of more than 75 percent. This paper shares experiences of how the TPOCC building blocks were developed and how building block developers, mission development teams, and users are all part of the process.
Gate-to-gate Life-Cycle Inventory of Hardboard Production in North America
Richard Bergman
2014-01-01
Whole-building life-cycle assessments (LCAs) populated by life-cycle inventory (LCI) data are incorporated into environmental footprint software tools for establishing green building certification by building professionals and code. However, LCI data on some wood building products are still needed to help fill gaps in the data and thus provide a more complete picture...
A Reference Model for Software and System Inspections. White Paper
NASA Technical Reports Server (NTRS)
He, Lulu; Shull, Forrest
2009-01-01
Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. Typical techniques include: (1) Testing, (2) Simulation, (3) Model checking, (4) Symbolic execution, (5) Management reviews, (6) Technical reviews, (7) Inspections, (8) Walk-throughs, (9) Audits, (10) Analysis (complexity analysis, control flow analysis, algorithmic analysis), and (11) Formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring do we need before applying them in the system engineering context? If not, what types of QA techniques are actually used at system level? And is there any room for improvement? After a brief examination of the system engineering literature (especially focused on NASA and DoD guidance) we found that: (1) System and software development processes interact with each other at different phases through the development life cycle; (2) Reviews are emphasized in both system and software development, and for some reviews (e.g., SRR, PDR, CDR) there are both system and software versions; (3) Analysis techniques are emphasized (e.g., Fault Tree Analysis, Preliminary Hazard Analysis) and some details are given about how to apply them; and (4) Reviews are expected to use the outputs of the analysis techniques; in other words, these particular analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between the Quality Assurance (QA) techniques at the system level and the software level.
DOT National Transportation Integrated Search
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...
10 CFR 436.15 - Formatting cost data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Procedures for Life Cycle Cost Analyses § 436.15 Formatting cost data. In establishing cost data under §§ 436... software referenced in the Life Cycle Cost Manual for the Federal Energy Management Program. ...
10 CFR 436.15 - Formatting cost data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Procedures for Life Cycle Cost Analyses § 436.15 Formatting cost data. In establishing cost data under §§ 436... software referenced in the Life Cycle Cost Manual for the Federal Energy Management Program. ...
10 CFR 436.15 - Formatting cost data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Procedures for Life Cycle Cost Analyses § 436.15 Formatting cost data. In establishing cost data under §§ 436... software referenced in the Life Cycle Cost Manual for the Federal Energy Management Program. ...
10 CFR 436.15 - Formatting cost data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Procedures for Life Cycle Cost Analyses § 436.15 Formatting cost data. In establishing cost data under §§ 436... software referenced in the Life Cycle Cost Manual for the Federal Energy Management Program. ...
Improving software quality - The use of formal inspections at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Bush, Marilyn
1990-01-01
The introduction of software formal inspections (Fagan inspections) at JPL for finding and fixing defects early in the software development life cycle is reviewed. It is estimated that, by the year 2000, software will rise to as much as 80 percent of the total effort on some projects. Software problems are especially important at NASA, as critical flight software must be error-free. It is shown that formal inspections are particularly effective at finding and removing defects having to do with clarity, correctness, consistency, and completeness. A very significant discovery was that code audits were not as effective at finding defects as code inspections.
Risk-Based Object Oriented Testing
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert
2000-01-01
Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
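As a minimal sketch of risk-based prioritization, the example below ranks classes by an assumed complexity-to-failure-likelihood mapping multiplied by a criticality weight; the class names, metric values, and thresholds are illustrative only, not the paper's data.

```python
# Minimal sketch of risk-based test prioritization: classes with higher
# complexity are assumed more failure-prone and are tested first.
def failure_likelihood(cyclomatic_complexity):
    """Map a complexity measure to a rough likelihood bucket (assumed scale)."""
    if cyclomatic_complexity > 20:
        return 0.9
    if cyclomatic_complexity > 10:
        return 0.5
    return 0.2

classes = [
    {"name": "OrbitPropagator", "complexity": 27, "criticality": 5},
    {"name": "TelemetryFormatter", "complexity": 12, "criticality": 3},
    {"name": "UnitConverter", "complexity": 4, "criticality": 2},
]

for c in classes:
    # Risk exposure = likelihood of failure x consequence of failure.
    c["risk"] = failure_likelihood(c["complexity"]) * c["criticality"]

for c in sorted(classes, key=lambda c: c["risk"], reverse=True):
    print(f'{c["name"]:20s} risk={c["risk"]:.1f}')
```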
Software safety - A user's practical perspective
NASA Technical Reports Server (NTRS)
Dunn, William R.; Corliss, Lloyd D.
1990-01-01
Software safety assurance philosophy and practices at the NASA Ames are discussed. It is shown that, to be safe, software must be error-free. Software developments on two digital flight control systems and two ground facility systems are examined, including the overall system and software organization and function, the software-safety issues, and their resolution. The effectiveness of safety assurance methods is discussed, including conventional life-cycle practices, verification and validation testing, software safety analysis, and formal design methods. It is concluded (1) that a practical software safety technology does not yet exist, (2) that it is unlikely that a set of general-purpose analytical techniques can be developed for proving that software is safe, and (3) that successful software safety-assurance practices will have to take into account the detailed design processes employed and show that the software will execute correctly under all possible conditions.
The Package-Based Development Process in the Flight Dynamics Division
NASA Technical Reports Server (NTRS)
Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil
1997-01-01
The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Improvement Paradigm shows that process improvement is an iterative process. Understanding, Assessing, and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have made a move toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the Quality Improvement Paradigm (QIP) takes on a more spiral approach rather than a waterfall one. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.
Moving Up the CMMI Capability and Maturity Levels Using Simulation
2008-01-01
Alternative Process Tools, Including NPV and ROI ... Figure 3: Top-Level View of the Full Life-Cycle Version of the IEEE 12207 PSIM, Including IV&V Layer ... Figure 4: Screenshot of the Incremental Version Model ... Figure 5: IEEE 12207 PSIM Showing the Top-Level Life-Cycle Phases ... Figure 6: IEEE 12207 ... Software Detailed Design for the IEEE 12207 Life-Cycle Process ... Figure 8: Incremental Life Cycle PSIM Configured for a Specific Project Using SEPG
Statistics of software vulnerability detection in certification testing
NASA Astrophysics Data System (ADS)
Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.
2018-05-01
The paper discusses practical aspects of introducing methods to detect software vulnerabilities into the day-to-day activities of an accredited testing laboratory. It presents the results of applying the vulnerability detection methods in studies of open source software and of software that is a test object of certification tests under information security requirements, including software for communication networks. Results of the study are given, showing the distribution of identified vulnerabilities by type of attack, country of origin, programming language used in development, method of vulnerability detection, etc. The experience of foreign information security certification systems related to the detection of vulnerabilities in certified software is analyzed. The main conclusion from the study is the need to implement secure software development practices in the development life cycle processes. Conclusions and recommendations for testing laboratories on the implementation of vulnerability analysis methods are laid down.
Applying an MVC Framework for The System Development Life Cycle with Waterfall Model Extended
NASA Astrophysics Data System (ADS)
Hardyanto, W.; Purwinarko, A.; Sujito, F.; Masturi; Alighiri, D.
2017-04-01
This paper describes an extension of the waterfall model using the MVC architectural pattern for software development. The waterfall model is the base model most widely used in software development, yet it still has many problems. A common issue is that data changes cause delays in the process itself; software security is another major concern. This study uses the PHP programming language for the implementation, although the model can be implemented in several programming languages using the same concept. The study is based on the MVC architecture so that it can improve the performance of both software development and maintenance, especially concerning security, validation, database access, and routing.
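A minimal sketch of the model-view-controller separation discussed above follows, written in Python rather than the paper's PHP, with hypothetical class names; validation lives in the model, rendering in the view, and routing in the controller.

```python
# Minimal sketch of the model-view-controller separation: the model owns data
# and validation, the view only renders, and the controller routes requests.
# Class and method names are hypothetical, not from the paper's implementation.
class UserModel:
    """Model: owns the data and validation rules (e.g. database access)."""
    def __init__(self):
        self._users = {}

    def add(self, name, email):
        if "@" not in email:
            raise ValueError("invalid email")       # validation lives in the model
        self._users[name] = email

    def all(self):
        return dict(self._users)

class UserView:
    """View: only renders data handed to it by the controller."""
    def render(self, users):
        return "\n".join(f"{name} <{email}>" for name, email in users.items())

class UserController:
    """Controller: routes requests, calls the model, picks the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def register(self, name, email):
        self.model.add(name, email)
        return self.view.render(self.model.all())

controller = UserController(UserModel(), UserView())
print(controller.register("ada", "ada@example.org"))
```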
NASA Technical Reports Server (NTRS)
Rodriguez, Juan Jared
2014-01-01
The purpose of this report is to detail the tasks accomplished as a NASA NIFS intern for the summer 2014 session. This internship opportunity is to develop an issue tracker Ruby on Rails web application to improve the communication of developmental anomalies between the Support Software Computer Software Configuration Item (CSCI) teams, System Build and Information Architecture. As many may know software development is an arduous, time consuming, collaborative effort. It involves nearly as much work designing, planning, collaborating, discussing, and resolving issues as effort expended in actual development. This internship opportunity was put in place to help alleviate the amount of time spent discussing issues such as bugs, missing tests, new requirements, and usability concerns that arise during development and throughout the life cycle of software applications once in production.
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
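The calibrated DSN/JPL model itself is not given in the abstract; the sketch below shows a parametric cost model of the same general shape, in which effort grows nonlinearly with size and is scaled by multipliers derived from questionnaire answers. The coefficients and multipliers are illustrative (COCOMO-like), not the model's calibrated values.

```python
# Minimal sketch of a parametric cost model: effort grows nonlinearly with
# size and is scaled by multipliers derived from questionnaire answers.
# Coefficients and multipliers are illustrative, not the DSN/JPL calibration.
def estimate_effort(ksloc, multipliers):
    a, b = 3.0, 1.12                      # assumed nominal calibration constants
    scale = 1.0
    for m in multipliers.values():        # each answer adjusts the estimate
        scale *= m
    return a * (ksloc ** b) * scale       # effort in staff-months

answers = {
    "difficulty": 1.15,        # harder-than-nominal task (assumed)
    "environment": 0.95,       # good development environment (assumed)
    "technology": 0.90,        # modern tooling (assumed)
}

print(f"{estimate_effort(32.0, answers):.0f} staff-months")
```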
Life cycle cost modeling of conceptual space vehicles
NASA Technical Reports Server (NTRS)
Ebeling, Charles
1993-01-01
This paper documents progress to date by the University of Dayton on the development of a life cycle cost model for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years in which a reliability and maintainability model was developed to the initial development of a life cycle cost model. Cost categories are initially patterned after NASA's three axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. The focus will be on operations and maintenance costs and other recurring costs. Secondary tasks performed concurrent with the development of the life cycle costing model include continual support and upgrade of the R&M model. The primary result of the completed research will be a methodology and a computer implementation of the methodology to provide for timely cost analysis in support of the conceptual design activities. The major objectives of this research are: to obtain and to develop improved methods for estimating manpower, spares, software and hardware costs, facilities costs, and other cost categories as identified by NASA personnel; to construct a life cycle cost model of a space transportation system for budget exercises and performance-cost trade-off analysis during the conceptual and development stages; to continue to support modifications and enhancements to the R&M model; and to continue to assist in the development of a simulation model to provide an integrated view of the operations and support of the proposed system.
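As a minimal sketch of the kind of roll-up such a model performs, the example below sums non-recurring costs over a simple work breakdown structure and adds recurring operations costs; all categories and figures are invented and are not outputs of the model described above.

```python
# Minimal sketch of rolling up a life cycle cost estimate across a simple
# work breakdown structure. Categories and dollar figures (in $M) are invented.
wbs = {
    "vehicle": {"airframe": 120.0, "propulsion": 210.0, "avionics": 95.0},
    "operations_and_maintenance": {"ground_ops": 40.0, "spares": 25.0,
                                   "software_maintenance": 18.0},
    "facilities": {"launch_site": 60.0},
}

def life_cycle_cost(wbs, recurring_per_flight=7.5, flights_per_year=6, years=10):
    nonrecurring = sum(sum(items.values()) for items in wbs.values())
    recurring = recurring_per_flight * flights_per_year * years
    return nonrecurring, recurring, nonrecurring + recurring

nonrec, rec, total = life_cycle_cost(wbs)
print(f"non-recurring: {nonrec:.0f}M, recurring: {rec:.0f}M, total: {total:.0f}M")
```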
An approach to software cost estimation
NASA Technical Reports Server (NTRS)
Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.
1984-01-01
A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.
Modeling of the competition life cycle using the software complex of cellular automata PyCAlab
NASA Astrophysics Data System (ADS)
Berg, D. B.; Beklemishev, K. A.; Medvedev, A. N.; Medvedeva, M. A.
2015-11-01
The aim of this work is to develop a numerical model of the life cycle of competition on the basis of the PyCAlab cellular automata software complex. The model is based on the general patterns of growth of various systems in resource-limited settings. Examples show that the period of transition from unlimited growth of the market agents to the stage of competitive growth takes quite a long time and may be characterized as monotonic. During this period two main strategies of competitive selection coexist: 1) capture of maximum market space at any reasonable cost; 2) saving by reducing costs. The obtained results lead to the conclusion that the competitive strategies of companies must combine the two mentioned types of behavior, and this issue needs to be given adequate attention in the academic literature on management. The created numerical model may be used for market research when developing strategies for the promotion of new goods and services.
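The abstract does not give the PyCAlab model itself; the following toy Python cellular automaton is an editorial sketch of the underlying idea only: two agent types expand into empty cells of a shared grid until the free resource (space) is exhausted, which is where the transition from unlimited to competitive growth appears. Grid size, step count, and update rule are illustrative assumptions.

# Toy cellular-automaton market model (not PyCAlab): two agent types expand
# into empty cells until the shared resource (free space) runs out.
import random

SIZE, STEPS = 40, 60
EMPTY, A, B = 0, 1, 2
grid = [[EMPTY] * SIZE for _ in range(SIZE)]
grid[5][5], grid[30][30] = A, B  # initial market entrants

def neighbours(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (i + di) % SIZE, (j + dj) % SIZE

for step in range(STEPS):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] == EMPTY:
                continue
            # each occupied cell tries to capture one random empty neighbour
            ni, nj = random.choice(list(neighbours(i, j)))
            if new[ni][nj] == EMPTY:
                new[ni][nj] = grid[i][j]
    grid = new
    occupied = sum(cell != EMPTY for row in grid for cell in row)
    if step % 10 == 0:
        print(f"step {step:3d}: occupied fraction = {occupied / SIZE**2:.2f}")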
A measurement system for large, complex software programs
NASA Technical Reports Server (NTRS)
Rone, Kyle Y.; Olson, Kitty M.; Davis, Nathan E.
1994-01-01
This paper describes measurement systems required to forecast, measure, and control activities for large, complex software development and support programs. Initial software cost and quality analysis provides the foundation for meaningful management decisions as a project evolves. In modeling the cost and quality of software systems, the relationship between the functionality, quality, cost, and schedule of the product must be considered. This explicit relationship is dictated by the criticality of the software being developed. This balance between cost and quality is a viable software engineering trade-off throughout the life cycle. Therefore, the ability to accurately estimate the cost and quality of software systems is essential to providing reliable software on time and within budget. Software cost models relate the product error rate to the percent of the project labor that is required for independent verification and validation. The criticality of the software determines which cost model is used to estimate the labor required to develop the software. Software quality models yield an expected error discovery rate based on the software size, criticality, software development environment, and the level of competence of the project and developers with respect to the processes being employed.
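As an editorial sketch of the kind of relationship described above (the paper's actual cost and quality models are not reproduced), the following Python fragment estimates expected latent defects from size and criticality and sizes the independent verification and validation labor fraction from criticality; the base rate, factors, and table values are assumed for illustration.

# Illustrative (not the paper's) defect-rate / IV&V sizing relationship.
# Base rates, factors, and the IV&V table are assumed values for the sketch.

CRITICALITY_FACTOR = {"low": 0.8, "medium": 1.0, "high": 1.3}
IVV_LABOR_FRACTION = {"low": 0.05, "medium": 0.10, "high": 0.20}

def expected_defects(ksloc, criticality="medium", base_rate_per_ksloc=6.0,
                     process_maturity=1.0):
    """Expected latent defects before verification activities."""
    return ksloc * base_rate_per_ksloc * CRITICALITY_FACTOR[criticality] / process_maturity

def ivv_effort(total_labor_pm, criticality="medium"):
    """Person-months allocated to independent verification and validation."""
    return total_labor_pm * IVV_LABOR_FRACTION[criticality]

print(expected_defects(120, "high"), ivv_effort(400, "high"))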
Implementation of a production Ada project: The GRODY study
NASA Technical Reports Server (NTRS)
Godfrey, Sara; Brophy, Carolyn Elizabeth
1989-01-01
The use of the Ada language and design methodologies that encourage full use of its capabilities have a strong impact on all phases of the software development project life cycle. At the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC), the Software Engineering Laboratory (SEL) conducted an experiment in parallel development of two flight dynamics systems in FORTRAN and Ada. The differences observed during the implementation, unit testing, and integration phases of the two projects are described and the lessons learned during the implementation phase of the Ada development are outlined. Included are recommendations for future Ada development projects.
A Survey and Evaluation of Software Quality Assurance.
1984-09-01
activities; 2. Cryptologic activities related to national security; 3. Command and control of military forces; 4. Equipment that is an integral part of a...Testing and Integration, and Performance or Operation (6). Figure 3 shows the software life cycle and the key outputs of the phases. The first phase to...defects. This procedure is considered the Checkout (13:09-91). Once coding is complete, the Testing and Integration Phase begins. Here the developed
System Re-engineering Project Executive Summary
1991-11-01
Management Information System (STAMIS) application. This project involved reverse engineering, evaluation of structured design and object-oriented design, and re-implementation of the system in Ada. This executive summary presents the approach to re-engineering the system, the lessons learned while going through the process, and issues to be considered in future tasks of this nature.... Computer-Aided Software Engineering (CASE), Distributed Software, Ada, COBOL, Systems Analysis, Systems Design, Life Cycle Development, Functional Decomposition, Object-Oriented
USER'S GUIDE FOR THE MUNICIPAL SOLID WASTE LIFE-CYCLE DATABASE
The report describes how to use the municipal solid waste (MSW) life cycle database, a software application with Microsoft Access interfaces, that provides environmental data for energy production, materials production, and MSW management activities and equipment. The basic datab...
Inclusion of LCCA in Alaska flexible pavement design software manual.
DOT National Transportation Integrated Search
2012-10-01
Life cycle cost analysis is a key part of selecting materials and techniques that optimize the service life of a pavement in terms of cost and performance. While the Alaska Flexible Pavement Design software has been in use since 2004, there is no ...
Development and weighting of a life cycle assessment screening model
NASA Astrophysics Data System (ADS)
Bates, Wayne E.; O'Shaughnessy, James; Johnson, Sharon A.; Sisson, Richard
2004-02-01
Nearly all life cycle assessment tools available today are high-priced, comprehensive, quantitative models requiring a significant amount of data collection and data input. In addition, most of the available software packages require a great deal of training time to learn how to operate the model software. Even after this time investment, results are not guaranteed because of the number of estimations and assumptions often necessary to run the model. As a result, product development and design teams and environmental specialists need a simplified tool that will allow for the qualitative evaluation and "screening" of various design options. This paper presents the development and design of a generic, qualitative life cycle screening model and demonstrates its applicability and ease of use. The model uses qualitative environmental, health and safety factors, based on site- or product-specific issues, to sensitize the overall results for a given set of conditions. The paper also evaluates the impact of different population input ranking values on model output. The final analysis is based on site- or product-specific variables. The user can then evaluate various design changes and the apparent impact or improvement on the environment, health and safety, compliance cost, and overall corporate liability. Major input parameters can be varied, and factors such as materials use, pollution prevention, waste minimization, worker safety, product life, environmental impacts, return on investment, and recycling are evaluated. The flexibility of the model format will be discussed in order to demonstrate the applicability and usefulness within nearly any industry sector. Finally, an example using audience input value scores will be compared to other population input results.
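A minimal editorial sketch of such a qualitative screening calculation follows: each design option receives 1-5 ratings per factor, and a population- or audience-derived weight vector sensitizes the aggregate score. The factor names, weights, and ratings are illustrative assumptions, not the paper's model.

# Hedged sketch of a qualitative LCA screening score: ratings per factor
# multiplied by population-derived weights. Values are illustrative only.

FACTORS = ["materials_use", "pollution_prevention", "waste_minimization",
           "worker_safety", "product_life", "recyclability"]

def screening_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Higher score = environmentally preferable under the chosen weighting."""
    total_weight = sum(weights[f] for f in FACTORS)
    return sum(weights[f] * ratings[f] for f in FACTORS) / total_weight

weights = {f: 1.0 for f in FACTORS}          # equal weighting baseline
weights["worker_safety"] = 2.0               # audience/population emphasis
design_a = {f: 3 for f in FACTORS} | {"recyclability": 5}
design_b = {f: 4 for f in FACTORS} | {"worker_safety": 2}
print(screening_score(design_a, weights), screening_score(design_b, weights))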
Configurable technology development for reusable control and monitor ground systems
NASA Technical Reports Server (NTRS)
Uhrlaub, David R.
1994-01-01
The control monitor unit (CMU) uses configurable software technology for real-time mission command and control, telemetry processing, simulation, data acquisition, data archiving, and ground operations automation. The base technology is currently planned for the following control and monitor systems: portable Space Station checkout systems; ecological life support systems; Space Station logistics carrier system; and the ground system of the Delta Clipper (SX-2) in the Single-Stage Rocket Technology program. The CMU makes extensive use of commercial technology to increase capability and reduce development and life-cycle costs. The concepts and technology are being developed by McDonnell Douglas Space and Defense Systems for the Real-Time Systems Laboratory at NASA's Kennedy Space Center under the Payload Ground Operations Contract. A second function of the Real-Time Systems Laboratory is development and utilization of advanced software development practices.
Code of Federal Regulations, 2012 CFR
2012-10-01
... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...
Code of Federal Regulations, 2013 CFR
2013-10-01
... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...
Code of Federal Regulations, 2014 CFR
2014-10-01
... cohesion. Component means an electronic element, device, or appliance (including hardware or software) that... and software version, is documented and maintained through the life-cycle of the products in use. Executive software means software common to all installations of a given electronic product. It generally is...
APPLICATION OF THE US DECISION SUPPORT TOOL FOR MATERIALS AND WASTE MANAGEMENT
EPA's National Risk Management Research Laboratory has led the development of a municipal solid waste decision support tool (MSW-DST). The computer software can be used to calculate life-cycle environmental tradeoffs and full costs of different waste management plans or recycling...
2011-05-27
frameworks 4 CMMI-DEV IEEE/ISO/IEC 15288/12207 Quality Assurance ©2011 Walz IEEE Life Cycle Processes & Artifacts • Systems Life Cycle Processes...TAG to ISO TC 176 Quality Management • Quality: ASQ, work experience • Software: three books, consulting, work experience • Systems: Telecom & DoD...and IEEE 730 SQA need to align. The P730 IEEE standards working group has expanded the scope of the SQA process standard to align with IS 12207
OSI for hardware/software interoperability
NASA Astrophysics Data System (ADS)
Wood, Richard J.; Harvey, Donald L.; Linderman, Richard W.; Gardener, Gary A.; Capraro, Gerard T.
1994-03-01
There is a need in public safety for real-time data collection and transmission from one or more sensors. The Rome Laboratory and the Ballistic Missile Defense Organization are pursuing an effort to bring the benefits of Open System Architectures (OSA) to embedded systems within the Department of Defense. When developed properly OSA provides interoperability, commonality, graceful upgradeability, survivability and hardware/software transportability to greatly minimize life cycle costs, integration and supportability. Architecture flexibility can be achieved to take advantage of commercial accomplishments by basing these developments on vendor-neutral commercially accepted standards and protocols.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
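The KSC tool itself is not described in detail in this abstract, so the following toy Python translator is an editorial sketch of the general idea only: a one-line boolean requirement (grammar assumed here as "OUTPUT coil IF a AND NOT b") is rendered as an instruction-list-style ladder rung. The spec grammar and mnemonics are assumptions, not the KSC prototype.

# Toy illustration only (not the KSC tool): translate a tiny boolean
# requirement spec into an instruction-list-style rendering of a ladder rung.
import re

def spec_to_rung(spec: str) -> list[str]:
    m = re.match(r"OUTPUT\s+(\w+)\s+IF\s+(.+)", spec.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognised spec: {spec!r}")
    coil, condition = m.group(1), m.group(2)
    rung, first = [], True
    for term in re.split(r"\s+AND\s+", condition, flags=re.IGNORECASE):
        negated = term.upper().startswith("NOT ")
        operand = term.split()[-1]
        op = "LD" if first else "AND"
        rung.append(f"{op}{'N' if negated else ''} {operand}")  # LDN/ANDN = negated contact
        first = False
    rung.append(f"OUT {coil}")
    return rung

print("\n".join(spec_to_rung("OUTPUT purge_valve IF tank_pressurised AND NOT door_open")))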
Applications of an OO Methodology and CASE to a DAQ System
NASA Astrophysics Data System (ADS)
Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.
The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software including model simulation, code generation and application deployment. This paper gives an overview of the method, CASE tool, DAQ components which have been developed and we relate our experiences with the method and tool, its integration into our development environment and the spiral lifecycle it supports.
Systems of Systems: Scaling Up the Development Process
2006-08-01
many organizations are using the TSP and growing evidence supports its efficacy [Davis 03, Grojeans 05, McAndrews 00, Pracchia 04, Rickets 05...January 2004. http://www.stsc.hill.af.mil/Crosstalk/2004/01/0401Pracchia.html. [Rickets 05] Rickets, Chris A. “A TSP Software Maintenance Life Cycle
Systems of Systems: Scaling up the Development Program
2006-08-01
many organizations are using the TSP and growing evidence supports its efficacy [Davis 03, Grojeans 05, McAndrews 00, Pracchia 04, Rickets 05...January 2004. http://www.stsc.hill.af.mil/Crosstalk/2004/01/0401Pracchia.html. [Rickets 05] Rickets, Chris A. “A TSP Software Maintenance Life Cycle
Enough to Go 'Round? Thinking Smart about Total Cost of Ownership
ERIC Educational Resources Information Center
McIntire, Todd
2006-01-01
Total cost of ownership or TCO refers to the life cycle of costs for technology, including both direct and indirect expenses. TCO includes costs incurred by capital (hardware, software, and facilities); administration and operation (planning, upgrade, replacement, and technical support); and end-user operation (staff development and user…
Multiscale Fatigue Life Prediction for Composite Panels
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Yarrington, Phillip W.; Arnold, Steven M.
2012-01-01
Fatigue life prediction capabilities have been incorporated into the HyperSizer Composite Analysis and Structural Sizing Software. The fatigue damage model is introduced at the fiber/matrix constituent scale through HyperSizer's coupling with NASA's MAC/GMC micromechanics software. This enables prediction of the micro-scale damage progression throughout stiffened and sandwich panels as a function of cycles, leading ultimately to simulated panel failure. The fatigue model implementation uses a cycle-jumping technique: rather than applying a fixed number of additional cycles, a local damage increment is specified and the number of additional cycles needed to reach this damage increment is calculated. In this way, the effect of stress redistribution due to damage-induced stiffness change is captured, but the fatigue simulations remain computationally efficient. The model is compared to experimental fatigue life data for two composite facesheet/foam core sandwich panels, demonstrating very good agreement.
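An editorial sketch of the cycle-jumping idea follows: instead of marching cycle by cycle, the simulation advances by the number of cycles needed to accumulate a prescribed local damage increment at the current damage rate. The damage-rate law below is a placeholder, not the MAC/GMC constituent model.

# Minimal cycle-jumping sketch; the dD/dN law is a placeholder assumption.

def damage_rate(stress_amplitude: float, damage: float) -> float:
    """Placeholder dD/dN law: grows with stress and with existing damage."""
    return 1e-6 * stress_amplitude ** 2 * (1.0 + 5.0 * damage)

def simulate(stress_amplitude: float, d_increment: float = 0.05) -> float:
    damage, cycles = 0.0, 0.0
    while damage < 1.0:
        rate = damage_rate(stress_amplitude, damage)
        cycles += d_increment / rate          # cycles to accumulate the increment
        damage = min(1.0, damage + d_increment)
        # stiffness/stress redistribution would be re-evaluated here at each jump
    return cycles

print(f"predicted life ~ {simulate(2.0):.0f} cycles")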
The advanced software development workstation project
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III; Pitman, Charles L.
1991-01-01
The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.
NASA Technical Reports Server (NTRS)
Shull, Forrest; Godfrey, Sally; Bechtel, Andre; Feldmann, Raimund L.; Regardie, Myrna; Seaman, Carolyn
2008-01-01
A viewgraph presentation describing the NASA Software Assurance Research Program (SARP) project, with a focus on full life-cycle defect management, is provided. The topics include: defect classification, data set and algorithm mapping, inspection guidelines, and tool support.
Trends in computer hardware and software.
Frankenfeld, F M
1993-04-01
Previously identified and current trends in the development of computer systems and in the use of computers for health care applications are reviewed. Trends identified in a 1982 article were increasing miniaturization and archival ability, increasing software costs, increasing software independence, user empowerment through new software technologies, shorter computer-system life cycles, and more rapid development and support of pharmaceutical services. Most of these trends continue today. Current trends in hardware and software include the increasing use of reduced instruction-set computing, migration to the UNIX operating system, the development of large software libraries, microprocessor-based smart terminals that allow remote validation of data, speech synthesis and recognition, application generators, fourth-generation languages, computer-aided software engineering, object-oriented technologies, and artificial intelligence. Current trends specific to pharmacy and hospitals are the withdrawal of vendors of hospital information systems from the pharmacy market, improved linkage of information systems within hospitals, and increased regulation by government. The computer industry and its products continue to undergo dynamic change. Software development continues to lag behind hardware, and its high cost is offsetting the savings provided by hardware.
Software risk management through independent verification and validation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Zhou, Tong C.; Wood, Ralph
1995-01-01
Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need an ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Questions/Metrics model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
NASA Technical Reports Server (NTRS)
1988-01-01
Integrated Environments for Large, Complex Systems is the theme for the RICIS symposium of 1988. Distinguished professionals from industry, government, and academia have been invited to participate and present their views and experiences regarding research, education, and future directions related to this topic. Within RICIS, more than half of the research being conducted is in the area of Computer Systems and Software Engineering. The focus of this research is on the software development life-cycle for large, complex, distributed systems. Within the education and training component of RICIS, the primary emphasis has been to provide education and training for software professionals.
A Holistic Approach to Systems Development
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2008-01-01
Introduces a Holistic and Iterative Design Process. Continuous process, but can be loosely divided into four stages. More effort is spent early in the design. Human-centered and Multidisciplinary. Emphasis on Life-Cycle Cost. Extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important. Disciplines that should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life cycle cost, application engineering, etc.
Sonic Onyx: Case Study of an Interactive Artwork
NASA Astrophysics Data System (ADS)
Ahmed, Salah Uddin; Jaccheri, Letizia; M'kadmi, Samir
Software supported art projects are increasing in numbers in recent years as artists are exploring how computing can be used to create new forms of live art. Interactive sound installation is one kind of art in this genre. In this article we present the development process and functional description of Sonic Onyx, an interactive sound installation. The objective is to show, through the life cycle of Sonic Onyx, how a software dependent interactive artwork involves its users and raises issues related to its interaction and functionalities.
Experience with case tools in the design of process-oriented software
NASA Astrophysics Data System (ADS)
Novakov, Ognian; Sicard, Claude-Henri
1994-12-01
In accelerator systems such as the CERN PS complex, process equipment has a lifetime which may exceed the typical life cycle of its related software. Taking into account the variety of such equipment, it is important to keep the analysis and design of the software in a system-independent form. This paper discusses the experience gathered in using commercial CASE tools for analysis, design and reverse engineering of different process-oriented software modules, with a principal emphasis on maintaining the initial analysis in a standardized form. Such tools have been in existence for several years, but this paper shows that they are not fully adapted to our needs. In particular, the paper stresses the problems of integrating such a tool into an existing database-dependent development chain, the lack of real-time simulation tools and of Object-Oriented concepts in existing commercial packages. Finally, the paper gives a broader view of software engineering needs in our particular context.
Code of Federal Regulations, 2011 CFR
2011-10-01
... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...
Code of Federal Regulations, 2014 CFR
2014-10-01
... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...
Code of Federal Regulations, 2012 CFR
2012-10-01
... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...
Code of Federal Regulations, 2013 CFR
2013-10-01
... electrical, mechanical, hardware, or software) that is part of a system or subsystem. Configuration..., including the hardware components and software version, is documented and maintained through the life-cycle... or compensates individuals to perform the duties specified in § 236.921 (a). Executive software means...
The US EPA is developing an open and publically available software program called the Human Exposure Model (HEM) to provide near-field exposure information for Life Cycle Impact Assessments (LCIAs). Historically, LCIAs have often omitted impacts from near-field sources of exposur...
Eliciting and Analyzing Quality Requirements: Management Influences on Software Quality Requirements
2005-03-01
their portable devices [Balfanz 04] can be applied to many of the quality requirements issues within the development life cycle: "Neither usability or...Systems. New York, NY: Wiley Computer Publishing, 2001. [Balfanz 04] Balfanz, D.; Durfee, G.; & Smetters, D. K. "Search of Usable Security: Five Lessons from
A Bibliography of the Personal Software Process (PSP) and the Team Software Process (TSP)
2009-10-01
Postmortem." Proceedings of the TSP Symposium (September 2007). http://www.sei.cmu.edu/tspsymposium/ Rickets, Chris; Lindeman, Robert; & Hodgins, Brad... Rickets, Chris A. "A TSP Software Maintenance Life Cycle." CrossTalk (March 2005). Rozanc, I. & Mahnic, V. "Teaching Software Quality with Emphasis on PSP
Richard D. Bergman
2015-01-01
Developing wood product LCI data helps construct product LCAs that are then incorporated into developing whole building LCAs in environmental footprint software such as the Athena Impact Estimator for Buildings (ASMI 2015). Conducting whole building LCAs provides points that go toward green building certification in rating systems such as LEED v4, Green Globes, and...
Fly-by-light technology development plan
NASA Technical Reports Server (NTRS)
Todd, J. R.; Williams, T.; Goldthorpe, S.; Hay, J.; Brennan, M.; Sherman, B.; Chen, J.; Yount, Larry J.; Hess, Richard F.; Kravetz, J.
1990-01-01
The driving factors and developments which make a fly-by-light (FBL) system viable are discussed. Documentation, analyses, and recommendations are provided on the major issues pertinent to facilitating the U.S. implementation of commercial FBL aircraft before the turn of the century. Areas of particular concern include ultra-reliable computing (hardware/software); electromagnetic environment (EME); verification and validation; optical techniques; life-cycle maintenance; and the basis and procedures for certification.
10 CFR 436.15 - Formatting cost data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Procedures for Life Cycle Cost Analyses § 436.15 Formatting cost data. In establishing cost data under §§ 436.16 and 436.17 and measuring cost effectiveness by the modes of analysis described by § 436.19 through... software referenced in the Life Cycle Cost Manual for the Federal Energy Management Program. ...
Impacts of software and its engineering on the carbon footprint of ICT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kern, Eva, E-mail: e.kern@umwelt-campus.de; Dick, Markus, E-mail: sustainablesoftwareblog@gmail.com; Naumann, Stefan, E-mail: s.naumann@umwelt-campus.de
2015-04-15
The energy consumption of information and communication technology (ICT) is still increasing. Even though several solutions regarding the hardware side of Green IT exist, the software contribution to Green IT is not well investigated. The carbon footprint is one way to rate the environmental impacts of ICT. In order to get an impression of the induced CO2 emissions of software, we will present a calculation method for the carbon footprint of a software product over its life cycle. We also offer an approach on how to integrate some aspects of carbon footprint calculation into software development processes and discuss impacts and tools regarding this calculation method. We thus show the relevance of energy measurements and the attention to impacts on the carbon footprint by software within Green Software Engineering.
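As an editorial illustration of the calculation shape described above (a life-cycle sum of phase energy times an emission factor), the following Python fragment uses made-up phase energies and an assumed grid emission factor; none of the numbers come from the paper.

# Hedged sketch of a life-cycle carbon estimate for a software product.
EMISSION_FACTOR_KG_PER_KWH = 0.4   # assumed grid average, illustrative only

life_cycle_energy_kwh = {
    "development": 1_500,           # workstations, build servers, CI
    "distribution": 200,            # download/hosting infrastructure
    "usage": 25_000,                # aggregated over installed base and lifetime
    "end_of_life": 50,              # data deletion, decommissioning
}

footprint_kg = {phase: e * EMISSION_FACTOR_KG_PER_KWH
                for phase, e in life_cycle_energy_kwh.items()}
total = sum(footprint_kg.values())
for phase, kg in footprint_kg.items():
    print(f"{phase:13s} {kg:9.1f} kg CO2  ({kg / total:5.1%})")
print(f"{'total':13s} {total:9.1f} kg CO2")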
On the engineering of crucial software
NASA Technical Reports Server (NTRS)
Pratt, T. W.; Knight, J. C.; Gregory, S. T.
1983-01-01
The various aspects of the conventional software development cycle are examined. This cycle was the basis of the augmented approach contained in the original grant proposal. This cycle was found inadequate for crucial software development, and the justification for this opinion is presented. Several possible enhancements to the conventional software cycle are discussed. Software fault tolerance, a possible enhancement of major importance, is discussed separately. Formal verification using mathematical proof is considered. Automatic programming is a radical alternative to the conventional cycle and is discussed. Recommendations for a comprehensive approach are presented, and various experiments which could be conducted in AIRLAB are described.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
IV&V Project Assessment Process Validation
NASA Technical Reports Server (NTRS)
Driskell, Stephen
2012-01-01
The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and, for IV&V process validation, correlates the IV&V findings to and from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.
CMMI(Registered) for Development, Version 1.3
2010-11-01
ISO/IEC 15288:2008 Systems and Software Engineering – System Life Cycle Processes [ISO 2008b] ISO/IEC 27001:2005 Information technology – Security...IEC 2005 International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 27001 Information Technology...International Electrotechnical Commission (ISO/IEC) body of standards. CMMs focus on improving processes in an organization. They contain the
NASA's Approach to Software Assurance
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2015-01-01
NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security, and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. As an umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.
Combined use of semantics and metadata to manage Research Data Life Cycle in Environmental Sciences
NASA Astrophysics Data System (ADS)
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida
2017-04-01
The use of metadata to contextualize datasets is quite extended in Earth System Sciences. There are some initiatives and available tools to help data managers choose the metadata standard that best fits their use cases, like the DCC Metadata Directory (http://www.dcc.ac.uk/resources/metadata-standards). In our use case, we have been gathering physical, chemical and biological data from a water reservoir since 2010. A well-formed metadata definition is crucial not only to contextualize our own data but also to integrate datasets from other sources like satellites or meteorological agencies. That is why we have chosen EML (Ecological Metadata Language), which integrates many different elements to define a dataset, including the project context, instrumentation and parameter definitions, and the software used to process the data, apply quality controls, and produce the publication details. Those metadata elements help both humans and machines to understand and process the dataset. However, the use of metadata is not enough to fully support the data life cycle, from the Data Management Plan (DMP) definition to publication and re-use. To do so, we need to define not only metadata and attributes but also the relationships between them, so semantics are needed. Ontologies, being a knowledge representation, can help define the elements of a research data life cycle, including the DMP, datasets, software, etc. They can also define how the different elements are related and how they interact. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the foundations of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. To develop the ontology we are using Protégé, a graphical ontology-development tool that supports a rich knowledge model and is open-source and freely available. To process and manage the ontology we are using Semantic MediaWiki, an extension of MediaWiki that supports semantic search, queries, and export of data in RDF. Our final goal is to integrate our data repository portal and semantic processing engine in order to have a complete system to manage the data life cycle stages and their relationships, including a machine-actionable DMP solution, dataset and software management, computing resources for processing and analysis, and publication features (DOI minting). This way we will be able to reproduce the full data life cycle chain, ensuring the FAIR+R principles.
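To make the dataset/DMP/software linkage concrete, here is a small editorial sketch using the Python rdflib library (an assumption for illustration; the authors work with EML, Protégé and Semantic MediaWiki) that records a few such relationships as RDF triples and serializes them as Turtle. The vocabulary namespace and resource names are hypothetical.

# Sketch only: linking life-cycle elements as RDF triples with rdflib.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/datalifecycle#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

dataset = EX.reservoirDataset2010
dmp = EX.reservoirDMP
software = EX.qcProcessingTool

g.add((dataset, RDF.type, EX.Dataset))
g.add((dataset, EX.describedBy, dmp))            # DMP contextualises the dataset
g.add((dataset, EX.processedWith, software))     # provenance link to the software
g.add((dataset, EX.title, Literal("Water reservoir physico-chemical series")))

print(g.serialize(format="turtle"))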
Methodology for Software Reliability Prediction. Volume 1.
1987-11-01
[Garbled figure/list residue: system categories included manned spacecraft, unmanned spacecraft, batch systems, airborne avionics, event control, and real-time closed-loop operations.] ...software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the...specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were
Preliminary design of the redundant software experiment
NASA Technical Reports Server (NTRS)
Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John
1985-01-01
The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments which are similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and the cost of the fault tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.
Dynamic visualization techniques for high consequence software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-02-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described along with the requirements constraint language after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.
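SAGE and SAVAnT are not reproduced in this abstract, so the following Python fragment is an editorial sketch of the general monitoring idea only: requirements constraints expressed as predicates are evaluated against a stream of execution-state snapshots and violations are reported. Constraint names, state variables, and limits are illustrative assumptions.

# Sketch of runtime constraint monitoring over execution-state snapshots.
constraints = {
    "valve closed while pressure high": lambda s: not (s["pressure"] > 90 and s["valve_open"]),
    "temperature within limits":        lambda s: 0 <= s["temperature"] <= 120,
}

def monitor(trace):
    """Return the constraint violations observed over an execution trace."""
    violations = []
    for step, state in enumerate(trace):
        for name, holds in constraints.items():
            if not holds(state):
                violations.append(f"step {step}: violated '{name}': {state}")
    return violations

trace = [
    {"pressure": 50, "valve_open": True, "temperature": 30},
    {"pressure": 95, "valve_open": True, "temperature": 35},   # violates constraint 1
]
print("\n".join(monitor(trace)) or "no violations")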
The Need for V&V in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
V&V is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment, addressing the entire domain or product line rather than a single application. The framework includes the activities of traditional application-level V&V, and extends these activities into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
2008-07-01
cycle Evolution of a system, product, service, project or other human-made entity from conception through retirement [ISO 12207]. Logical line of...012 [ISO 1995] International Organization for Standardization. ISO/IEC 12207:1995—Information technology—Software life cycle processes. http...definitions, authors were asked to use or align with already existing standards such as those available through ISO and IEEE when possible. Literature
Implications of Responsive Space on the Flight Software Architecture
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan
2006-01-01
The Responsive Space initiative has several implications for flight software that need to be addressed not only within the run-time element, but in the development infrastructure and software life-cycle process elements as well. The runtime element must at a minimum support Plug & Play, while the development and process elements need to incorporate methods to quickly generate the needed documentation, code, tests, and all of the artifacts required of flight quality software. Very rapid response times go even further, and imply little or no new software development, requiring instead the use of only predeveloped and certified software modules that can be integrated and tested through automated methods. These elements have typically been addressed individually with significant benefits, but it is when they are combined that they can have the greatest impact on Responsive Space. The Flight Software Branch at NASA's Goddard Space Flight Center has been developing the runtime, infrastructure and process elements needed for rapid integration with the Core Flight software System (CFS) architecture. The CFS architecture consists of three main components: the core Flight Executive (cFE), the component catalog, and the Integrated Development Environment (IDE). This paper will discuss the design of the components, how they facilitate rapid integration, and lessons learned as the architecture is utilized for an upcoming spacecraft.
Camañes, Víctor; Elduque, Daniel; Javierre, Carlos; Fernández, Ángel
2014-01-01
This paper analyzes the high relevance of material selection for the sustainable development of an LED weatherproof light fitting. The research reveals how this choice modifies current and future end of life scenarios and can reduce the overall environmental impact. This life cycle assessment has been carried out with Ecotool, a software program especially developed for designers to assess the environmental performance of their designs at the same time that they are working on them. Results show that special attention can be put on the recycling and reusing of the product from the initial stages of development. PMID:28788160
Camañes, Víctor; Elduque, Daniel; Javierre, Carlos; Fernández, Ángel
2014-08-11
This paper analyzes the high relevance of material selection for the sustainable development of an LED weatherproof light fitting. The research reveals how this choice modifies current and future end of life scenarios and can reduce the overall environmental impact. This life cycle assessment has been carried out with Ecotool, a software program especially developed for designers to assess the environmental performance of their designs at the same time that they are working on them. Results show that special attention can be put on the recycling and reusing of the product from the initial stages of development.
Air Force Space Command. Space and Missile Systems Center Standard. Configuration Management
2008-06-13
Aerospace Corporation report number TOR-2006(8583)-1. 3. Beneficial comments (recommendations, additions, deletions) and any pertinent data that...Engineering Drawing Practices IEEE STD 610.12 Glossary of Software Engineering Terminology, September 28, 1990 ISO/IEC 12207 Software Life...item, regardless of media, formally designated and fixed at a specific time during the configuration item's life cycle. (Source: ISO/IEC 12207
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction
Venkatesan, R.
2016-01-01
Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, since it saves time and budget by detecting defects early and delivering a product without defects to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.
Kumudha, P; Venkatesan, R
Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, since it saves time and budget by detecting defects early and delivering a product without defects to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.
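As a baseline editorial sketch of an RBF-network defect classifier (Gaussian features around k-means centres with a logistic read-out), the fragment below uses NumPy and scikit-learn on synthetic stand-in metrics; the ADBBO optimization and cost sensitivity of the paper are not reproduced, and the data are synthetic, purely for illustration.

# Baseline RBFNN sketch: k-means centres + Gaussian features + logistic read-out.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centres, gamma):
    # squared Euclidean distance from every sample to every centre
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                        # stand-in code metrics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0.8).astype(int)

centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centres, gamma=0.5)
clf = LogisticRegression(max_iter=1000).fit(Phi, y)  # class_weight could add cost sensitivity
print("training accuracy:", clf.score(Phi, y))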
Application of Lightweight Formal Methods to Software Security
NASA Technical Reports Server (NTRS)
Gilliam, David P.; Powell, John D.; Bishop, Matt
2005-01-01
Formal specification and verification of security has proven a challenging task. There is no single method that has proven feasible. Instead, an integrated approach which combines several formal techniques can increase the confidence in the verification of software security properties. Such an approach, which specifies security properties in a library that can be reused by two instruments, and the methodologies developed for the National Aeronautics and Space Administration (NASA) at the Jet Propulsion Laboratory (JPL), are described herein. The Flexible Modeling Framework (FMF) is a model-based verification instrument that uses Promela and the SPIN model checker. The Property Based Tester (PBT) uses TASPEC and a Text Execution Monitor (TEM). They are used to reduce vulnerabilities and unwanted exposures in software during the development and maintenance life cycles.
A Content Markup Language for Data Services
NASA Astrophysics Data System (ADS)
Noviello, C.; Acampa, P.; Mango Furnari, M.
Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves a strong cooperation between domain experts and software developers. Furthermore, the emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), did not really solve the problems faced in a real distributed and cooperating setting. In this chapter the authors' efforts to design and deploy a distributed and cooperating content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer. It allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. This chapter also reports some of the experiences gained in deploying the developed framework in a cultural heritage dissemination setting.
Ground Systems Development Environment (GSDE) software configuration management
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report presents a review of the software configuration management (CM) plans developed for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC). The scope of the CM assessed in this report is the Systems Integration and Testing Phase of the Ground Systems development life cycle. This is the period following coding and unit test and preceding delivery to operational use. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the SSTF and the SSCC, and the target systems for the SSCC and SSTF. This is the last report in the series. The focus of this report is on the CM plans developed by the contractors for the Mission Systems Contract (MSC) and the Training Systems Contract (TSC). CM requirements are summarized and described in terms of operational software development. The software workflows proposed in the TSC and MSC plans are reviewed in this context, and evaluated against the CM requirements defined in earlier study reports. Recommendations are made to improve the effectiveness of CM while minimizing its impact on the developers.
Advanced software integration: The case for ITV facilities
NASA Technical Reports Server (NTRS)
Garman, John R.
1990-01-01
The array of technologies and methodologies involved in the development and integration of avionics software has moved almost as rapidly as computer technology itself. Future avionics systems involve major advances and risks in the following areas: (1) Complexity; (2) Connectivity; (3) Security; (4) Duration; and (5) Software engineering. From an architectural standpoint, the systems will be much more distributed, involve session-based user interfaces, and have the layered architectures typified in the layers-of-abstraction concepts popular in networking. Typified in the NASA Space Station Freedom will be the highly distributed nature of software development itself. Systems composed of independent components developed in parallel must be bound by rigid standards and interfaces, and by clean requirements and specifications. Avionics software provides a challenge in that it cannot be flight tested until the first time it literally flies. It is the binding of requirements for such an integration environment into the advances and risks of future avionics systems that forms the basis of the presented concept and the basic Integration, Test, and Verification concept within the development and integration life cycle of Space Station Mission and Avionics systems.
NASA Technical Reports Server (NTRS)
Allen, B. Danette
1998-01-01
In the traditional 'waterfall' model of the software project life cycle, the Requirements Phase ends and flows into the Design Phase, which ends and flows into the Development Phase. Unfortunately, the process rarely, if ever, works so smoothly in practice. Instead, software developers often receive new requirements, or modifications to the original requirements, well after the earlier project phases have been completed. In particular, projects with shorter than ideal schedules are highly susceptible to frequent requirements changes, as the software requirements analysis phase is often forced to begin before the overall system requirements and top-level design are complete. This results in later modifications to the software requirements, even though the software design and development phases may be complete. Requirements changes received in the later stages of a software project inevitably lead to modification of existing developed software. Presented here is a series of software design techniques that can greatly reduce the impact of last-minute requirements changes. These techniques were successfully used to add built-in flexibility to two complex software systems in which the requirements were expected to (and did) change frequently. These large, real-time systems were developed at NASA Langley Research Center (LaRC) to test and control the Lidar In-Space Technology Experiment (LITE) instrument which flew aboard the space shuttle Discovery as the primary payload on the STS-64 mission.
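The specific LITE techniques are not enumerated in this abstract; the following Python fragment is an editorial sketch of one generic flexibility technique in that spirit: keeping requirement-driven limits in a data table so that a late requirements change becomes a table edit rather than a control-flow change. The parameter names and limits are hypothetical, not the LITE flight code.

# Generic illustration of table-driven flexibility (hypothetical parameters).
TELEMETRY_LIMITS = {           # parameter: (low, high) -- example values only
    "laser_temp_C":   (10.0, 40.0),
    "bus_voltage_V":  (26.0, 30.0),
}

def check_telemetry(sample: dict) -> list[str]:
    """Return out-of-limit messages; new parameters need only a table entry."""
    alerts = []
    for name, (low, high) in TELEMETRY_LIMITS.items():
        value = sample.get(name)
        if value is None or not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

print(check_telemetry({"laser_temp_C": 45.2, "bus_voltage_V": 28.1}))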
Integrated Advanced Sounding Unit-A (AMSU-A). Configuration Management Plan
NASA Technical Reports Server (NTRS)
Cavanaugh, J.
1996-01-01
The purpose of this plan is to identify the baseline to be established during the development life cycle of the integrated AMSU-A, and define the methods and procedures which Aerojet will follow in the implementation of configuration control for each established baseline. Also this plan establishes the Configuration Management process to be used for the deliverable hardware, software, and firmware of the Integrated AMSU-A during development, design, fabrication, test, and delivery.
Quantifying the Relationship between AMC Resources and U.S. Army Materiel Readiness
1989-08-25
Resource Management report 984 for the same period. Insufficient data precluded analysis of the OMA PEs Total Package Fielding and Life Cycle Software...procurement, had the greatest failure rates when subjected to the statistical tests merely because of the reduced number of data pairs. Analyses of...ENGINEERING DEVELOPMENT 6.5 - MANAGEMENT AND SUPPORT 6.7 - OPERATIONAL SYSTEM DEVELOPMENT P2 - GENERAL PURPOSE FORCES P3 - INTELLIGENCE AND COMMUNICATIONS P7
Software Requirements Specification for an Ammunition Management System
1986-09-01
thesis takes the form of a software requirements specification. Such a specification, according to Pressman [Ref. 7], establishes a complete...defined by Pressman, is depicted in Figure 1.1 (Generalized Software Life Cycle). The common thread which binds the various phases together...application of software engineering principles requires an established methodology. This methodology, according to Pressman [Ref. 8:p. 151], is an
COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL
NASA Technical Reports Server (NTRS)
Roush, G. B.
1994-01-01
The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Tailoring COSTMODL to an organization's particular environment can yield significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL is the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse-sensitive with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without having prior training or separate documentation. The implementation of these functions, along with the customization feature, into one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo Professional 5.0 for recompilation. An executable is provided on the distribution diskettes. COSTMODL requires 512K RAM. The standard distribution medium for COSTMODL is three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. COSTMODL was developed in 1991. IBM PC is a registered trademark of International Business Machines. Borland and Turbo Pascal are registered trademarks of Borland International, Inc. Turbo Professional is a trademark of TurboPower Software. MS-DOS is a registered trademark of Microsoft Corporation.
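To make the phase-oriented estimation idea above concrete, the sketch below computes effort and schedule with the published Basic COCOMO equations, one of the model families COSTMODL incorporates. The coefficients are the textbook Basic COCOMO values and the 32 KLOC project is hypothetical; this is an illustration of the underlying arithmetic, not COSTMODL's actual implementation or its KISS, Intermediate/Ada COCOMO, or Incremental models.

```python
# Illustrative Basic COCOMO effort/schedule estimate (not COSTMODL itself).
# Coefficients are the published Basic COCOMO values; the project data are made up.

COEFFICIENTS = {
    # mode: (a, b, c, d) where Effort = a * KLOC**b person-months
    #                      and Schedule = c * Effort**d months
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in months) for a size in KLOC."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

if __name__ == "__main__":
    effort, schedule = basic_cocomo(32.0, "embedded")   # hypothetical 32 KLOC flight-style project
    print(f"effort   = {effort:6.1f} person-months")
    print(f"schedule = {schedule:5.1f} months, average staff = {effort / schedule:.1f}")
```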
Integrated Modeling Environment
NASA Technical Reports Server (NTRS)
Mosier, Gary; Stone, Paul; Holtery, Christopher
2006-01-01
The Integrated Modeling Environment (IME) is a software system that establishes a centralized Web-based interface for integrating people (who may be geographically dispersed), processes, and data involved in a common engineering project. The IME includes software tools for life-cycle management, configuration management, visualization, and collaboration.
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
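As a rough illustration of the certification idea described above, the sketch below assumes the MTTF grows geometrically with each corrected failure (MTTF_k = M0 * R^k) and fits that model by ordinary least squares to the logarithms of observed interfailure times. The interfailure times are invented, and this is not the estimator derivation given in the report, which also compares least squares against maximum likelihood.

```python
# Sketch of a Cleanroom-style certification computation (assumption: MTTF grows
# geometrically with each corrected failure, MTTF_k = M0 * R**k, fit by least
# squares on log interfailure times). Illustration only, not the report's estimators.
import math

def certify_mttf(interfail_times):
    """Fit log(t_k) = log(M0) + k*log(R) by ordinary least squares and return
    (M0, R, predicted MTTF after the last observed failure). Needs >= 2 points."""
    n = len(interfail_times)
    xs = list(range(n))
    ys = [math.log(t) for t in interfail_times]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    m0, r = math.exp(intercept), math.exp(slope)
    return m0, r, m0 * r ** n

# Hypothetical interfailure times (hours) observed during statistical usage testing:
times = [4.0, 6.5, 5.0, 12.0, 19.0, 31.0, 52.0]
m0, r, mttf_now = certify_mttf(times)
print(f"M0 = {m0:.1f} h, growth factor R = {r:.2f}, current MTTF estimate = {mttf_now:.0f} h")
```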
Software project management tools in global software development: a systematic mapping study.
Chadli, Saad Yasser; Idri, Ali; Ros, Joaquín Nicolás; Fernández-Alemán, José Luis; de Gea, Juan M Carrillo; Toval, Ambrosio
2016-01-01
Global software development (GSD), a growing trend in the software industry, is characterized by a highly distributed environment. Performing software project management (SPM) in such conditions implies the need to overcome new limitations resulting from cultural, temporal and geographic separation. The aim of this research is to discover and classify the various tools mentioned in the literature that provide GSD project managers with support and to identify in what way they support group interaction. A systematic mapping study has been performed by means of automatic searches in five sources. We have then synthesized the data extracted and presented the results of this study. A total of 102 tools were identified as being used in SPM activities in GSD. We have classified these tools according to the software life cycle process on which they focus and how they support the 3C collaboration model (communication, coordination and cooperation). The majority of the tools found are standalone tools (77%). A small number of platforms (8%) also offer a set of interacting tools that cover the software development life cycle. Results also indicate that SPM areas in GSD are not adequately supported by corresponding tools and deserve more attention from tool builders.
Formal Methods Case Studies for DO-333
NASA Technical Reports Server (NTRS)
Cofer, Darren; Miller, Steven P.
2014-01-01
RTCA DO-333, Formal Methods Supplement to DO-178C and DO-278A provides guidance for software developers wishing to use formal methods in the certification of airborne systems and air traffic management systems. The supplement identifies the modifications and additions to DO-178C and DO-278A objectives, activities, and software life cycle data that should be addressed when formal methods are used as part of the software development process. This report presents three case studies describing the use of different classes of formal methods to satisfy certification objectives for a common avionics example - a dual-channel Flight Guidance System. The three case studies illustrate the use of theorem proving, model checking, and abstract interpretation. The material presented is not intended to represent a complete certification effort. Rather, the purpose is to illustrate how formal methods can be used in a realistic avionics software development project, with a focus on the evidence produced that could be used to satisfy the verification objectives found in Section 6 of DO-178C.
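The toy sketch below gestures at the model-checking class of formal methods discussed in the report: it exhaustively enumerates the reachable states of a small, invented mode logic and checks a safety property ("HDG and NAV lateral modes are never active together"). It is not the report's dual-channel Flight Guidance System model, and a real certification effort would use a mature model checker rather than hand-rolled state exploration.

```python
# Toy illustration of explicit-state model checking (not the DO-333 case study model):
# enumerate reachable states of a small mode logic and check a safety property.
from collections import deque

EVENTS = ("hdg_pressed", "nav_pressed", "nav_signal_lost")

def step(state, event):
    """Transition relation of the invented mode logic: state = (hdg_active, nav_active)."""
    hdg, nav = state
    if event == "hdg_pressed":        # HDG toggles and deselects NAV
        return (not hdg, False)
    if event == "nav_pressed":        # NAV engages and deselects HDG
        return (False, True)
    if event == "nav_signal_lost":    # losing the NAV source drops NAV
        return (hdg, False)
    return state

def safe(state):
    hdg, nav = state
    return not (hdg and nav)          # mutual exclusion of the two lateral modes

init = (False, False)                 # neither mode armed: basic ROLL mode
seen, frontier, violations = {init}, deque([init]), []
while frontier:                       # breadth-first exploration of reachable states
    state = frontier.popleft()
    if not safe(state):
        violations.append(state)
    for event in EVENTS:
        nxt = step(state, event)
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(f"{len(seen)} reachable states, {len(violations)} property violations")
```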
Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent
NASA Astrophysics Data System (ADS)
Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.
2014-06-01
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Software Formal Inspections Standard
NASA Technical Reports Server (NTRS)
1993-01-01
This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.
NASA Software Documentation Standard
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.
The Robust Software Feedback Model: An Effective Waterfall Model Tailoring for Space SW
NASA Astrophysics Data System (ADS)
Tipaldi, Massimo; Gotz, Christoph; Ferraguto, Massimo; Troiano, Luigi; Bruenjes, Bernhard
2013-08-01
The selection of the most suitable software life cycle process is of paramount importance in any space SW project. Despite being the preferred choice, the waterfall model is often exposed to some criticism. As a matter of fact, its main assumption, that a phase is entered only when the preceding one has been completed and perfected, is not easily attainable under demanding SW schedule constraints. In this paper, a tailoring of the software waterfall model (named the “Robust Software Feedback Model”) is presented. The proposed methodology sorts out these issues by combining a SW waterfall model with a SW prototyping approach. The former is aligned with the SW main production line and is based on the full ECSS-E-ST-40C life-cycle reviews, whereas the latter is carried out ahead of the main SW streamline (so as to inject its lessons learned into the main streamline) and is based on a lightweight approach.
Guidance and Control Software Project Data - Volume 2: Development Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the development documents from the GCS project. Volume 2 contains three appendices: A. Guidance and Control Software Development Specification; B. Design Description for the Pluto Implementation of the Guidance and Control Software; and C. Source Code for the Pluto Implementation of the Guidance and Control Software
A Structured Approach for Reviewing Architecture Documentation
2009-12-01
as those found in ISO 12207 [ISO/IEC 12207:2008] (for software engineering), ISO 15288 [ISO/IEC 15288:2008] (for systems engineering), the Rational...Open Distributed Processing - Reference Model: Foundations (ISO/IEC 10746-2). 1996. [ISO/IEC 12207:2008] International Organization for...Standardization & International Electrotechnical Commission. Systems and software engineering – Software life cycle processes (ISO/IEC 12207). 2008. [ISO
An Incremental Life-cycle Assurance Strategy for Critical System Certification
2014-11-04
for Safe Aircraft Operation Embedded software systems introduce a new class of problems not addressed by traditional system modeling & analysis...Latency jitter affects control behavior...do system level failures still occur despite fault tolerance techniques being deployed in systems? Embedded software system as major source of
Discovering objects in a blood recipient information system.
Qiu, D; Junghans, G; Marquardt, K; Kroll, H; Mueller-Eckhardt, C; Dudeck, J
1995-01-01
Application of object-oriented (OO) methodologies has been generally considered a solution to the problem of improving the software development process and managing the so-called software crisis. Among them, object-oriented analysis (OOA) is the most essential and is a vital prerequisite for the successful use of other OO methodologies. Though a good number of OOA methods have already been published, the most important aspect common to all of them, discovering object classes truly relevant to the given problem domain, remains a subject of intensive research. In this paper, using the successful development of a blood recipient information system as an example, we present our approach, which is based on the conceptual framework of responsibility-driven OOA. In the discussion, we also suggest that it may be inadequate to simply attribute the software crisis to the waterfall model of the software development life-cycle. We are convinced that the real causes for the failure of some software and information systems should be sought in the methodologies used in some crucial phases of the software development process. Furthermore, a software system can also fail if object classes essential to the problem domain are not discovered, implemented and visualized, so that the real-world situation cannot be faithfully traced by it.
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
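As a sketch of the SRGM-fitting step described above, the code below fits a delayed S-shaped NHPP mean value function, one of the best-fit model families reported, to cumulative defect counts using non-linear least squares. The weekly defect data are invented for illustration; the paper's actual data, release structure, and model-selection procedure are not reproduced here.

```python
# Sketch of fitting an S-shaped (delayed S-shaped) NHPP software reliability growth
# model to cumulative defect counts. The defect data below are made up.
import numpy as np
from scipy.optimize import curve_fit

def s_shaped_mean(t, a, b):
    """Delayed S-shaped NHPP mean value function m(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

weeks  = np.arange(1, 13, dtype=float)                                  # test weeks
faults = np.array([2, 5, 11, 19, 28, 36, 42, 47, 50, 52, 53, 54], dtype=float)

(a_hat, b_hat), _ = curve_fit(s_shaped_mean, weeks, faults, p0=(60.0, 0.3))
# Failure intensity lambda(t) = dm/dt = a * b**2 * t * exp(-b*t), evaluated at the last week:
intensity_now = a_hat * b_hat**2 * weeks[-1] * np.exp(-b_hat * weeks[-1])

print(f"estimated total faults a = {a_hat:.1f}, shape parameter b = {b_hat:.2f}")
print(f"predicted failure intensity at week {weeks[-1]:.0f}: {intensity_now:.2f} faults/week")
```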
NASA Technical Reports Server (NTRS)
Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David
1990-01-01
This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.
Change management methodologies trained for automotive infotainment projects
NASA Astrophysics Data System (ADS)
Prostean, G.; Volker, S.; Hutanu, A.
2017-01-01
An Automotive Electronic Control Unit (ECU) development project embedded within a car environment is constantly under attack from a continuous flow of modifications of specifications throughout the life cycle. Root causes for those modifications are, for instance, simply software or hardware implementation errors, or requirement changes to satisfy the forthcoming demands of the market and to ensure later commercial success. It is unavoidable that from the very beginning until the end of the project “requirement changes” will “expose” the agreed objectives defined by contract specifications, which are product features, budget, schedule and quality. The key discussions will focus upon an automotive radio-navigation (infotainment) unit, which challenges aftermarket devices such as smartphones. This competition especially stresses currently used automotive development processes, which fit a four-year car development (introduction) cycle against a one-year update cycle of a smartphone. The research will focus on the investigation of possible impacts of changes during all phases of the project: the Concept-Validation, Development and Debugging phases. Building a thorough understanding of prospective threats is of paramount importance in order to establish an adequate project management process to handle requirement changes. Personal automotive development experience and a literature review of change- and configuration-management software development methodologies led the authors to new conceptual models, which integrate into the structure of traditional development models used in automotive projects, more concretely of radio-navigation projects.
Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver
2014-03-21
According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.
CMMI (Trademark) for Development, Version 1.2
2006-08-01
IEC TR 12207 Information Technology—Software Life Cycle Processes, 1995. http://www.jtc1-sc7.org. ISO 1998 International Organization for...We also consult other standards as needed, including the following: • ISO 9000 [ISO 1987] • ISO/IEC 12207 [ISO 1995] • ISO/IEC 15504 [ISO 2006...ISO/IEC) body of standards. CMMs focus on improving processes in an organization. They contain the essential elements of effective processes for one
Discrete mathematics, formal methods, the Z schema and the software life cycle
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.
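As a small, invented example of the kind of interface specification the paper advocates, the Z schema below specifies an access-check operation on a protected component; it assumes a Z LaTeX package such as fuzz or zed-csp for the schema environments and is not drawn from the paper itself.

```latex
% Minimal illustration (not from the paper) of a Z schema specifying a component
% interface precisely; assumes a Z LaTeX package such as fuzz or zed-csp.
\begin{zed}
  [USER, RESOURCE]
\end{zed}

\begin{schema}{AccessTable}
  granted : USER \rel RESOURCE
\end{schema}

\begin{schema}{RequestAccess}
  \Delta AccessTable \\
  u? : USER \\
  r? : RESOURCE \\
  allowed! : \{0, 1\}
\where
  allowed! = 1 \iff (u? \mapsto r?) \in granted \\
  granted' = granted
\end{schema}
```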
Operability engineering in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wilkinson, Belinda
1993-01-01
Many operability problems exist at the three Deep Space Communications Complexes (DSCC's) of the Deep Space Network (DSN). Four years ago, the position of DSN Operability Engineer was created to provide the opportunity for someone to take a system-level approach to solving these problems. Since that time, a process has been developed for operations personnel and development engineers and for enforcing user interface standards in software designed for the DSCC's. Plans call for the participation of operations personnel in the product life cycle to expand in the future.
Software Engineering Education Directory
1988-01-01
Dana Hausman and Suzanne Woolf were crucial to the successful completion of this edition of the directory. Their teamwork, energy, and dedication...for this directory began in the summer of 1986 with a questionnaire mailed to schools selected from Peterson’s Graduate Programs in Engineering and...Christoper, and Siegel, Stan Software Cost Estimation and Life-Cycle Control by Putnam, Lawrence H. Software Quality Assurance: A Practical Approach by
A CMMI-based approach for medical software project life cycle study.
Chen, Jui-Jen; Su, Wu-Chen; Wang, Pei-Wen; Yen, Hung-Chi
2013-01-01
In terms of medical techniques, Taiwan has gained international recognition in recent years. However, the medical information system industry in Taiwan is still at a developing stage compared with the software industries in other nations. In addition, systematic development processes are indispensable elements of software development. They can help developers increase their productivity and efficiency and also avoid unnecessary risks arising during the development process. Thus, this paper presents an application of Light-Weight Capability Maturity Model Integration (LW-CMMI) to the Chang Gung Medical Research Project (CMRP) in the nuclear medicine field. This application integrates the user requirements, system design and testing aspects of the software development process into a three-layer (Domain, Concept and Instance) model, expresses them in structural Systems Modeling Language (SysML) diagrams, and converts part of the manual effort necessary for project management maintenance into computational effort, for example (semi-)automatic delivery of traceability management. The application supports establishing the artifacts "requirement specification document", "project execution plan document", "system design document" and "system test document", and can deliver a prototype of a lightweight project management tool for the Nuclear Medicine software project. The results of this application can serve as a reference for other medical institutions in developing medical information systems and supporting project management to achieve the aim of patient safety.
NASA Technical Reports Server (NTRS)
1989-01-01
An overview of the five volume set of Information System Life-Cycle and Documentation Standards is provided with information on its use. The overview covers description, objectives, key definitions, structure and application of the standards, and document structure decisions. These standards were created to provide consistent NASA-wide structures for coordinating, controlling, and documenting the engineering of an information system (hardware, software, and operational procedures components) phase by phase.
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed with which the processes used in the software development field have changed makes the task of forecasting the overall costs for a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom this task can be solved using already known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic mathematical model for genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and the analysis of the existing models and product life cycle, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
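To make the fitness-function idea concrete, here is a minimal sketch in which candidate COCOMO-81-style coefficient pairs (a, b) are scored by mean magnitude of relative error (MMRE) against historical effort data; a crude random search stands in for the GA's selection, crossover and mutation. The project data and coefficient ranges are invented, and the paper's actual chromosome encoding and fitness choice may differ.

```python
# Illustrative fitness function for a genetic search over COCOMO-81-style coefficients
# (effort = a * KLOC**b): mean magnitude of relative error (MMRE) against historical data.
import random

# Hypothetical historical projects: (size in KLOC, actual effort in person-months)
HISTORY = [(10, 24), (46, 96), (23, 79), (6, 10), (130, 480), (62, 240)]

def mmre(a: float, b: float) -> float:
    """Mean magnitude of relative error of the candidate model over HISTORY (lower is fitter)."""
    errors = [abs(actual - a * kloc ** b) / actual for kloc, actual in HISTORY]
    return sum(errors) / len(errors)

def random_individual():
    """One candidate chromosome: a coefficient pair (a, b) drawn from assumed ranges."""
    return (random.uniform(1.0, 5.0), random.uniform(0.8, 1.4))

# Crude random search standing in for the GA loop (selection/crossover/mutation omitted):
best = min((random_individual() for _ in range(5000)), key=lambda ind: mmre(*ind))
print(f"best (a, b) = ({best[0]:.2f}, {best[1]:.2f}), MMRE = {mmre(*best):.2f}")
```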
Biomes and Natural Cycles. [CD-ROM].
ERIC Educational Resources Information Center
1996
This interactive multimedia software illustrates and explains life on planet Earth through colorful and dynamic representations. Clear explanations and animation elucidate a variety of subjects such as the organization of the ecosphere, the flux of energy, water cycles, climates, and characteristics of regions across the globe. Five animated films…
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, ED; Stark, Mike
1986-01-01
An object is an abstract software model of a problem domain entity. Objects are packages of both data and operations on that data (Goldberg 83, Booch 83). The Ada (tm) package construct is representative of this general notion of an object. Object-oriented design is the technique of using objects as the basic unit of modularity in systems design. The Software Engineering Laboratory at the Goddard Space Flight Center is currently involved in a pilot program to develop a flight dynamics simulator in Ada (approximately 40,000 statements) using object-oriented methods. Several authors have applied object-oriented concepts to Ada (e.g., Booch 83, Cherry 85). It was found that these methodologies are limited. As a result, a more general approach was synthesized which allows a designer to apply powerful object-oriented principles to a wide range of applications and at all stages of design. An overview of this approach is provided. Further, how object-oriented design fits into the overall software life-cycle is considered.
NASA Technical Reports Server (NTRS)
Mallasch, Paul G.
1993-01-01
This volume contains the complete software system documentation for the Federal Communications Commission (FCC) Transponder Loading Data Conversion Software (FIX-FCC). This software was written to facilitate the formatting and conversion of FCC Transponder Occupancy (Loading) Data before it is loaded into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). The information that FCC supplies NASA is in report form and must be converted into a form readable by the database management software used in the GSOSTATS application. Both the User's Guide and Software Maintenance Manual are contained in this document. This volume of documentation passed an independent quality assurance review and certification by the Product Assurance and Security Office of the Planning Research Corporation (PRC). The manuals were reviewed for format, content, and readability. The Software Management and Assurance Program (SMAP) life cycle and documentation standards were used in the development of this document. Accordingly, these standards were used in the review. Refer to the System/Software Test/Product Assurance Report for the Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS) for additional information.
Using Ada: The deeper challenges
NASA Technical Reports Server (NTRS)
Feinberg, David A.
1986-01-01
The Ada programming language and the associated Ada Programming Support Environment (APSE) and Ada Run Time Environment (ARTE) provide the potential for significant life-cycle cost reductions in computer software development and maintenance activities. The Ada programming language itself is standardized, trademarked, and controlled via formal validation procedures. Though compilers are not yet production-ready as most would desire, the technology for constructing them is sufficiently well known and understood that time and money should suffice to correct current deficiencies. The APSE and ARTE are, on the other hand, significantly newer issues within most software development and maintenance efforts. Currently, APSE and ARTE are highly dependent on differing implementer concepts, strategies, and market objectives. Complex and sophisticated mission-critical computing systems require the use of a complete Ada-based capability, not just the programming language itself; yet the range of APSE and ARTE features which must actually be utilized can vary significantly from one system to another. As a consequence, the need to understand, objectively evaluate, and select differing APSE and ARTE capabilities and features is critical to the effective use of Ada and the life-cycle efficiencies it is intended to promote. It is the selection, collection, and understanding of APSE and ARTE which provide the deeper challenges of using Ada for real-life mission-critical computing systems. Some of the current issues which must be clarified, often on a case-by-case basis, in order to successfully realize the full capabilities of Ada are discussed.
1992-04-01
contractor’s existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the...either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a...DEFINE IDENTIFY SOFTWARE LIFE CYCLE...to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more
Increasing productivity through Total Reuse Management (TRM)
NASA Technical Reports Server (NTRS)
Schuler, M. P.
1991-01-01
Total Reuse Management (TRM) is a new concept currently being promoted by the NASA Langley Software Engineering and Ada Lab (SEAL). It uses concepts similar to those promoted in Total Quality Management (TQM). Both technical and management personnel are continually encouraged to think in terms of reuse. Reuse is not something that is aimed for after a product is completed, but rather it is built into the product from inception through development. Lowering software development costs, reducing risk, and increasing code reliability are the more prominent goals of TRM. Procedures and methods used to adopt and apply TRM are described. Reuse is frequently thought of as only being applicable to code. However, reuse can apply to all products and all phases of the software life cycle. These products include management and quality assurance plans, designs, and testing procedures. Specific examples of successfully reused products are given and future goals are discussed.
Guidance and Control Software Project Data - Volume 1: Planning Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the planning documents from the GCS project. Volume 1 contains five appendices: A. Plan for Software Aspects of Certification for the Guidance and Control Software Project; B. Software Development Standards for the Guidance and Control Software Project; C. Software Verification Plan for the Guidance and Control Software Project; D. Software Configuration Management Plan for the Guidance and Control Software Project; and E. Software Quality Assurance Activities.
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) the traditional method using Miner's rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) the classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
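The two computations described above can be sketched as follows for an S-N curve of the form N(S) = A / S^m and a narrow-band random stress with RMS value sigma: the closed-form (integral Miner's rule) damage (n/A) * (sqrt(2)*sigma)^m * Gamma(1 + m/2), and the traditional binned sum over Rayleigh-distributed amplitudes truncated at 3*sigma. The numerical inputs are made up; FATIG's exact parameterization may differ.

```python
# Sketch of the two probabilistic fatigue-damage computations described above, for an
# S-N curve N(S) = A / S**m and a narrow-band random stress of RMS value sigma.
import math

def damage_closed_form(sigma, n_cycles, A, m):
    """Integral (Palmgren-Miner) form: D = (n/A) * (sqrt(2)*sigma)**m * Gamma(1 + m/2)."""
    return n_cycles / A * (math.sqrt(2.0) * sigma) ** m * math.gamma(1.0 + m / 2.0)

def damage_rayleigh_bins(sigma, n_cycles, A, m, s_max_sigmas=3.0, bins=300):
    """Traditional form: sum Miner's rule over Rayleigh-distributed amplitudes up to k*sigma."""
    ds = s_max_sigmas * sigma / bins
    damage = 0.0
    for i in range(bins):
        s = (i + 0.5) * ds                                        # bin-centre stress amplitude
        pdf = (s / sigma**2) * math.exp(-s**2 / (2 * sigma**2))   # Rayleigh density
        cycles_at_s = n_cycles * pdf * ds                         # expected cycles in this bin
        damage += cycles_at_s / (A / s**m)                        # n_i / N_i
    return damage

sigma, n_cycles, A, m = 20.0, 1.0e6, 1.0e12, 4.0                  # hypothetical inputs
print(f"closed form : D = {damage_closed_form(sigma, n_cycles, A, m):.3f}")
print(f"3-sigma bins: D = {damage_rayleigh_bins(sigma, n_cycles, A, m):.3f}")
```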
Development of a smart type motor operated valve for nuclear power plants
NASA Astrophysics Data System (ADS)
Kim, Chang-Hwoi; Park, Joo-Hyun; Lee, Dong-young; Koo, In-Soo
2005-12-01
In this paper, the design concept of a smart motor-operated valve for nuclear power plants is described. The development objective of the smart valve is to achieve superior accuracy, long-term reliability, and ease of use. For these reasons, the developed smart valve has fieldbus communication (such as DeviceNet and Profibus-DP), an auto-tuning PID controller, self-diagnostics, and on-line calibration capabilities. In addition, to achieve pressure, temperature, and flow control with the internal PID controller, a pressure sensor and transmitter are included in the valve, and temperature and flow signal acquisition ports are provided. The developed smart valve will undergo equipment qualification tests, such as environmental, EMI/EMC, and vibration tests, at the Korea Test Lab, and its performance is tested in a test loop located at the Seoul National University Lab. For application in nuclear power plants, the software is being developed according to a software life cycle, and the developed software is verified by an independent software V&V team. It is expected that the smart valve can be applied to existing NPPs as a replacement or to new nuclear power plants. The design and fabrication of the smart valve are now in progress.
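As a generic illustration of the internal PID control loop mentioned above (not the valve's actual firmware, gains, or auto-tuning scheme, which are not given in the abstract), the sketch below implements a clamped discrete PID update and drives a toy first-order pressure process toward a setpoint.

```python
# Generic discrete PID update of the kind mentioned above (illustrative only; the
# smart valve's actual controller, gains, and auto-tuning method are not public here).
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))      # clamp to valve travel (%)

# Example: drive a crude first-order pressure process toward a 10.0 MPa setpoint.
pid, pressure = PID(kp=8.0, ki=2.0, kd=0.5, dt=0.1), 7.0
for _ in range(50):
    valve_pos = pid.update(10.0, pressure)
    pressure += 0.02 * (0.05 * valve_pos + 5.0 - pressure)     # toy plant model, not a real valve
print(f"final pressure ~ {pressure:.2f} MPa at valve position {valve_pos:.1f} %")
```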
Solving the Software Legacy Problem with RISA
NASA Astrophysics Data System (ADS)
Ibarra, A.; Gabriel, C.
2012-09-01
Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment lifetime. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple data processing software and infrastructure life-cycles, using Java applications and web-service wrappers to existing software. This architecture employs embedded SAS in virtual machines, assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission launched in 1983, using the generic RISA approach.
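The wrapper idea can be sketched as a minimal HTTP service that invokes an existing command-line analysis tool on request; the endpoint, query parameter and command below are hypothetical and are not RISA's actual interface or the SAS invocation.

```python
# Minimal sketch of the "web-service wrapper around existing analysis software" idea
# described above; endpoint and command are hypothetical, not RISA's real interface.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGACY_COMMAND = ["echo", "running legacy pipeline on"]   # stand-in for a real analysis tool

class AnalysisHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /process?obs=0123456789 -> run the wrapped legacy tool on that dataset
        dataset = self.path.split("=", 1)[-1] if "=" in self.path else "unknown"
        result = subprocess.run(LEGACY_COMMAND + [dataset], capture_output=True, text=True)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    # The wrapper (and the legacy tool) could equally run inside a virtual machine image,
    # decoupling the service interface from the execution environment.
    HTTPServer(("localhost", 8080), AnalysisHandler).serve_forever()
```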
Software Engineering Laboratory (SEL) cleanroom process model
NASA Technical Reports Server (NTRS)
Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon
1991-01-01
The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.
Hardware development process for Human Research facility applications
NASA Astrophysics Data System (ADS)
Bauer, Liz
2000-01-01
The simple goal of the Human Research Facility (HRF) is to conduct human research experiments on the International Space Station (ISS) astronauts during long-duration missions. This is accomplished by providing integration and operation of the necessary hardware and software capabilities. A typical hardware development flow consists of five stages: functional inputs and requirements definition, market research, design life cycle through hardware delivery, crew training, and mission support. The purpose of this presentation is to guide the audience through the early hardware development process: requirement definition through selecting a development path. Specific HRF equipment is used to illustrate the hardware development paths.
Flight Dynamics Mission Support and Quality Assurance Process
NASA Technical Reports Server (NTRS)
Oh, InHwan
1996-01-01
This paper summarizes the Computer Sciences Corporation Flight Dynamics Operation (FDO) quality assurance approach used to support the National Aeronautics and Space Administration Goddard Space Flight Center Flight Dynamics Support Branch. Historically, a strong need has existed for developing systematic quality assurance using methods that account for the unique nature and environment of satellite Flight Dynamics mission support. Over the past few years FDO has developed and implemented proactive quality assurance processes applied to each of the six phases of the Flight Dynamics mission support life cycle: systems and operations concept, system requirements and specifications, software development support, operations planning and training, launch support, and on-orbit mission operations. Rather than performing quality assurance as a final step after work is completed, quality assurance has been built in as work progresses in the form of process assurance. Process assurance activities occur throughout the Flight Dynamics mission support life cycle. The FDO Product Assurance Office developed process checklists for prephase process reviews, mission team orientations, in-progress reviews, and end-of-phase audits. This paper will outline the evolving history of FDO quality assurance approaches, discuss the tailoring of Computer Sciences Corporation's process assurance cycle procedures, describe some of the quality assurance approaches that have been or are being developed, and present some of the successful results.
2011-10-01
Systems engineering knowledge has also been documented through the standards bodies, most notably: • ISO/IEC/IEEE 15288, Systems Engineering...System Life Cycle Processes, 2008 (see [10]). • ANSI/EIA 632, Processes for Engineering a System, (1998) • IEEE 1220, ISO/IEC 26702 Application...tion • United States Defense Acquisition Guidebook, Chapter 4, June 27, 2011 • IEEE/EIA 12207, Software Life Cycle Processes, 2008 • United
Software Formal Inspections Guidebook
NASA Technical Reports Server (NTRS)
1993-01-01
The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.
Error Cost Escalation Through the Project Life Cycle
NASA Technical Reports Server (NTRS)
Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory
2004-01-01
It is well known that the costs to fix errors increase as the project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate, as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units
Development strategies for the satellite flight software on-board Meteosat Third Generation
NASA Astrophysics Data System (ADS)
Tipaldi, Massimo; Legendre, Cedric; Koopmann, Olliver; Ferraguto, Massimo; Wenker, Ralf; D'Angelo, Gianni
2018-04-01
Nowadays, satellites are becoming increasingly software dependent. Satellite Flight Software (FSW), that is to say, the application software running on the satellite main On-Board Computer (OBC), plays a relevant role in implementing complex space mission requirements. In this paper, we examine relevant technical approaches and programmatic strategies adopted for the development of the Meteosat Third Generation Satellite (MTG) FSW. To begin with, we present its layered model-based architecture, and the means for ensuring a robust and reliable interaction among the FSW components. Then, we focus on the selection of an effective software development life cycle model. In particular, by combining plan-driven and agile approaches, we can fulfill the need of having preliminary SW versions. They can be used for the elicitation of complex system-level requirements as well as for the initial satellite integration and testing activities. Another important aspect can be identified in the testing activities. Indeed, very demanding quality requirements have to be fulfilled in satellite SW applications. This manuscript proposes a test automation framework, which uses an XML-based test procedure language independent of the underlying test environment. Finally, a short overview of the MTG FSW sizing and timing budgets concludes the paper.
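To illustrate the idea of a test procedure language decoupled from the underlying test environment, the sketch below parses a small XML procedure and dispatches each step to environment-specific handlers. The element names and commands are invented for illustration; the MTG project's actual XML test procedure language is not shown in the paper.

```python
# Hypothetical sketch of an XML-based test procedure interpreter of the kind described
# above; the element names (<procedure>, <step>, cmd/arg attributes) are invented, not
# the MTG project's actual test language.
import xml.etree.ElementTree as ET

PROCEDURE = """
<procedure name="tm_heartbeat_check">
  <step cmd="send"   arg="TC_ENABLE_HK"/>
  <step cmd="wait"   arg="2"/>
  <step cmd="verify" arg="HK_PACKET_RECEIVED"/>
</procedure>
"""

def run(xml_text, environment):
    """Execute each <step> through environment-specific handlers, keeping the
    procedure itself independent of the underlying test bench."""
    root = ET.fromstring(xml_text)
    for step in root.findall("step"):
        handler = environment[step.get("cmd")]
        handler(step.get("arg"))
    print(f"procedure '{root.get('name')}' completed")

# A stub environment standing in for a real software validation facility:
stub_env = {
    "send":   lambda arg: print(f"  sending telecommand {arg}"),
    "wait":   lambda arg: print(f"  waiting {arg} s"),
    "verify": lambda arg: print(f"  verifying condition {arg}"),
}
run(PROCEDURE, stub_env)
```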
Models and metrics for software management and engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.
1988-01-01
This paper attempts to characterize and present a state-of-the-art view of several quantitative models and metrics of the software life cycle. These models and metrics can be used to aid in managing and engineering software projects. They deal with various aspects of the software process and product, including resource allocation and estimation, changes and errors, size, complexity, and reliability. Some indication is given of the extent to which the various models have been used and the success they have achieved.
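As a concrete illustration of the resource-estimation class of models the survey covers, the sketch below shows a generic parametric effort model of the form effort = a * size^b; the coefficients are placeholders, not values taken from the paper.

    # Illustrative parametric effort model of the form effort = a * (size_kloc ** b),
    # the general shape used by several resource-estimation models of this kind.
    # The coefficients below are placeholders, not values from the paper.
    def estimate_effort(size_kloc, a=2.8, b=1.05):
        """Return estimated effort in person-months for a project of size_kloc."""
        return a * size_kloc ** b

    def estimate_schedule(effort_pm, c=2.5, d=0.38):
        """Return estimated schedule in months from effort (same parametric shape)."""
        return c * effort_pm ** d

    if __name__ == "__main__":
        effort = estimate_effort(50)  # a hypothetical 50 KLOC project
        print(f"effort   ~ {effort:.1f} person-months")
        print(f"schedule ~ {estimate_schedule(effort):.1f} months")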
2009-02-01
management, available at <http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39612&ICS1=35&ICS2=40&ICS3=>. ISO/IEC 27001. Information...Management of the Systems Engineering Process. [ISO/IEC 27001] ISO/IEC 27001:2005. Information technology -- Security techniques -- Information security...software life cycles [ISO/IEC 15026]. Software assurance is a key element of national security and homeland security. It is critical because dramatic
Developing a Virtual Physics World
ERIC Educational Resources Information Center
Wegener, Margaret; McIntyre, Timothy J.; McGrath, Dominic; Savage, Craig M.; Williamson, Michael
2012-01-01
In this article, the successful implementation of a development cycle for a physics teaching package based on game-like virtual reality software is reported. The cycle involved several iterations of evaluating students' use of the package followed by instructional and software development. The evaluation used a variety of techniques, including…
Making the Business Case for Software Assurance
2009-04-01
and Capability dEtermination (SPICE), ISO/IEC 15504, 1998. [ISO 2007] International Organization for Standardization. "ISO/IEC 27001 & 27002"...Implementing the Process Areas 6.2.7 Differences Between the CMMI and Software CMM Process Areas 6.3 The CMMI Appraisal Process 6.4 Adapting ISO 15504 to...Secure Software Assurance 6.4.1 Assessment and the Secure Life Cycle 6.4.2 ISO 15504 Capability Levels 6.5 Adapting the ISO/IEC 21287 Standard Approach to
Integrating Software-Architecture-Centric Methods into the Rational Unified Process
2004-07-01
Architecture Design...QAW in a life-cycle context. One issue that needs to be addressed is how scenarios produced in a QAW can be used by a software architecture design method...implementation testing...Architecture Design: The Attribute-Driven Design (ADD) method
Spacecraft Avionics Software Development Then and Now: Different but the Same
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Garman, John (Jack); Vice, Jason
2012-01-01
NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's historic Software Production Facility (SPF) was developed to serve complex avionics software solutions during an era dominated by mainframes, tape drives, and lower level programming languages. These systems have proven themselves resilient enough to serve the Shuttle Orbiter Avionics life cycle for decades. The SPF and its predecessor, the Software Development Lab (SDL) at NASA's Johnson Space Center (JSC), hosted flight software (FSW) engineering, development, simulation, and test. It was active from the beginning of Shuttle Orbiter development in 1972 through the end of the shuttle program in the summer of 2011, almost 40 years. NASA's Kedalion engineering analysis lab is on the forefront of validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture in avionics software engineering. Kedalion has validated many of the Orion project's HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics environment, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, COTS products, early rapid prototyping, in-house expertise and tools, and customer collaboration, NASA has adopted a cost-effective paradigm that is currently serving Orion effectively. This paper will explore and contrast differences in technology employed over the years of NASA's space program, due largely to technological advances in hardware and software systems, while acknowledging that the basic software engineering and integration paradigms share many similarities.
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.
WEBTAS Software Life Cycle Development
2006-09-01
may be published in both HTML and PDF formats via menu selection. Adobe® FrameMaker® 7.1 and Quadralay Corporation WebWorks® Professional 2003...The backbone of the ISS publishing environment consists of Adobe® FrameMaker®...and WebWorks® Publisher Professional 2003. FrameMaker® provides an enterprise-class authoring and publishing solution that combines the
Formalization of software requirements for information systems using fuzzy logic
NASA Astrophysics Data System (ADS)
Yegorov, Y. S.; Milov, V. R.; Kvasov, A. S.; Sorokoumova, S. N.; Suvorova, O. V.
2018-05-01
The paper considers an approach to the design of information systems based on flexible software development methodologies. The possibility of improving the management of the life cycle of information systems by assessing the functional relationship between requirements and business objectives is described. An approach is proposed to establish the relationship between the degree of achievement of business objectives and the fulfillment of requirements for the projected information system. It describes solutions that allow one to formalize the process of forming functional and non-functional requirements with the help of the fuzzy logic apparatus. The form of the objective function is derived on the basis of expert knowledge and is specified via learning from a very small data set.
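A minimal sketch of the idea, assuming invented requirement names, weights, and membership breakpoints (none of which come from the paper): requirement fulfillment degrees are aggregated into a business-objective achievement score, which is then mapped onto fuzzy labels with triangular membership functions.

    # Minimal sketch (assumed names and weights, not the authors' model): map the
    # degree of fulfillment of individual requirements to a fuzzy estimate of how
    # far a business objective is achieved, using triangular membership functions.
    def triangular(x, a, b, c):
        """Triangular membership function with feet at a and c and peak at b.
        Shoulder shapes are simplified for illustration."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def objective_achievement(fulfillment, weights):
        """Weighted aggregation of requirement fulfillment degrees (each in 0..1)."""
        total = sum(weights.values())
        return sum(weights[r] * fulfillment[r] for r in weights) / total

    if __name__ == "__main__":
        fulfillment = {"search_latency": 0.9, "report_export": 0.6, "audit_log": 0.3}
        weights     = {"search_latency": 0.5, "report_export": 0.3, "audit_log": 0.2}
        score = objective_achievement(fulfillment, weights)
        # Fuzzy labels for the degree of objective achievement.
        labels = {"low": (0.0, 0.0, 0.5), "medium": (0.2, 0.5, 0.8), "high": (0.5, 1.0, 1.0)}
        memberships = {name: round(triangular(score, *abc), 2) for name, abc in labels.items()}
        print(round(score, 2), memberships)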
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
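The sketch below illustrates, with placeholder coefficients rather than the ones estimated in the study, how Chidamber-Kemerer-style metrics can feed a logistic model that scores classes for fault-proneness.

    import math

    # Illustrative sketch only: score classes for fault-proneness from a few
    # Chidamber-Kemerer-style metrics (WMC, DIT, CBO, RFC) with a logistic model.
    # The coefficients are placeholders, not the ones estimated in the study.
    COEFF = {"intercept": -3.0, "WMC": 0.08, "DIT": 0.25, "CBO": 0.15, "RFC": 0.02}

    def fault_proneness(metrics):
        """Return the predicted probability (0..1) that a class is fault-prone."""
        z = COEFF["intercept"] + sum(COEFF[m] * metrics[m] for m in ("WMC", "DIT", "CBO", "RFC"))
        return 1.0 / (1.0 + math.exp(-z))

    if __name__ == "__main__":
        classes = {
            "ParserCore":  {"WMC": 35, "DIT": 4, "CBO": 12, "RFC": 60},
            "StringUtils": {"WMC": 6,  "DIT": 1, "CBO": 2,  "RFC": 10},
        }
        for name, m in classes.items():
            print(f"{name}: p(fault-prone) = {fault_proneness(m):.2f}")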
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
System-of-Systems Technology-Portfolio-Analysis Tool
NASA Technical Reports Server (NTRS)
O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne
2012-01-01
Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers.
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not fully achieved; deliveries succeed to varying degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
Automatic programming for critical applications
NASA Technical Reports Server (NTRS)
Loganantharaj, Raj L.
1988-01-01
The important phases of a software life cycle include verification and maintenance. Usually, execution performance is an expected requirement in a software development process. Unfortunately, verification and maintenance of programs are the time-consuming and frustrating aspects of software engineering. Verification cannot be waived for programs used in critical applications such as military, space, and nuclear plants. As a consequence, synthesis of programs from specifications, an alternative way of developing correct programs, is becoming popular. The definition of automatic programming, that is, what is understood by the term, has changed along with our expectations. At present, the goal of automatic programming is the automation of the programming process. Specifically, it means the application of artificial intelligence to software engineering in order to define techniques and create environments that help in the creation of high level programs. The automatic programming process may be divided into two phases: the problem acquisition phase and the program synthesis phase. In the problem acquisition phase, an informal specification of the problem is transformed into an unambiguous specification, while in the program synthesis phase such a specification is further transformed into a concrete, executable program.
An Integrated Fuel Depletion Calculator for Fuel Cycle Options Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Erich; Scopatz, Anthony
2016-04-25
Bright-lite is reactor modeling software developed at the University of Texas at Austin to expand upon the work done with the Bright [1] reactor modeling software. Originally, Bright-lite was designed to function as standalone reactor modeling software. However, this aim was refocused to couple Bright-lite with the Cyclus fuel cycle simulator [2] and make it a module for that fuel cycle simulator.
Waste-to-energy: A review of life cycle assessment and its extension methods.
Zhou, Zhaozhi; Tang, Yuanjun; Chi, Yong; Ni, Mingjiang; Buekens, Alfons
2018-01-01
This article proposes a comprehensive review of evaluation tools based on life cycle thinking, as applied to waste-to-energy. Typically, life cycle assessment is adopted to assess the environmental burdens associated with waste-to-energy initiatives. Based on this framework, several extension methods have been developed to focus on specific aspects: exergetic life cycle assessment for reducing resource depletion, life cycle costing for evaluating economic burden, and social life cycle assessment for recording social impacts. Additionally, the environment-energy-economy model integrates both life cycle assessment and life cycle costing methods and judges these three features simultaneously for sustainable waste-to-energy conversion. Life cycle assessment is sufficiently developed for waste-to-energy, with concrete data inventories and sensitivity analysis, although data and model uncertainty are unavoidable. Compared with life cycle assessment, only a few evaluations of waste-to-energy techniques have been conducted using the extension methods, whose methodology and application need further development. Finally, this article succinctly summarises some recommendations for further research.
Specialty Engineering Supplement to IEEE-15288.1
2015-05-15
receiver required to work in a dense EMI environment. (15) Any RF receiver with a burnout level of less than 30 dBm (1 mW). b. A summary of all...Context 2.1 ISO/IEC/IEEE 15288:2015, Systems and Software Engineering — System life cycle processes. ISO/IEC/IEEE 15288 is the DOD-adopted standard for...to ISO 15288 for application of systems engineering on defense programs that was developed by a joint services working group under the auspices of the
Long-term care information systems: an overview of the selection process.
Nahm, Eun-Shim; Mills, Mary Etta; Feege, Barbara
2006-06-01
Under the current Medicare Prospective Payment System method and the ever-changing managed care environment, the long-term care information system is vital to providing quality care and to surviving in business. The system selection process should be an interdisciplinary effort involving all necessary stakeholders for the proposed system. The system selection process can be modeled following the Systems Development Life Cycle: identifying problems, opportunities, and objectives; determining information requirements; analyzing system needs; designing the recommended system; and developing and documenting software.
2013-06-01
for crop irrigation. The disruptions also idled key industries, led to billions of dollars of lost productivity, and stressed the entire Western...modify the super-system, and to resume the super-system run. 2.2 Requirements An important step in the software development life cycle is to capture...detects the .gms file and associated files in the remote directory that is allocated to the user. 4. If all of the files are present, the files are
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
1992-01-01
This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.
Software Dependability and Safety Evaluations ESA's Initiative
NASA Astrophysics Data System (ADS)
Hernek, M.
ESA has allocated funds for an initiative to evaluate dependability and safety methods for software. The objectives of this initiative are (1) more extensive validation of safety and dependability techniques for software, and (2) provision of valuable results to improve the quality of the software, thus promoting the application of dependability and safety methods and techniques. ESA space systems are being developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g., redundancy, diversity, etc., varying from project to project. Analysis methods (FMECA, FTA, HA, etc.) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined, and by doing that the critical sub-systems are identified, on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires the development of a technical specification for the products at the beginning of the life cycle. Such a technical specification comprises both functional and non-functional requirements. These non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is more and more used in critical functions. Also, the trend towards more frequent use of COTS and reusable components poses new difficulties in terms of assuring reliable and safe systems. Because of this, software dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system and to verify/validate that the implemented software systems comply with these requirements [R1].
Unique Challenges Testing SDRs for Space
NASA Technical Reports Server (NTRS)
Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer
2013-01-01
This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCAN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires a testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed transceiver type of testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDR to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. For example, test waveforms were developed to measure the gain of the transmit system across the tunable frequency band. These were used during thermal vacuum testing to enable characterization of the integrated system in the wide operational temperature range of space. Receive power indicators were used for Electromagnetic Interference tests (EMI) to understand the platform's susceptibility to external interferers independent of the waveform. Additional approaches and lessons learned during the SCaN Testbed subsystem and system level testing will be discussed that may help future SDR integrators.
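As a small illustration of the gain characterization described above, the sketch below (frequency points and power readings are invented, not SCaN Testbed data) computes transmit-chain gain in dB across a swept band from recorded input and output power levels.

    # Illustrative sketch (measurement values and names are assumed, not SCaN data):
    # compute transmit-chain gain in dB across a swept frequency band from recorded
    # input and output power levels, the kind of characterization described above.
    def gain_db(p_out_dbm, p_in_dbm):
        """Gain in dB is the difference of output and input power in dBm."""
        return p_out_dbm - p_in_dbm

    def characterize(sweep):
        """sweep: list of (freq_MHz, p_in_dbm, p_out_dbm) tuples."""
        return [(f, gain_db(po, pi)) for f, pi, po in sweep]

    if __name__ == "__main__":
        sweep = [(2200.0, -10.0, 28.5), (2250.0, -10.0, 29.1), (2300.0, -10.0, 27.8)]
        for freq, g in characterize(sweep):
            print(f"{freq:.1f} MHz: gain = {g:.1f} dB")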
Pavement management segment consolidation
DOT National Transportation Integrated Search
1998-01-01
Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...
Guidance and Control Software Project Data - Volume 3: Verification Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.
The Model Life-cycle: Training Module
Model Life-Cycle includes identification of problems & the subsequent development, evaluation, & application of the model. Objectives: define ‘model life-cycle’, explore stages of model life-cycle, & strategies for development, evaluation, & applications.
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes configuration management and quality assurance documents from the GCS project. Volume 4 contains six appendices: A. Software Accomplishment Summary for the Guidance and Control Software Project; B. Software Configuration Index for the Guidance and Control Software Project; C. Configuration Management Records for the Guidance and Control Software Project; D. Software Quality Assurance Records for the Guidance and Control Software Project; E. Problem Report for the Pluto Implementation of the Guidance and Control Software Project; and F. Support Documentation Change Reports for the Guidance and Control Software Project.
NASA Technical Reports Server (NTRS)
Berard, Edward V.
1986-01-01
An increasing number of programmers have attempted to change their image. They have made it plain that they wish not only to be taken seriously, but also to be regarded as professionals. Many programmers now wish to be referred to as software engineers. If programmers wish to be considered professionals in every sense of the word, two obstacles must be overcome: the inability to think of software as a product, and the idea that little or no skill is required to create and handle software throughout its life cycle. The steps to be taken toward professionalization are outlined along with recommendations.
Lean Development with the Morpheus Simulation Software
NASA Technical Reports Server (NTRS)
Brogley, Aaron C.
2013-01-01
The Morpheus project is an autonomous robotic testbed currently in development at NASA's Johnson Space Center (JSC) with support from other centers. Its primary objectives are to test new 'green' fuel propulsion systems and to demonstrate the capability of the Autonomous Lander Hazard Avoidance Technology (ALHAT) sensor, provided by the Jet Propulsion Laboratory (JPL), on a lunar landing trajectory. If successful, these technologies and lessons learned from the Morpheus testing cycle may be incorporated into a landing descent vehicle used on the moon, an asteroid, or Mars. In an effort to reduce development costs and cycle time, the project employs lean development engineering practices in its development of flight and simulation software. The Morpheus simulation makes use of existing software packages where possible to reduce the development time. The development and testing of flight software occurs primarily through the frequent test operation of the vehicle and incrementally increasing the scope of the test. With rapid development cycles, loss of the vehicle or loss of the mission is a real risk, but efficient progress in development would not be possible without accepting that risk.
A Framework for Performing V&V within Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1996-01-01
Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Software use cases to elicit the software requirements analysis within the ASTRI project
NASA Astrophysics Data System (ADS)
Conforti, Vito; Antolini, Elisa; Bonnoli, Giacomo; Bruno, Pietro; Bulgarelli, Andrea; Capalbi, Milvia; Fioretti, Valentina; Fugazza, Dino; Gardiol, Daniele; Grillo, Alessandro; Leto, Giuseppe; Lombardi, Saverio; Lucarelli, Fabrizio; Maccarone, Maria Concetta; Malaguti, Giuseppe; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Schwarz, Joseph; Scuderi, Salvatore; Tanci, Claudio; Tosti, Gino; Trifoglio, Massimo; Vercellone, Stefano; Zanmar Sanchez, Ricardo
2016-07-01
The Italian National Institute for Astrophysics (INAF) is leading the Astrofisica con Specchi a Tecnologia Replicante Italiana (ASTRI) project whose main purpose is the realization of small size telescopes (SST) for the Cherenkov Telescope Array (CTA). The first goal of the ASTRI project has been the development and operation of an innovative end-to-end telescope prototype using a dual-mirror optical configuration (SST-2M) equipped with a camera based on silicon photo-multipliers and very fast read-out electronics. The ASTRI SST-2M prototype has been installed in Italy at the INAF "M.G. Fracastoro" Astronomical Station located at Serra La Nave, on Mount Etna, Sicily. This prototype will be used to test several mechanical, optical, control hardware and software solutions which will be used in the ASTRI mini-array, comprising nine telescopes proposed to be placed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort led by INAF and carried out by Italy, Brazil and South-Africa. We present here the use cases, through UML (Unified Modeling Language) diagrams and text details, that describe the functional requirements of the software that will manage the ASTRI SST-2M prototype, and the lessons learned thanks to these activities. We intend to adopt the same approach for the Mini Array Software System that will manage the ASTRI miniarray operations. Use cases are of importance for the whole software life cycle; in particular they provide valuable support to the validation and verification activities. Following the iterative development approach, which breaks down the software development into smaller chunks, we have analysed the requirements, developed, and then tested the code in repeated cycles. The use case technique allowed us to formalize the problem through user stories that describe how the user procedurally interacts with the software system. Through the use cases we improved the communication among team members, fostered common agreement about system requirements, defined the normal and alternative course of events, understood better the business process, and defined the system test to ensure that the delivered software works properly. We present a summary of the ASTRI SST-2M prototype use cases, and how the lessons learned can be exploited for the ASTRI mini-array proposed for the CTA Observatory.
Implementing Software Safety in the NASA Environment
NASA Technical Reports Server (NTRS)
Wetherholt, Martha S.; Radley, Charles F.
1994-01-01
Until recently, NASA did not consider allowing computers total control of flight systems. Human operators, via hardware, have constituted the ultimate safety control. In an attempt to reduce costs, NASA has come to rely more and more heavily on computers and software to control space missions. (For example, software is now planned to control most of the operational functions of the International Space Station.) Thus the need for systematic software safety programs has become crucial for mission success. Concurrent engineering principles dictate that safety should be designed into software up front, not tested into the software after the fact. 'Cost of Quality' studies have statistics and metrics to prove the value of building quality and safety into the development cycle. Unfortunately, most software engineers are not familiar with designing for safety, and most safety engineers are not software experts. Software written to specifications which have not been safety analyzed is a major source of computer related accidents. Safer software is achieved step by step throughout the system and software life cycle. It is a process that includes requirements definition, hazard analyses, formal software inspections, safety analyses, testing, and maintenance. The greatest emphasis is placed on clearly and completely defining system and software requirements, including safety and reliability requirements. Unfortunately, development and review of requirements are the weakest link in the process. While some of the more academic methods, e.g. mathematical models, may help bring about safer software, this paper proposes the use of currently approved software methodologies, and sound software and assurance practices, to show how, to a large degree, safety can be designed into software from the start. NASA's approach today is to first conduct a preliminary system hazard analysis (PHA) during the concept and planning phase of a project. This determines the overall hazard potential of the system to be built. Shortly thereafter, as the system requirements are being defined, the second iteration of hazard analyses takes place, the systems hazard analysis (SHA). During the systems requirements phase, decisions are made as to what functions of the system will be the responsibility of software. This is the most critical time to affect the safety of the software. From this point, software safety analyses as well as software engineering practices are the main focus for assuring safe software. While many of the steps proposed in this paper seem like just sound engineering practices, they are the best technical and most cost effective means to assure safe software within a safe system.
Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model
NASA Astrophysics Data System (ADS)
Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna
2017-06-01
Evaluation of software quality is an important aspect of controlling and managing software. By such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term 'usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with existing usability models.
A Model for Joint Software Reviews
1998-10-01
CEPMAN 1, 1996; Gabb, 1997], and with the growing popularity of outsourcing, they are becoming more important in the commercial sector [ISO/IEC 12207...technical and management reviews [MIL-STD-498, 1996; ISO/IEC 12207, 1995]. Management reviews occur after technical reviews, and are focused on the cost...characteristics, Standard (No. ISO/IEC 9126-1). [ISO/IEC 12207, 1995] Information Technology Software Life Cycle Processes, Standard (No. ISO/IEC 12207
Testing, Requirements, and Metrics
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William
1998-01-01
The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provides real-time insight into the testing of requirements and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
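A minimal sketch of the kind of traceability metric such tools make possible (toy data and function names assumed; this is not the SATC's tooling): flag requirements with no linked test cases, test cases linked to no requirement, and the average number of tests per requirement.

    # Illustrative sketch (toy data; not the SATC tooling): derive simple test-plan
    # quality metrics from a requirement-to-test-case traceability mapping, e.g.
    # requirements with no tests and tests linked to no requirement.
    def traceability_metrics(req_to_tests, all_tests):
        untested = [r for r, tests in req_to_tests.items() if not tests]
        linked = {t for tests in req_to_tests.values() for t in tests}
        orphan_tests = sorted(set(all_tests) - linked)
        avg_tests = sum(len(t) for t in req_to_tests.values()) / max(len(req_to_tests), 1)
        return {"untested_requirements": untested,
                "orphan_tests": orphan_tests,
                "avg_tests_per_requirement": round(avg_tests, 2)}

    if __name__ == "__main__":
        req_to_tests = {"REQ-1": ["TC-1", "TC-2"], "REQ-2": [], "REQ-3": ["TC-3"]}
        all_tests = ["TC-1", "TC-2", "TC-3", "TC-9"]
        print(traceability_metrics(req_to_tests, all_tests))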
Life Cycle Impact Assessment Research Developments and Needs
Life Cycle Impact Assessment (LCIA) developments are explained along with key publications which record discussions which comprised ISO 14042 and SETAC document development, UNEP SETAC Life Cycle Initiative research, and research from public and private research institutions. It ...
Transient Reliability Analysis Capability Developed for CARES/Life
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2001-01-01
The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of the failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.
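For orientation only, the sketch below uses the textbook two-parameter Weibull form with a slow-crack-growth strength transformation for a component under constant uniaxial stress; the actual CARES/Life formulation is far more general (multiaxial stresses, transient loads, temperature-dependent parameters), and every parameter value here is a placeholder.

    import math

    # Textbook simplification (not the full CARES/Life multiaxial formulation):
    # two-parameter Weibull failure probability with a slow-crack-growth strength
    # transformation for a component under constant uniaxial stress. All parameter
    # values below are placeholders for illustration only.
    def failure_probability(sigma, t, m=10.0, sigma0=800.0, B=200.0, N=20.0):
        """P_f after time t (s) at constant stress sigma (MPa).
        m, sigma0: Weibull modulus and characteristic strength (MPa);
        B (MPa^2*s), N: slow-crack-growth parameters."""
        # Equivalent inert strength accounting for subcritical crack growth over time t.
        sigma_eq = (sigma**N * t / B + sigma**(N - 2)) ** (1.0 / (N - 2))
        return 1.0 - math.exp(-((sigma_eq / sigma0) ** m))

    if __name__ == "__main__":
        for hours in (1, 100, 10000):
            pf = failure_probability(sigma=200.0, t=hours * 3600.0)
            print(f"{hours:>6} h: P_f = {pf:.3e}")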
ELISA, a demonstrator environment for information systems architecture design
NASA Technical Reports Server (NTRS)
Panem, Chantal
1994-01-01
This paper describes an approach to the reuse of software engineering technology in the area of ground space system design. System engineers have many needs similar to those of software developers: sharing of a common database, capitalization of knowledge, definition of a common design process, and communication between different technical domains. Moreover, system designers need to simulate their system dynamically as early as possible. Software development environments, methods and tools have now become operational and widely used. Their architecture is based on a unique object base and a set of common management services, and they host a family of tools for each life cycle activity. In late '92, CNES decided to develop a demonstrative software environment supporting some system activities. The design of ground space data processing systems was chosen as the application domain. ELISA (Integrated Software Environment for Architectures Specification) was specified as a 'demonstrator', i.e., a sufficient basis for demonstrations, evaluation and future operational enhancements. A process with three phases was implemented: system requirements definition, design of system architecture models, and selection of physical architectures. Each phase is composed of several activities that can be performed in parallel, with the support of commercial off-the-shelf tools. ELISA was delivered to CNES in January 94 and is currently used for demonstrations and evaluations on real projects (e.g. the SPOT4 Satellite Control Center). Further evolutions are under way.
A report on NASA software engineering and Ada training requirements
NASA Technical Reports Server (NTRS)
Legrand, Sue; Freedman, Glenn B.; Svabek, L.
1987-01-01
NASA's software engineering and Ada skill base are assessed and information that may result in new models for software engineering, Ada training plans, and curricula are provided. A quantitative assessment which reflects the requirements for software engineering and Ada training across NASA is provided. A recommended implementation plan including a suggested curriculum with associated duration per course and suggested means of delivery is also provided. The distinction between education and training is made. Although it was directed to focus on NASA's need for the latter, the key relationships to software engineering education are also identified. A rationale and strategy for implementing a life cycle education and training program are detailed in support of improved software engineering practices and the transition to Ada.
Environmental sustainability assessment of hydropower plant in Europe using life cycle assessment
NASA Astrophysics Data System (ADS)
Mahmud, M. A. P.; Huda, N.; Farjana, S. H.; Lang, C.
2018-05-01
Hydropower is the oldest and most common type of renewable source of electricity available on this planet. The end-of-life processes of hydropower plants have significant environmental impacts, which need to be identified and minimized to ensure environmentally friendly power generation. However, the environmental impacts and health hazards of hydropower processing routes have been little explored, despite a significant quantity of production worldwide. This paper highlights the life-cycle environmental impact assessment of reservoir-based hydropower generation systems located in the alpine and non-alpine regions of Europe, addressing their ecological effects by the ReCiPe and CML methods under several impact-assessment categories such as human health, ecosystems, global warming potential, acidification potential, etc. The Australasian life-cycle inventory database and SimaPro software are utilized to accumulate the life-cycle inventory dataset and to evaluate the impacts. The results reveal that plants in the alpine region offer superior environmental performance for two of the considered categories, global warming and photochemical oxidation, whilst in the other categories the outcomes are similar. Results obtained from this study will play an important role in promoting sustainable generation of hydropower, and thus environmentally friendly energy production.
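The characterization step at the heart of such an assessment can be sketched generically as impact score per category = sum over elementary flows of (characterization factor x inventory amount); the factors and inventory below are placeholders, not ReCiPe or CML data.

    # Generic LCIA characterization step (placeholder factors, NOT ReCiPe/CML data):
    # impact_score[category] = sum over elementary flows of CF[category][flow] * amount[flow].
    INVENTORY = {"CO2": 12.0, "CH4": 0.03, "SO2": 0.05}          # kg per MWh (assumed)
    CHARACTERIZATION = {
        "global_warming_kgCO2eq": {"CO2": 1.0, "CH4": 28.0},     # placeholder GWP factors
        "acidification_kgSO2eq":  {"SO2": 1.0},
    }

    def impact_scores(inventory, cfs):
        return {cat: sum(f * inventory.get(flow, 0.0) for flow, f in factors.items())
                for cat, factors in cfs.items()}

    if __name__ == "__main__":
        for category, score in impact_scores(INVENTORY, CHARACTERIZATION).items():
            print(f"{category}: {score:.3f} per MWh")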
e!DAL--a framework to store, share and publish research data.
Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe
2014-06-24
The life-science community faces a major challenge in handling "big data", highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the "big data life cycle". The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed "out-of-the-box" as an on-site repository. e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK's role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de.
NOSC Program Managers Handbook. Revision 1
1988-02-01
cost. The effects of application of life-cycle cost analysis through the planning and RDT&E phases of a program, and the "design to cost" concept on...is the plan for assuring the quality of the design, design documentation, and fabricated/assembled hardware and associated computer software. 13.5.3.2...listings and printouts, which document the requirements, design, or details of computer software; explain the capabilities and limitations of the
Extensibility Experiments with the Software Life-Cycle Support Environment
1991-11-01
APRICOT) and Bit-Oriented Message Definer (BMD); and three from the Ada Software Repository (ASR) at White Sands - the NASA/Goddard Space Flight Center...Graphical Kernel System (GKS). c. AMS - The Automated Measurement System tool supports the definition, collection, and reporting of quality metric...Ada Primitive Order Compilation Order Tool (APRICOT) 2. Bit-Oriented Message Definer (BMD) 3. LGEN: A Language Generator Tool 4. File Checker 5
Towards a general object-oriented software development methodology
NASA Technical Reports Server (NTRS)
Seidewitz, ED; Stark, Mike
1986-01-01
Object diagrams were used to design a 5000-statement team training exercise and to design the entire dynamics simulator. The object diagrams are also being used to design another 50,000-statement Ada system and a personal computer based system that will be written in Modula-2. The design methodology evolves out of these experiences as well as the limitations of other methods that were studied. Object diagrams, abstraction analysis, and associated principles provide a unified framework which encompasses concepts from Yourdon, Booch, and Cherry. This general object-oriented approach handles high level system design, possibly with concurrency, through object-oriented decomposition down to a completely functional level. How object-oriented concepts can be used in other phases of the software life-cycle, such as specification and testing, is being studied concurrently.
AI tools in computer based problem solving
NASA Technical Reports Server (NTRS)
Beane, Arthur J.
1988-01-01
The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low cost, high performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both development and deployment of applications involving a blending of AI and traditional techniques are given.
Object technology: A white paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, S.R.; Arrowood, L.F.; Cain, W.D.
1992-05-11
Object-Oriented Technology (OOT), although not a new paradigm, has recently been prominently featured in the trade press and even general business publications. Indeed, the promises of object technology are alluring: the ability to handle complex design and engineering information through the full manufacturing production life cycle or to manipulate multimedia information, and the ability to improve programmer productivity in creating and maintaining high quality software. Groups at a number of the DOE facilities have been exploring the use of object technology for engineering, business, and other applications. In this white paper, the technology is explored thoroughly and compared with previous means of developing software and storing databases of information. Several specific projects within the DOE Complex are described, and the state of the commercial marketplace is indicated.
The software-cycle model for re-engineering and reuse
NASA Technical Reports Server (NTRS)
Bailey, John W.; Basili, Victor R.
1992-01-01
This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, to perform the identification, extraction, and re-engineering of components, and domain experts, to direct the applications of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.
Manikandan, Narayanan; Subha, Srinivasan
2016-01-01
Software development life cycles have been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software built around prediction-based results is a particular challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather has been an area of extensive research for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to solve problems with financial data and to obtain accurate predictions of future trends and prices. This paper addresses some architectural design issues for performance improvement through vectorising the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive approach for predicting exchange rates, which can be called a hybrid methodology for predicting exchange rates. The framework is tested for the accuracy and performance of the parallel algorithms used. PMID:26881271
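A minimal sketch of the hybrid idea described above, using only synthetic data: a linear autoregressive (econometric-style) model captures the linear structure of an exchange-rate series, a small neural network is trained on its residuals, and the final forecast is the sum of the two components. The data, lag order, and network size here are arbitrary illustrative choices, not the authors' configuration.

    # Hybrid linear-AR + neural-network forecast (illustrative sketch).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0, 0.01, 500)) + 1.3   # synthetic "exchange rate"

    lags = 5
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]

    # 1) Linear AR model (ordinary least squares) captures the linear part.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    linear_pred = A @ coef
    residuals = y - linear_pred

    # 2) A small neural network models the nonlinear residual structure.
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    nn.fit(X, residuals)

    # 3) Hybrid forecast = linear component + neural correction.
    hybrid_pred = linear_pred + nn.predict(X)
    print("in-sample RMSE, linear:", np.sqrt(np.mean((y - linear_pred) ** 2)))
    print("in-sample RMSE, hybrid:", np.sqrt(np.mean((y - hybrid_pred) ** 2)))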
Space station advanced automation
NASA Technical Reports Server (NTRS)
Woods, Donald
1990-01-01
In the development of a safe, productive and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology to augment system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by having well designed and documented KBS software.
NASA Technical Reports Server (NTRS)
Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly; Ganesan, Dharma; Stratton, William C.; Sibol, Deane E.
2008-01-01
Analyze, visualize, and evaluate structure and behavior using static and dynamic information, for individual systems as well as systems of systems. Next steps: refine software tool support; apply to other systems; and apply earlier in the system life cycle.
NREL, Johns Hopkins SAIS Develop Method to Quantify Life Cycle Land Use of Electricity from Natural Gas
News Release, October 2, 2017: A case study provides quantifiable information on the life cycle land use of generating electricity from natural gas.
NASA Astrophysics Data System (ADS)
Kumar, Aravinda; Singh, Jeetendra Kumar; Mohan, K.
2012-06-01
A desuperheater assembly experiences thermal cycling in operation by design. During a power plant's start up, load change and shut down, the thermal gradient is highest. The desuperheater should be able to handle rapid ramp up or ramp down of temperature in these operations. With the "hump style" two-nozzle desuperheater, cracks were appearing in the pipe after only a few cycles of operation. From the field data, it was clear that the desuperheater could not handle the disproportionate thermal expansion occurring in the assembly during temperature ramp up and ramp down in operation, leading to cracks in the piping. Growth of thermal fatigue cracks is influenced by several factors including geometry, severity of thermal stress and applied mechanical load. This paper seeks to determine the cause of failure of the two-nozzle "hump style" desuperheater using Finite Element Method (FEM) simulation techniques. Thermal stress simulation and fatigue life calculation were performed using the commercial FEA software ANSYS [from Ansys Inc, USA]. Simulation results showed that very high thermal stress develops in the region where cracks are seen in the field. From the simulation results, it is also clear that variable thermal expansion of the two nozzle studs creates high stress at the water manifold junction. A simple and viable solution, increasing the length of the manifold, is suggested; it solved the cracking issues in the pipe.
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Baggs, Rhoda
2007-01-01
In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also benefit the overall software life cycle cost with improved software.
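As a small illustration of the kind of refactoring the paper advocates (not code from the paper itself), the sketch below replaces a conditional-heavy function with a Strategy-style design while preserving its external behavior; the discount example and all names are hypothetical.

    # Before: a functional-decomposition style with branching on a type code.
    def price_before(kind, amount):
        if kind == "student":
            return amount * 0.8
        elif kind == "senior":
            return amount * 0.7
        return amount

    # After: the same external behavior, refactored to an object-oriented Strategy.
    class Discount:
        def apply(self, amount):
            return amount

    class StudentDiscount(Discount):
        def apply(self, amount):
            return amount * 0.8

    class SeniorDiscount(Discount):
        def apply(self, amount):
            return amount * 0.7

    STRATEGIES = {"student": StudentDiscount(), "senior": SeniorDiscount()}

    def price_after(kind, amount):
        return STRATEGIES.get(kind, Discount()).apply(amount)

    # External functionality is preserved; only the internal design changed.
    assert price_before("student", 100) == price_after("student", 100)
    assert price_before("other", 100) == price_after("other", 100)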
A Digital Knowledge Preservation Platform for Environmental Sciences
NASA Astrophysics Data System (ADS)
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Pertinez, Esther; Palacio, Aida; Perez, David
2017-04-01
The Digital Knowledge Preservation Platform is the evolution of a pilot project for Open Data supporting the full research data life cycle. It is currently being evolved at IFCA (Instituto de Física de Cantabria) as a combination of different open tools that have been extended: DMPTool (https://dmptool.org/) with pilot semantics features (RDF export, parameters definition), a customized version of INVENIO (http://invenio-software.org/) to integrate the entire research data life cycle, and Jupyter (http://jupyter.org/) as a processing tool and reproducibility environment. This complete platform aims to provide an integrated environment for research data management following the FAIR+R principles: Findable, the web portal based on Invenio provides a search engine and all elements include metadata to make them easily findable; Accessible, both data and software are available online with internal PIDs and DOIs (provided by DataCite); Interoperable, datasets can be combined to perform new analyses, and the OAI-PMH standard is also integrated; Re-usable, different licence types and embargo periods can be defined; +Reproducible, directly integrated with cloud computing resources. The deployment of the entire system over a Cloud framework helps to build a dynamic and scalable solution, not only for managing open datasets but also as a useful tool for the final user, who is able to directly process and analyse the open data. In parallel, the direct use of semantics and metadata is being explored and integrated in the framework. Ontologies, being a knowledge representation, can contribute to defining the elements and relationships of the research data life cycle, including the DMP, datasets, software, etc. The first advantage of developing an ontology of a knowledge domain is that it provides a common vocabulary hierarchy (i.e. a conceptual schema) that can be used and standardized by all the agents interested in the domain (either humans or machines). This way of using ontologies is one of the bases of the Semantic Web, where ontologies are set to play a key role in establishing a common terminology between agents. To develop the ontology we are using a graphical tool called Protégé. Protégé is a graphical ontology-development tool which supports a rich knowledge model and is open-source and freely available. However, in order to process and manage the ontology from the web framework, we are using Semantic MediaWiki, which is able to process queries. Semantic MediaWiki is an extension of MediaWiki in which we can do semantic searches and export data in RDF and CSV formats. This system is used as a testbed for the potential use of semantics in a more general environment. The Digital Knowledge Preservation Platform is closely related to the INDIGO-DataCloud project (https://www.indigo-datacloud.eu), since the same data life cycle approach (Planning, Collect, Curate, Analyze, Publish, Preserve) is taken into account. INDIGO-DataCloud solutions will be able to support all the different elements in the system, as we showed at the last Research Data Alliance Plenary. This presentation will show the different elements of the system and how they work, as well as the roadmap for their continuous integration.
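Since the platform exposes records through the OAI-PMH standard, its metadata can be harvested with a generic client. The sketch below is a minimal example of an OAI-PMH ListRecords request for Dublin Core metadata; the endpoint URL is a placeholder, not the actual IFCA repository address.

    # Minimal OAI-PMH harvest of Dublin Core records (illustrative endpoint).
    import requests
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.org/oai2d"   # placeholder endpoint
    NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
          "dc": "http://purl.org/dc/elements/1.1/"}

    resp = requests.get(BASE_URL,
                        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                        timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)

    # Print title and identifier (e.g., DOI) for each harvested record.
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        titles = [t.text for t in record.findall(".//dc:title", NS)]
        ids = [i.text for i in record.findall(".//dc:identifier", NS)]
        print(titles, ids)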
The methodology of multi-viewpoint clustering analysis
NASA Technical Reports Server (NTRS)
Mehrotra, Mala; Wild, Chris
1993-01-01
One of the greatest challenges facing the software engineering community is the ability to produce large and complex computer systems, such as ground support systems for unmanned scientific missions, that are reliable and cost effective. In order to build and maintain these systems, it is important that the knowledge in the system be suitably abstracted, structured, and otherwise clustered in a manner which facilitates its understanding, manipulation, testing, and utilization. Development of complex mission-critical systems will require the ability to abstract overall concepts in the system at various levels of detail and to consider the system from different points of view. The Multi-ViewPoint Clustering Analysis (MVP-CA) methodology has been developed to provide multiple views of large, complicated systems. MVP-CA provides an ability to discover significant structures by providing an automated mechanism to structure both hierarchically (from detail to abstract) and orthogonally (from different perspectives). We propose to integrate MVP-CA into an overall software engineering life cycle to support the development and evolution of complex mission-critical systems.
Othman, Murnira; Latif, Mohd Talib; Mohamed, Ahmad Fariz
2018-02-01
This study intends to determine the health impacts from two office building life cycles (St.1 and St.2) using life cycle assessment (LCA) and health risk assessment of indoor metals in coarse particulates (particulate matter with diameters of less than 10 µm). The first building (St.1) is located in the city centre and the second building (St.2) is located within a new development 7 km away from the city centre. All life cycle stages were considered and analysed using SimaPro software. The trace metal concentrations were determined by inductively coupled plasma mass spectrometry (ICP-MS). Particle deposition in the human lung was estimated using the multiple-path particle dosimetry model (MPPD). The results showed that the total human health impact for St.1 (0.027 DALY m-2) was higher than for St.2 (0.005 DALY m-2) for a 50-year lifespan, with the highest contribution from the operational phase. The potential health risk to indoor workers was quantified as a hazard quotient (HQ) for non-carcinogenic elements, where the total values for ingestion contact were 4.38E-08 (St.1) and 2.59E-08 (St.2), while for dermal contact the values were 5.12E-09 (St.1) and 2.58E-09 (St.2). For the carcinogenic risk, the values for dermal and ingestion routes for both St.1 and St.2 were lower than the acceptable limit, indicating no carcinogenic risk. Deposition of coarse particles in indoor workers was concentrated in the head region, followed by the pulmonary region and the tracheobronchial tract. The results from this study showed that human health can be significantly affected by all the processes in an office building life cycle; thus minimising energy consumption and pollutant exposure is crucial.
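For readers unfamiliar with the risk metrics quoted above, the hazard quotient and incremental cancer risk are conventionally computed as a dose divided by a reference dose and a dose multiplied by a slope factor, respectively. The sketch below shows that arithmetic with purely hypothetical input values; it does not reproduce the study's exposure parameters.

    # Conventional (simplified) health-risk arithmetic with hypothetical values.
    def hazard_quotient(average_daily_dose, reference_dose):
        # HQ < 1 is usually read as "no appreciable non-carcinogenic risk".
        return average_daily_dose / reference_dose

    def cancer_risk(chronic_daily_intake, slope_factor):
        # Incremental lifetime cancer risk; roughly 1e-6 to 1e-4 is a common acceptability band.
        return chronic_daily_intake * slope_factor

    # Hypothetical metal exposure via ingestion (mg per kg body weight per day).
    add = 1.2e-7     # average daily dose, assumed
    rfd = 3.0e-4     # reference dose, assumed
    sf = 1.5         # cancer slope factor, assumed

    print("HQ =", hazard_quotient(add, rfd))
    print("ILCR =", cancer_risk(add, sf))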
NASA Astrophysics Data System (ADS)
Melton, R.; Thomas, J.
With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of the software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging, alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower cost to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture that provides several unique capabilities to move the current legacy SSA software paradigm to an open-source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing makes the visualization of both ground and on-orbit assets simple and straightforward.
NREL: U.S. Life Cycle Inventory Database - User Poll
User Poll: In preparation for the 2009 U.S. Life Cycle Inventory (LCI) Data Stakeholder meeting, NREL polled users interested in life cycle analysis. The results from that poll, together with information gathered from the stakeholders and feedback from life cycle analysis supporters, helped develop the U.S. Life Cycle Inventory Database.
Wake Cycle Robustness of the Mars Science Laboratory Flight Software
NASA Technical Reports Server (NTRS)
Whitehill, Robert
2011-01-01
The Mars Science Laboratory (MSL) is a spacecraft being developed by the Jet Propulsion Laboratory (JPL) for the purpose of in-situ exploration on the surface of Mars. The objective of MSL is to explore and quantitatively assess a local region on the Martian surface as a habitat for microbial life, past or present. This objective will be accomplished through the assessment of the biological potential of at least one target environment, the characterization of the geology and geochemistry of the landing region, an investigation of the planetary processes relevant to past habitability, and a characterization of surface radiation. For this purpose, MSL incorporates a total of ten scientific instruments whose functions include, among others, atmospheric and descent imaging, chemical composition analysis, and radiation measurement. The Flight Software (FSW) system is responsible for all mission phases, including launch, cruise, entry-descent-landing, and surface operation of the rover. Because of the essential nature of flight software to project success, each of the software modules is undergoing extensive testing to identify and correct errors.
Software Product Lines: Report of the 2010 US Army Software Product Line Workshop
2010-06-01
requirements and statement of work (SOW) tasks can be included in the request for proposal (RFP) and the contract. 2.2.1 Basic Product Line Acquisition... SOW tasks in Figure 1. Two additional tasks (at the third tier level) account for sustaining the production capability over the life cycle and... Acquisition Strategy RFP and SOW Initial Product Line Scope Product Line Business Case Capability Description Document Teaming Product Line
Support for comprehensive reuse
NASA Technical Reports Server (NTRS)
Basili, V. R.; Rombach, H. D.
1991-01-01
Reuse of products, processes, and other knowledge will be the key to enabling the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demands. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows comprehensive reuse of all kinds of software-related experience could provide the means to achieve the desired order-of-magnitude improvements. A comprehensive framework of models, model-based characterization schemes, and support mechanisms for better understanding, evaluating, planning, and supporting all aspects of reuse is introduced.
Model format for a vaccine stability report and software solutions.
Shin, Jinho; Southern, James; Schofield, Timothy
2009-11-01
A session of the International Association for Biologicals Workshop on Stability Evaluation of Vaccine, a Life Cycle Approach was devoted to a model format for a vaccine stability report and to software solutions. Presentations highlighted the utility of a model format that will conform to regulatory requirements and the ICH common technical document. However, there needs to be flexibility to accommodate individual company practices. Adoption of a model format is premised upon agreement regarding content between industry and regulators, and on ease of use. Software requirements will include ease of use and protections against inadvertent misspecification of the stability design or misinterpretation of program output.
Upgrading Custom Simulink Library Components for Use in Newer Versions of Matlab
NASA Technical Reports Server (NTRS)
Stewart, Camiren L.
2014-01-01
The Spaceport Command and Control System (SCCS) at Kennedy Space Center (KSC) is a control system for monitoring and launching manned launch vehicles. Simulations of ground support equipment (GSE) and the launch vehicle systems are required throughout the life cycle of SCCS to test software, hardware, and procedures and to train the launch team. The simulations of the GSE at the launch site, in conjunction with off-line processing locations, are developed using Simulink, a piece of Commercial Off-The-Shelf (COTS) software. The simulations that are built are then converted into code and run in a simulation engine called Trick, a Government Off-The-Shelf (GOTS) piece of software developed by NASA. In the world of hardware and software, it is not uncommon for the products that are utilized to be upgraded and patched or to eventually fade into obsolescence. In the case of SCCS simulation software, MathWorks has released a number of stable versions of Matlab and Simulink since the software was deployed on the Development Work Stations in the Linux environment (DWLs). The upgraded versions of Simulink have introduced a number of new tools and resources that, if utilized fully and correctly, will save time and resources during the overall development of the GSE simulation and its corresponding documentation. Unfortunately, simply importing the already built simulations into the new Matlab environment will not suffice, as it may produce results that differ from those expected in the version currently being utilized. Thus, an upgrade execution plan was developed and executed to fully upgrade the simulation environment to one of the latest versions of Matlab.
The Improvement Cycle: Analyzing Our Experience
NASA Technical Reports Server (NTRS)
Pajerski, Rose; Waligora, Sharon
1996-01-01
NASA's Software Engineering Laboratory (SEL), one of the earliest pioneers in the areas of software process improvement and measurement, has had a significant impact on the software business at NASA Goddard. At the heart of the SEL's improvement program is a belief that software products can be improved by optimizing the software engineering process used to develop them and a long-term improvement strategy that facilitates small incremental improvements that accumulate into significant gains. As a result of its efforts, the SEL has incrementally reduced development costs by 60%, decreased error rates by 85%, and reduced cycle time by 25%. In this paper, we analyze the SEL's experiences on three major improvement initiatives to better understand the cyclic nature of the improvement process and to understand why some improvements take much longer than others.
Life Cycle Assessment for the Production of Oil Palm Seeds
Muhamad, Halimah; Ai, Tan Yew; Khairuddin, Nik Sasha Khatrina; Amiruddin, Mohd Din; May, Choo Yuen
2014-01-01
The oil palm seed production unit that generates germinated oil palm seeds is the first link in the palm oil supply chain, followed by the nursery to produce seedling, the plantation to produce fresh fruit bunches (FFB), the mill to produce crude palm oil (CPO) and palm kernel, the kernel crushers to produce crude palm kernel oil (CPKO), the refinery to produce refined palm oil (RPO) and finally the palm biodiesel plant to produce palm biodiesel. This assessment aims to investigate the life cycle assessment (LCA) of germinated oil palm seeds and the use of LCA to identify the stage/s in the production of germinated oil palm seeds that could contribute to the environmental load. The method for the life cycle impact assessment (LCIA) is modelled using SimaPro version 7, (System for Integrated environMental Assessment of PROducts), an internationally established tool used by LCA practitioners. This software contains European and US databases on a number of materials in addition to a variety of European- and US-developed impact assessment methodologies. LCA was successfully conducted for five seed production units and it was found that the environmental impact for the production of germinated oil palm was not significant. The characterised results of the LCIA for the production of 1000 germinated oil palm seeds showed that fossil fuel was the major impact category followed by respiratory inorganics and climate change. PMID:27073598
Crash Attenuator Data Collection and Life Cycle Tool Development
DOT National Transportation Integrated Search
2014-06-14
This research study was aimed at data collection and development of a decision support tool for life cycle cost assessment of crash attenuators. Assessing attenuator life cycle costs based on in-place expected costs, and not just the initial cost, enhances...
The Utility of Handheld Programmable Calculators in Aircraft Life Cycle Cost Estimation.
1982-09-01
are available for extended memory, hardcopy printout, video interface, and special application software. Any calculator of comparable memory could... conditioning system. OG Total number of engine, air turbine motor (ATM) and auxiliary power unit (APU) driven generator/alternators. OHP Total number
NASA Astrophysics Data System (ADS)
Brezgin, V. I.; Brodov, Yu M.; Kultishev, A. Yu
2017-11-01
The report reviews improvement methods in the design and operation of steam turbine units based on the application of modern information technologies. In accordance with the life cycle support methodology, a conceptual model of the information support system for the main life cycle (LC) stages of a steam turbine unit is suggested. A classifying system, which ensures the creation of sustainable information links between the engineering team (manufacturer's plant) and customer organizations (power plants), is proposed. Within the report, the principle of extending parameterization beyond geometric constructions in the design and improvement of steam turbine unit equipment is proposed, studied and justified. The report presents a steam turbine unit equipment design methodology based on a brand new oil-cooler design system that has been developed and implemented by the authors. This design system combines a construction subsystem, characterized by extensive use of family tables and templates, and a computation subsystem, which includes a methodology for zone-by-zone thermal-hydraulic oil-cooler design calculations. The report presents data on the developed software for operational monitoring and assessment of equipment parameters, as well as its implementation at five power plants.
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs with the parametric cost estimates data. This is accomplished using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life cycle costs are estimated on the proprietary parametric cost model (PCM) with inputs organized by a project WBS. Preliminary life cycle schedules are also included.
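The parametric approach described above rests on cost estimating relationships (CERs) and learning curves. The sketch below is a generic, hypothetical illustration of that style of estimate, combining a weight-based power-law CER with a unit learning curve; the coefficients are invented and have no connection to the study's proprietary parametric cost model.

    # Generic parametric cost sketch: power-law CER plus a production learning curve.
    import math

    def cer_first_unit_cost(dry_mass_kg, a=0.9, b=0.85):
        # Hypothetical CER: cost in $M = a * mass^b. Coefficients are illustrative only.
        return a * dry_mass_kg ** b

    def unit_cost(first_unit_cost, unit_number, learning=0.90):
        # Crawford-style unit learning curve: each doubling of quantity scales cost by 'learning'.
        exponent = math.log(learning, 2)
        return first_unit_cost * unit_number ** exponent

    t1 = cer_first_unit_cost(4500.0)                            # hypothetical 4500 kg element
    production = sum(unit_cost(t1, n) for n in range(1, 5))     # four production units
    print(f"first unit: {t1:.1f} $M, four-unit production run: {production:.1f} $M")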
Comparative muscle development of scyphozoan jellyfish with simple and complex life cycles.
Helm, Rebecca R; Tiozzo, Stefano; Lilley, Martin K S; Lombard, Fabien; Dunn, Casey W
2015-01-01
Simple life cycles arise from complex life cycles when one or more developmental stages are lost. This raises a fundamental question - how can an intermediate stage, such as a larva, be removed, and development still produce a normal adult? To address this question, we examined development in several species of pelagiid jellyfish. Most members of Pelagiidae have a complex life cycle with a sessile polyp that gives rise to ephyrae (juvenile medusae); but one species within Pelagiidae, Pelagia noctiluca, spends its whole life in the water column, developing from a larva directly into an ephyra. In many complex life cycles, adult features develop from cell populations that remain quiescent in larvae; this is known as life cycle compartmentalization and may facilitate the evolution of direct life cycles. A second type of metamorphic process, known as remodeling, occurs when adult features are formed through modification of already differentiated larval structures. We examined muscle morphology to determine which of these alternatives may be present in Pelagiidae. We first examined the structure and development of polyp and ephyra musculature in Chrysaora quinquecirrha, a close relative of P. noctiluca with a complex life cycle. Using phallotoxin staining and confocal microscopy, we verified that polyps have four to six cord muscles that persist in strobilae and discovered that cord muscle is physically separated from ephyra muscle. When cord muscle is removed from ephyra segments, normal ephyra muscle still develops. This suggests that polyp cord muscle is not necessary for ephyra muscle formation. We also found no evidence of polyp-like muscle in P. noctiluca. In both species, we discovered that ephyra muscle arises de novo in a similar manner, regardless of the life cycle. The separate origins of polyp and ephyra muscle in C. quinquecirrha and the absence of polyp-like muscle in P. noctiluca suggest that polyp muscle is not remodeled to form ephyra muscle in Pelagiidae. Life cycle stages in Scyphozoa may instead be compartmentalized. Because polyp muscle is not directly remodeled, this may have facilitated the loss of the polyp stage in the evolution of P. noctiluca.
NASA Astrophysics Data System (ADS)
Frouin, Jerome; Sathish, Shamachary; Na, Jeong K.
2000-05-01
An in-situ technique to measure sound velocity, ultrasonic attenuation and acoustic nonlinearity has been developed for characterization and early detection of fatigue damage in aerospace materials. For this purpose we have developed computer software and a measurement technique, including hardware, for automation of the measurement. A new transducer holder and special grips were designed. The automation has allowed us to test the long-term stability of the electronics over a period of time and thus prove the linearity of the system. Real-time monitoring of the material nonlinearity has been performed on dog-bone specimens from zero fatigue all the way to final fracture under low-cycle fatigue (LCF) and high-cycle fatigue (HCF) test conditions. Real-time health monitoring of the material can greatly contribute to the understanding of material behavior under cyclic loading. Interpretation of the results shows that a correlation exists between the slope of the curve described by the material nonlinearity and the life of the component. This new methodology was developed with the objective of predicting the initiation of fatigue microcracks, detecting fatigue crack initiation in situ, and quantifying early stages of fatigue damage.
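For context, the "material nonlinearity" tracked in such measurements is commonly expressed through the second-order acoustic nonlinearity parameter. For a longitudinal wave of fundamental amplitude A1 that generates a second harmonic of amplitude A2 after propagating a distance x with wavenumber k, a standard textbook form (not necessarily the exact estimator used in this work) is:

    \beta = \frac{8 A_2}{k^{2} x A_1^{2}}

The growth of \beta with accumulated fatigue cycles is what underlies the slope-versus-remaining-life correlation mentioned above.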
Towards improving software security by using simulation to inform requirements and conceptual design
Nutaro, James J.; Allgood, Glenn O.; Kuruganti, Teja
2015-06-17
We illustrate the use of modeling and simulation early in the system life cycle to improve security and reduce costs. The models that we develop for this illustration are inspired by problems in reliability analysis and supervisory control, for which similar models are used to quantify failure probabilities and rates. In the context of security, we propose that models of this general type can be used to understand trades between risk and cost while writing system requirements and during conceptual design, and thereby significantly reduce the need for expensive security corrections after a system enters operation.
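A minimal sketch of the kind of trade the authors describe, under invented numbers: a Monte Carlo model estimates the probability that at least one attack succeeds over a system's life, with and without a candidate mitigation, so risk reduction can be weighed against the mitigation's cost at requirements time. The attack rate, success probabilities, and costs are assumptions for illustration only.

    # Monte Carlo trade of security risk vs. mitigation cost (illustrative numbers).
    import random

    def prob_compromise(years, attacks_per_year, p_success, trials=20_000, seed=1):
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            compromised = any(rng.random() < p_success
                              for _ in range(years * attacks_per_year))
            hits += compromised
        return hits / trials

    LIFE_YEARS, ATTACKS_PER_YEAR = 15, 20
    LOSS_IF_COMPROMISED = 5_000_000      # assumed consequence cost in dollars
    MITIGATION_COST = 400_000            # assumed up-front mitigation cost in dollars

    p_base = prob_compromise(LIFE_YEARS, ATTACKS_PER_YEAR, p_success=0.002)
    p_mit = prob_compromise(LIFE_YEARS, ATTACKS_PER_YEAR, p_success=0.0005)
    expected_saving = (p_base - p_mit) * LOSS_IF_COMPROMISED
    print("P(compromise): baseline", round(p_base, 3), "mitigated", round(p_mit, 3))
    print("expected loss avoided:", round(expected_saving), "vs mitigation cost:", MITIGATION_COST)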
Review of Estelle and LOTOS with respect to critical computer applications
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
Man-rated NASA space vehicles seem to represent a set of ultimate critical computer applications. These applications require a high degree of security, integrity, and safety. A variety of formal and/or precise modeling techniques are becoming available for the designer of critical systems. The design phase of the software engineering life cycle includes the modification of non-development components. A review of the Estelle and LOTOS formal description languages is presented. Details of the languages and a set of references are provided. The languages were used to formally describe some of the Open System Interconnect (OSI) protocols.
Richard D. Bergman; James Salazar; Scott Bowe
2012-01-01
Static life cycle assessment does not fully describe the carbon footprint of construction wood because of carbon changes in the forest and product pools over time. This study developed a dynamic greenhouse gas (GHG) inventory approach using US Forest Service and life-cycle data to estimate GHG emissions on construction wood for two different end-of-life scenarios....
Examination of the Open Market Corridor
2003-12-01
D. Benefits of the Purchase Card Program; 1. List of Benefits; 2. Additional Benefits and How OMC Can Increase the Benefits; E. Weaknesses of... software licenses and support services. Estimated life-cycle costs for FY 1995 through FY 2005 are $3.7 billion. Operational benefits from SPS are
Choosing an Optical Disc System: A Guide for Users and Resellers.
ERIC Educational Resources Information Center
Vane-Tempest, Stewart
1995-01-01
Presents a guide for selecting an optical disc system. Highlights include storage hierarchy; standards; data life cycles; security; implementing an optical jukebox system; optimizing the system; performance; quality and reliability; software; cost of online versus near-line; and growing opportunities. Sidebars provide additional information on…
NASA Technical Reports Server (NTRS)
1991-01-01
Recommendations are made after 32 interviews, lesson identification, lesson analysis, and mission characteristics identification. The major recommendations are as follows: (1) to develop end-to-end planning and scheduling operations concepts by mission class and to ensure their consideration in system life cycle documentation; (2) to create an organizational infrastructure at the Code 500 level, supported by a Directorate level steering committee with project representation, responsible for systems engineering of end-to-end planning and scheduling systems; (3) to develop and refine mission capabilities to assess impacts of early mission design decisions on planning and scheduling; and (4) to emphasize operational flexibility in the development of the Advanced Space Network, other institutional resources, external (e.g., project) capabilities and resources, operational software and support tools.
Optimizing spacecraft design - optimization engine development: progress and plans
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Dunphy, Julia R; Salcedo, Jose; Menzies, Tim
2003-01-01
At JPL and NASA, a process has been developed to perform life cycle risk management. This process requires users to identify: goals and objectives to be achieved (and their relative priorities), the various risks to achieving those goals and objectives, and options for risk mitigation (prevention, detection ahead of time, and alleviation). Risks are broadly defined to include the risk of failing to design a system with adequate performance, compatibility and robustness in addition to more traditional implementation and operational risks. The options for mitigating these different kinds of risks can include architectural and design choices, technology plans and technology back-up options, test-bed and simulation options, engineering models and hardware/software development techniques and other more traditional risk reduction techniques.
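To make the goals/risks/mitigations structure concrete, here is a small, hypothetical prioritization sketch in the spirit of that process (not the actual DDP tool or its data): each mitigation is scored by the objective-weighted risk it removes per unit cost, and options are selected greedily within a budget.

    # Greedy selection of risk mitigations by risk-reduction per unit cost (hypothetical data).
    risks = {            # risk id -> (likelihood, objective-weighted impact)
        "thermal_margin":  (0.30, 8.0),
        "sw_timing":       (0.20, 6.0),
        "radiation_upset": (0.10, 9.0),
    }
    mitigations = [      # (name, cost in $K, {risk id: fraction of that risk removed})
        ("add thermal testbed",  300, {"thermal_margin": 0.7}),
        ("timing analysis tool", 120, {"sw_timing": 0.5}),
        ("rad-hard parts",       500, {"radiation_upset": 0.8}),
    ]

    def expected_risk(entry):
        likelihood, impact = entry
        return likelihood * impact

    budget, plan, scored = 600, [], []
    for name, cost, effect in mitigations:
        reduction = sum(expected_risk(risks[rid]) * frac for rid, frac in effect.items())
        scored.append((reduction / cost, reduction, cost, name))
    for ratio, reduction, cost, name in sorted(scored, reverse=True):
        if cost <= budget:
            budget -= cost
            plan.append((name, round(reduction, 2)))
    print("selected:", plan, "| unspent budget ($K):", budget)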
Evolution and regulation of complex life cycles: a brown algal perspective.
Cock, J Mark; Godfroy, Olivier; Macaisne, Nicolas; Peters, Akira F; Coelho, Susana M
2014-02-01
The life cycle of an organism is one of its fundamental features, influencing many aspects of its biology. The brown algae exhibit a diverse range of life cycles indicating that transitions between life cycle types may have been key adaptive events in the evolution of this group. Life cycle mutants, identified in the model organism Ectocarpus, are providing information about how life cycle progression is regulated at the molecular level in brown algae. We explore some of the implications of the phenotypes of the life cycle mutants described to date and draw comparisons with recent insights into life cycle regulation in the green lineage. Given the importance of coordinating growth and development with life cycle progression, we suggest that the co-option of ancient life cycle regulators to control key developmental events may be a common feature in diverse groups of multicellular eukaryotes.
Modi, Nishit B
2017-05-01
Increasing costs in discovering and developing new molecular entities and the continuing debate on limited company pipelines mean that pharmaceutical companies are under significant pressure to maximize the value of approved products. Life cycle management in the context of drug development comprises activities to maximize the effective life of a product. Life cycle approaches can involve new formulations, new routes of delivery, new indications or expansion of the population for whom the product is indicated, or development of combination products. Life cycle management may provide an opportunity to improve upon the current product through enhanced efficacy or reduced side effects and could expand the therapeutic market for the product. Successful life cycle management may include the potential for superior efficacy, improved tolerability, or a better prescriber or patient acceptance. Unlike generic products where bioequivalence to an innovator product may be sufficient for drug approval, life cycle management typically requires a series of studies to characterize the value of the product. This review summarizes key considerations in identifying product candidates that may be suitable for life cycle management and discusses the application of pharmacokinetics and pharmacodynamics in developing new products using a life cycle management approach. Examples and a case study to illustrate how pharmacokinetics and pharmacodynamics contributed to the selection of dosing regimens, demonstration of an improved therapeutic effect, or regulatory approval of an improved product label are presented.
Enhanced CARES Software Enables Improved Ceramic Life Prediction
NASA Technical Reports Server (NTRS)
Janosik, Lesley A.
1997-01-01
The NASA Lewis Research Center has developed award-winning software that enables American industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, graphite) structures in a wide variety of 21st century applications. The CARES (Ceramics Analysis and Reliability Evaluation of Structures) series of software is successfully used by numerous engineers in industrial, academic, and government organizations as an essential element of the structural design and material selection processes. The latest version of this software, CARES/Life, provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. CARES/Life was recently enhanced by adding new modules designed to improve functionality and user-friendliness. In addition, a beta version of the newly developed CARES/Creep program (for determining the creep life of monolithic ceramic components) has just been released to selected organizations.
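The probabilistic life prediction that CARES-type tools perform is rooted in Weibull statistics for brittle fracture. As a simplified reminder of the underlying form (the two-parameter Weibull expression for fast-fracture failure probability of a uniformly stressed volume, not the full CARES/Life time-dependent formulation):

    P_f = 1 - \exp\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]

where m is the Weibull modulus, \sigma_0 the characteristic strength, and V the stressed volume; CARES/Life builds on statistics of this kind, adding multiaxial stress integration and slow-crack-growth models to express failure probability as a function of service time.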
Use of software engineering techniques in the design of the ALEPH data acquisition system
NASA Astrophysics Data System (ADS)
Charity, T.; McClatchey, R.; Harvey, J.
1987-08-01
The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.
Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...
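A worked statement of the normalization step described here, in the usual LCIA form: each characterized impact score is divided by a reference value (for example, the impact of an average person or region over a year), so that otherwise incommensurable category results can be compared on a common relative scale:

    N_i = \frac{S_i}{R_i}

where S_i is the characterized result for impact category i, R_i is the normalization reference for that category, and N_i is the normalized result.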
The Experience Factory: Strategy and Practice
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi
1995-01-01
The quality movement, which has had a dramatic impact on all industrial sectors in recent years, has now reached the systems and software industry. Although some concepts of quality management, originally developed for other product types, can be applied to software, its specificity as a product which is developed and not produced requires a special approach. This paper introduces a quality paradigm specifically tailored to the problems of the systems and software industry. Reuse of products, processes and experiences originating from the system life cycle is seen today as a feasible solution to the problem of developing higher quality systems at a lower cost. In fact, quality improvement is very often achieved by defining and developing an appropriate set of strategic capabilities and core competencies to support them. A strategic capability is, in this context, a corporate goal defined by the business position of the organization and implemented by key business processes. Strategic capabilities are supported by core competencies, which are aggregate technologies tailored to the specific needs of the organization in performing the needed business processes. Core competencies are non-transitional, have a consistent evolution, and are typically fueled by multiple technologies. Their selection and development requires commitment, investment and leadership. The paradigm introduced in this paper for developing core competencies is the Quality Improvement Paradigm, which consists of six steps: (1) Characterize the environment, (2) Set the goals, (3) Choose the process, (4) Execute the process, (5) Analyze the process data, and (6) Package experience. The process must be supported by a goal-oriented approach to measurement and control, and an organizational infrastructure, called the Experience Factory. The Experience Factory is a logical and physical organization distinct from the project organizations it supports. Its goal is the development and support of core competencies through capitalization and reuse of life cycle experience and products. The paper introduces the major concepts of the proposed approach, discusses their relationship with other approaches used in the industry, and presents a case in which those concepts have been successfully applied.
Evolving software reengineering technology for the emerging innovative-competitive era
NASA Technical Reports Server (NTRS)
Hwang, Phillip Q.; Lock, Evan; Prywes, Noah
1994-01-01
This paper reports on a multi-tool commercial/military environment combining software Domain Analysis techniques with Reusable Software and Reengineering of Legacy Software. It is based on the development of a military version for the Department of Defense (DOD). The integrated tools in the military version are: Software Specification Assistant (SSA) and Software Reengineering Environment (SRE), developed by Computer Command and Control Company (CCCC) for Naval Surface Warfare Center (NSWC) and Joint Logistics Commanders (JLC), and the Advanced Research Project Agency (ARPA) STARS Software Engineering Environment (SEE) developed by Boeing for NAVAIR PMA 205. The paper describes transitioning these integrated tools to commercial use. There is a critical need for the transition for the following reasons: First, to date, 70 percent of programmers' time is applied to software maintenance. The work of these users has not been facilitated by existing tools. The addition of Software Reengineering will also facilitate software maintenance and upgrading. In fact, the integrated tools will support the entire software life cycle. Second, the integrated tools are essential to Business Process Reengineering, which seeks radical process innovations to achieve breakthrough results. Done well, process reengineering delivers extraordinary gains in process speed, productivity and profitability. Most importantly, it discovers new opportunities for products and services in collaboration with other organizations. Legacy computer software must be changed rapidly to support innovative business processes. The integrated tools will provide commercial organizations important competitive advantages. This, in turn, will increase employment by creating new business opportunities. Third, the integrated system will produce much higher quality software than use of the tools separately. The reason for this is that producing or upgrading software requires keen understanding of extremely complex applications which is facilitated by the integrated tools. The radical savings in the time and cost associated with software, due to use of CASE tools that support combined Reuse of Software and Reengineering of Legacy Code, will add an important impetus to improving the automation of enterprises. This will be reflected in continuing operations, as well as in innovating new business processes. The proposed multi-tool software development is based on state of the art technology, which will be further advanced through the use of open systems for adding new tools and experience in their use.
Improving Reuse in Software Development for the Life Sciences
ERIC Educational Resources Information Center
Iannotti, Nicholas V.
2013-01-01
The last several years have seen unprecedented advancements in the application of technology to the life sciences, particularly in the area of data generation. Novel scientific insights are now often driven primarily by software development supporting new multidisciplinary and increasingly multifaceted data analysis. However, despite the…
Advanced nickel-hydrogen spacecraft battery development
NASA Technical Reports Server (NTRS)
Coates, Dwaine K.; Fox, Chris L.; Standlee, D. J.; Grindstaff, B. K.
1994-01-01
Eagle-Picher currently has several advanced nickel-hydrogen (NiH2) cell component and battery designs under development including common pressure vessel (CPV), single pressure vessel (SPV), and dependent pressure vessel (DPV) designs. A CPV NiH2 battery, utilizing low-cost 64 mm (2.5 in.) cell diameter technology, has been designed and built for multiple smallsat programs, including the TUBSAT B spacecraft which is currently scheduled (24 Nov. 93) for launch aboard a Russian Proton rocket. An advanced 90 mm (3.5 in.) NiH2 cell design is currently being manufactured for the Space Station Freedom program. Prototype 254 mm (10 in.) diameter SPV batteries are currently under construction and initial boilerplate testing has shown excellent results. NiH2 cycle life testing is being continued at Eagle-Picher and IPV cells have currently completed more than 89,000 accelerated LEO cycles at 15% DOD, 49,000 real-time LEO cycles at 30 percent DOD, 37,800 cycles under a real-time LEO profile, 30 eclipse seasons in accelerated GEO, and 6 eclipse seasons in real-time GEO testing at 75 percent DOD maximum. Nickel-metal hydride battery development is continuing for both aerospace and electric vehicle applications. Eagle-Picher has also developed an extensive range of battery evaluation, test, and analysis (BETA) measurement and control equipment and software, based on Hewlett-Packard computerized data acquisition/control hardware.
Integrated Component-based Data Acquisition Systems for Aerospace Test Facilities
NASA Technical Reports Server (NTRS)
Ross, Richard W.
2001-01-01
The Multi-Instrument Integrated Data Acquisition System (MIIDAS), developed by the NASA Langley Research Center, uses commercial off the shelf (COTS) products, integrated with custom software, to provide a broad range of capabilities at a low cost throughout the system's entire life cycle. MIIDAS combines data acquisition capabilities with online and post-test data reduction computations. COTS products lower purchase and maintenance costs by reducing the level of effort required to meet system requirements. Object-oriented methods are used to enhance modularity, encourage reusability, and promote adaptability, reducing software development costs. Using only COTS products and custom software supported on multiple platforms reduces the cost of porting the system to other platforms. The post-test data reduction capabilities of MIIDAS have been installed at four aerospace testing facilities at NASA Langley Research Center. The systems installed at these facilities provide a common user interface, reducing the training time required for personnel who work across multiple facilities. The techniques employed by MIIDAS enable NASA to build a system with a lower initial purchase price and reduced sustaining maintenance costs. With MIIDAS, NASA has built a highly flexible next-generation data acquisition and reduction system for aerospace test facilities that meets customer expectations.
NASA Technical Reports Server (NTRS)
Kocher, Walter M.
2003-01-01
Pollution prevention (P2) opportunities and Greening the Government (GtG) activities, including the development of the Real-Time Environmental Monitoring System (RTEMS), are currently under development at the NASA Glenn Research Center. The RTEMS project entails the ongoing development of a monitoring system which includes sensors, instruments, computer hardware and software, plus a data telemetry system. Professor Kocher has been directing the RTEMS project for more than 3 years, and the implementation of the prototype system at GRC will be a major portion of his summer effort. This prototype will provide multimedia environmental monitoring and control capabilities, although water quality and air emissions will be the immediate issues addressed this summer. Applications beyond those currently identified for environmental purposes will also be explored.
Formal Analysis of the Remote Agent Before and After Flight
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Lowry, Mike; Park, SeungJoon; Pecheur, Charles; Penix, John; Visser, Willem; White, Jon L.
2000-01-01
This paper describes two separate efforts that used the SPIN model checker to verify deep space autonomy flight software. The first effort occurred at the beginning of a spiral development process and found five concurrency errors early in the design cycle that the developers acknowledge would not have been found through testing. This effort required a substantial manual modeling effort involving both abstraction and translation from the prototype LISP code to the PROMELA language used by SPIN. This experience and others led to research to address the gap between formal method tools and the development cycle used by software developers. The Java PathFinder tool, which directly translates from Java to PROMELA, was developed as part of this research, as well as automatic abstraction tools. In 1999 the flight software flew on a space mission, and a deadlock occurred in a sibling subsystem to the one which was the focus of the first verification effort. A second quick-response "cleanroom" verification effort found the concurrency error in a short amount of time. The error was isomorphic to one of the concurrency errors found during the first verification effort. The paper demonstrates that formal methods tools can find concurrency errors that indeed lead to loss of spacecraft functions, even for the complex software required for autonomy. Second, it describes progress in automatic translation and abstraction that eventually will enable formal methods tools to be inserted directly into the aerospace software development cycle.
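The concurrency errors discussed above are of the classic lock-ordering kind that testing easily misses but a model checker such as SPIN or Java PathFinder can exhaustively expose. The toy example below (written in Python purely for illustration, not in the mission's LISP or in PROMELA) shows two tasks acquiring the same pair of locks in opposite order, which deadlocks only under certain interleavings.

    # Classic lock-ordering deadlock: only some interleavings hang, so tests may pass.
    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def task_one():
        with lock_a:
            time.sleep(0.1)        # widen the window for the bad interleaving
            with lock_b:
                print("task_one finished")

    def task_two():
        with lock_b:
            time.sleep(0.1)
            with lock_a:
                print("task_two finished")

    t1 = threading.Thread(target=task_one, daemon=True)
    t2 = threading.Thread(target=task_two, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    print("deadlocked" if t1.is_alive() or t2.is_alive() else "completed")

A model checker explores all interleavings systematically, which is why it reports this class of defect even when ordinary test runs happen to complete.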
NASA Astrophysics Data System (ADS)
Misceo, Monica; Buonamici, Roberto; Buttol, Patrizia; Naldesi, Luciano; Grimaldi, Filomena; Rinaldi, Caterina
2004-12-01
TESPI (Tool for Environmental Sound Product Innovation) is the prototype of a software tool developed within the framework of the "eLCA" project. The project (www.elca.enea.it), financed by the European Commission, is realising "On line green tools and services for Small and Medium sized Enterprises (SMEs)". The implementation of environmental product innovation by SMEs (as fostered by the European Integrated Product Policy, IPP) needs specific adaptation to their economic model, their knowledge of production and management processes, and their relationships with innovation and the environment. In particular, quality and costs are the main driving forces of innovation in European SMEs, and well-known barriers exist to the adoption of an environmental approach in product design. Starting from these considerations, the TESPI tool has been developed to support the first steps of product design, taking into account both quality and the environment. Two main issues have been considered: (i) classic Quality Function Deployment (QFD) can hardly be proposed to SMEs; (ii) the environmental aspects of the product life cycle need to be integrated with the quality approach. TESPI is a user-friendly web-based tool, has a training approach and applies to modular products. Users are guided through the investigation of the quality aspects of their product (fulfilment of customer needs and requirements) and the identification of the key environmental aspects in the product's life cycle. A simplified checklist allows the environmental performance of the product to be analyzed. Help is available for a better understanding of the analysis criteria. As a result, the significant aspects for the redesign of the product are identified.
NASA Astrophysics Data System (ADS)
Dwi Susanto, Tony; Ingesti Prasetyo, Anisa; Astuti, Hanim Maria
2018-03-01
The web has become an important information medium, used not only in infotainment, government, and education but also in health care to provide information effectively. BloobIS is a web-based application that integrates blood supply and distribution information at the Blood Transfusion Unit. How easily information can be obtained from BloobIS depends on how convenient the website is to use. The BloobIS website is nearing completion, but it has not yet been tested with users and remains in the testing phase of the software development life cycle. An evaluation, namely software quality control focused on the usability of the BloobIS web application, is therefore required. Hallway usability testing and ISO 9241-11 are the methods chosen to measure the usability of the BloobIS application. The expected outputs of this usability-focused quality control are to identify and rectify usability deficiencies in the BloobIS web application and to provide recommendations for its further development, serving as the basis for a BloobIS web quality upgrade aimed at increasing the satisfaction of web users based on the usability factors in ISO 9241-11.
A Case Study in CAD Design Automation
ERIC Educational Resources Information Center
Lowe, Andrew G.; Hartman, Nathan W.
2011-01-01
Computer-aided design (CAD) software and other product life-cycle management (PLM) tools have become ubiquitous in industry during the past 20 years. Over this time they have continuously evolved, becoming programs with enormous capabilities, but the companies that use them have not evolved their design practices at the same rate. Due to the…
The Role of Skepticism in Preparing Teachers for the Use of Technology.
ERIC Educational Resources Information Center
Albaugh, Patti R.
The complexity of technology training for teachers can be partially explained in terms of three phenomena: the historical resistance of teachers to use media, the nature of teaching itself, and the life cycle of technological innovations. Factors that influence teachers' use of technology include: accessibility of hardware and software,…
Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2014-01-01
Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to coping with these challenges consists of iterative, intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach on our own software development and compared the different design paradigms in various software solutions. PMID:25383181
Agile IT: Thinking in User-Centric Models
NASA Astrophysics Data System (ADS)
Margaria, Tiziana; Steffen, Bernhard
We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole system's life cycle. Based on the 'One-Thing Approach', which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion, is no longer adequate for the bulk of application programming, in particular when it comes to heterogeneous, cross-organizational systems which must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm, which puts the user process at the center of the development and the application expert in control of the process evolution.
Applying CASE Tools for On-Board Software Development
NASA Astrophysics Data System (ADS)
Brammer, U.; Hönle, A.
For many space projects, software development faces great pressure with respect to quality, costs and schedule. One way to cope with these challenges is the application of CASE tools for the automatic generation of code and documentation. This paper describes two CASE tools: Rhapsody (I-Logix), featuring UML, and ISG (BSSE), which provides modeling of finite state machines. Both tools have been used at Kayser-Threde in different space projects for the development of on-board software. The tools are discussed with regard to the full software development cycle.
De Feo, G; Ferrara, C
2017-08-01
This paper investigates the total and per capita environmental impacts of municipal wastewater treatment as a function of the population equivalent (PE), using a Life Cycle Assessment (LCA) approach with the processes of the Ecoinvent 2.2 database available in the software tool SimaPro v.7.3. Besides the wastewater treatment plant (WWTP), the study also considers the sewerage system. The obtained results confirm that there is a 'scale factor' for wastewater collection and treatment in environmental terms as well, in addition to the well-known scale factor in terms of management costs: the larger the treatment plant, the lower the per capita environmental impacts. However, the Ecoinvent 2.2 database does not contain information about treatment systems with a capacity lower than 30 PE. Nevertheless, there are many sparsely populated areas worldwide where it is not practical to build a single centralized WWTP. Therefore, it would be very important to conduct an LCA study comparing alternative on-site small-scale systems with a treatment capacity of a few PE.
Life cycle impact assessment of ammonia production in Algeria: A comparison with previous studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhlouf, Ali, E-mail: almakhsme@gmail.com; Serradj, Tayeb; Cheniti, Hamza
In this paper, a Life Cycle Analysis (LCA) from “cradle to gate” of one anhydrous ton of ammonia with a purity of 99% was carried out. In particular, the energy and environmental performance of the product (ammonia) were evaluated. The eco-profile of the product and the share of each stage of the life cycle in the overall environmental impacts have been evaluated. The flows of material and energy for each phase of the life cycle were counted and the associated environmental problems were identified. Evaluation of the impact was achieved using the GEMIS 4.7 software. The primary data collection was carried out at the production installations located in Algeria (Annaba locality). The analysis was conducted according to the ISO 14040 series of LCA standards. The results show that the Cumulative Energy Requirement (CER) is 51.945 × 10³ MJ/t of ammonia, which is higher than the global average. The Global Warming Potential (GWP) is 1.44 t CO₂ eq/t of ammonia; this value is lower than the world average. Tropospheric ozone precursors and acidification are also studied in this article; their values are 549.3 × 10⁻⁶ t NMVOC eq and 259.3 × 10⁻⁶ t SO₂ eq, respectively.
All about Animal Life Cycles. Animal Life for Children. [Videotape].
ERIC Educational Resources Information Center
2000
While watching the development from tadpole to frog, caterpillar to butterfly, and pup to wolf, children learn about the life cycles of animals, the different stages of development, and the average life spans of a variety of creatures. This videotape correlates to the following National Science Education Standards for Life Science: characteristics…
Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan
NASA Technical Reports Server (NTRS)
Schwantje, Robert
1994-01-01
This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and of the ground support software used in the test and integration of the AMSU-A instruments.
NASA Astrophysics Data System (ADS)
Frailis, M.; Maris, M.; Zacchei, A.; Morisset, N.; Rohlfs, R.; Meharga, M.; Binko, P.; Türler, M.; Galeotta, S.; Gasparo, F.; Franceschi, E.; Butler, R. C.; D'Arcangelo, O.; Fogliani, S.; Gregorio, A.; Lowe, S. R.; Maggio, G.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Pasian, F.; Perrotta, F.; Sandri, M.; Terenzi, L.; Tomasi, M.; Zonca, A.
2009-12-01
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment and has to adhere strictly to the project schedule in order to be ready for launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the housekeeping telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and perform a comparison with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrated that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
Challenges and Demands on Automated Software Revision
NASA Technical Reports Server (NTRS)
Bonakdarpour, Borzoo; Kulkarni, Sandeep S.
2008-01-01
In the past three decades, automated program verification has undoubtedly been one of the most successful contributions of formal methods to software development. However, when verification of a program against a logical specification discovers bugs in the program, manual manipulation of the program is needed in order to repair it. Thus, in the face of the numerous unverified and uncertified legacy software systems in virtually any organization, tools that enable engineers to automatically verify and subsequently fix existing programs are highly desirable. In addition, since the requirements of software systems often evolve during the software life cycle, incomplete specification has become a customary fact in many design and development teams. Thus, automated techniques that revise existing programs according to new specifications are of great assistance to designers, developers, and maintenance engineers. As a result, incorporating program synthesis techniques, where an algorithm generates a program that is correct by construction, seems to be a necessity. The notion of manual program repair described above turns out to be even more complex when programs are integrated with large collections of sensors and actuators in hostile physical environments, in so-called cyber-physical systems. When such systems are safety- or mission-critical (e.g., in avionics systems), it is essential that the system react to physical events such as faults, delays, signals, and attacks, so that the system specification is not violated. In fact, since it is impossible to anticipate all such physical events at design time, it is highly desirable to have automated techniques that revise programs with respect to newly identified physical events according to the system specification.
Life Cycle Assessment of Wall Systems
NASA Astrophysics Data System (ADS)
Ramachandran, Sriranjani
Natural resource depletion and environmental degradation are the stark realities of the times we live in. As awareness about these issues increases globally, industries and businesses are becoming interested in understanding and minimizing the ecological footprints of their activities. Evaluating the environmental impacts of products and processes has become a key issue, and the first step towards addressing and eventually curbing climate change. Additionally, companies are finding it beneficial to go beyond compliance, using pollution prevention strategies and environmental management systems to improve their environmental performance. Life-cycle assessment (LCA) is an evaluative method for assessing the environmental impacts associated with a product's life cycle from cradle to grave (i.e., from raw material extraction through material processing, manufacturing, distribution, use, repair and maintenance, and finally disposal or recycling). This study focuses on evaluating building envelopes on the basis of their life-cycle analysis. To facilitate this analysis, a small-scale office building, the University Services Building (USB), with a built-up area of 148,101 ft2, situated on the ASU campus in Tempe, Arizona, was studied. The building's exterior envelope is the highlight of this study. The current exterior envelope is tilt-up concrete construction, a type of construction in which the concrete elements are cast horizontally and, after curing, tilted up using cranes and braced until other structural elements are secured. This building envelope is compared to five other building envelope systems (concrete block, insulated concrete form, cast-in-place concrete, steel stud, and curtain wall constructions), evaluating them on the basis of least environmental impact. The research methodology involved developing energy models, simulating them, and generating the changes in energy consumption due to the above-mentioned envelope types. The energy consumption data, along with various other details such as building floor area, areas of walls, columns, and beams, and their material types, were imported into the life-cycle assessment software ATHENA Impact Estimator for Buildings. Using this four-step LCA methodology, the results showed that the steel stud envelope performed the best, with the least environmental impact compared to the other envelope types. This research methodology can be applied to other building typologies.
A Life Cycle Cost Analysis of Rigid Pavements
DOT National Transportation Integrated Search
1999-09-01
The Texas Department of Transportation (TxDOT) commissioned a research project in 1996, summarized here, to promote life cycle cost analysis of rigid pavements throughout the TxDOT districts by developing a uniform methodology for performing life cycl...
Intersection life cycle cost comparison tool user guide version 1.0.
DOT National Transportation Integrated Search
2016-05-01
The Intersection Life Cycle Cost Comparison Tool User Guide was developed as part of North Carolina Department of Transportation Research Project No. 201411: Evaluation of Life Cycle Impacts of Intersection Control Type Selection. This sprea...
Matalin, A V
2014-01-01
The life cycles of Carabidae are highly diverse, and 25 variants of these cycles are realized in the European part of Russia, from semideserts to continental tundras. The diversity of the life cycle spectrum sharply decreases (by more than half) upon the transition from nemoral to boreal forest communities, and its phenological unification takes place at high latitudes. The greatest proportion of species with polyvariant development (25%) is characteristic of temperate latitudes, which may be explained by the relatively long growing season and considerable cenotic diversity. In both the southern (semidesert and steppe) and northern regions (middle and northern boreal forests), this proportion does not exceed 5%. At low latitudes, the polyvariant pattern of development is often manifested in the form of facultative bivoltine life cycles or facultative biennial life cycles in species with the initial "spring" breeding type.
NASA Technical Reports Server (NTRS)
Happell, Nadine; Miksell, Steve; Carlisle, Candace
1989-01-01
A major barrier in taking expert systems from prototype to operational status involves instilling end user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of expert system development. The end user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.
Wilke, Georgia; Ravindran, Soumya; Funkhouser-Jones, Lisa; Barks, Jennifer; Wang, Qiuling; VanDussen, Kelli L; Stappenbeck, Thaddeus S; Kuhlenschmidt, Theresa B; Kuhlenschmidt, Mark S; Sibley, L David
2018-06-27
Among the obstacles hindering Cryptosporidium research is the lack of an in vitro culture system that supports complete life cycle development and propagation. This major barrier has led to a shortage of widely available anti-Cryptosporidium antibodies and a lack of markers for staging developmental progression. Previously developed antibodies against Cryptosporidium were raised against extracellular stages or recombinant proteins, leading to antibodies with limited reactivity across the parasite life cycle. Here we sought to create antibodies that recognize novel epitopes that could be used to define intracellular development. We identified a mouse epithelial cell line that supported C. parvum growth, enabling immunization of mice with infected cells to create a bank of monoclonal antibodies (MAbs) against intracellular parasite stages while avoiding the development of host-specific antibodies. From this bank, we identified 12 antibodies with a range of reactivities across the parasite life cycle. Importantly, we identified specific MAbs that can distinguish different life cycle stages, such as trophozoites, merozoites, type I versus II meronts, and macrogamonts. These MAbs provide valuable tools for the Cryptosporidium research community and will facilitate future investigation into parasite biology. IMPORTANCE: Cryptosporidium is a protozoan parasite that causes gastrointestinal disease in humans and animals. Currently, there is a limited array of antibodies available against the parasite, which hinders imaging studies and makes it difficult to visualize the parasite life cycle in different culture systems. In order to alleviate this reagent gap, we created a library of novel antibodies against the intracellular life cycle stages of Cryptosporidium. We identified antibodies that recognize specific life cycle stages in distinctive ways, enabling unambiguous description of the parasite life cycle. These MAbs will aid future investigation into Cryptosporidium biology and help illuminate growth differences between various culture platforms. Copyright © 2018 Wilke et al.
Fluorescence Time-lapse Imaging of the Complete S. venezuelae Life Cycle Using a Microfluidic Device
Schlimpert, Susan; Flärdh, Klas; Buttner, Mark
2016-01-01
Live-cell imaging of biological processes at the single cell level has been instrumental to our current understanding of the subcellular organization of bacterial cells. However, the application of time-lapse microscopy to study the cell biological processes underpinning development in the sporulating filamentous bacteria Streptomyces has been hampered by technical difficulties. Here we present a protocol to overcome these limitations by growing the new model species, Streptomyces venezuelae, in a commercially available microfluidic device which is connected to an inverted fluorescence widefield microscope. Unlike the classical model species, Streptomyces coelicolor, S. venezuelae sporulates in liquid, allowing the application of microfluidic growth chambers to cultivate and microscopically monitor the cellular development and differentiation of S. venezuelae over long time periods. In addition to monitoring morphological changes, the spatio-temporal distribution of fluorescently labeled target proteins can also be visualized by time-lapse microscopy. Moreover, the microfluidic platform offers the experimental flexibility to exchange the culture medium, which is used in the detailed protocol to stimulate sporulation of S. venezuelae in the microfluidic chamber. Images of the entire S. venezuelae life cycle are acquired at specific intervals and processed in the open-source software Fiji to produce movies of the recorded time-series. PMID:26967231
Information and communication systems for the assistance of carers based on ACTION.
Kraner, M; Emery, D; Cvetkovic, S R; Procter, P; Smythe, C
1999-01-01
Recent advances in telecommunication technologies allow the design of information and communication systems for people who care for others in the home as family members or as professionals in health or community centres. The present paper analyses and classifies the information flow and maps it to an information life cycle, which governs the design of the deployed hardware, software and data structure. This is based on the initial findings of ACTION (assisting carers using telematics interventions to meet older persons' needs), a European Union funded project. The proposed information architecture considers different designs such as centralized or decentralized Web and client-server solutions. A user interface has been developed reflecting the special requirements of the targeted user group, which influences the functionality and design of the software, the data architecture and the integrated communication system using video-conferencing. ACTION has engineered a system using plain Web technology based on HTML, extended with JavaScript and ActiveX, and a software switch enabling the integration of different types of videoconferencing and other applications, providing manufacturer independence.
Cost and schedule estimation study report
NASA Technical Reports Server (NTRS)
Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon
1993-01-01
This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
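As a hedged illustration of the kind of size-based model described above, the sketch below computes effort from size adjusted for reuse; the reuse weight, productivity coefficient, and growth exponent are placeholders chosen for the example, not the calibrated SEL/FDD values.

```python
# Sketch of an effort model driven by reuse-adjusted size (placeholder calibration).
def adjusted_size(new_sloc: float, reused_sloc: float, reuse_weight: float = 0.2) -> float:
    """Count reused code at a reduced weight relative to newly developed code."""
    return new_sloc + reuse_weight * reused_sloc

def estimated_effort(new_sloc: float, reused_sloc: float,
                     productivity_coeff: float = 0.36,   # staff-months per KSLOC (assumed)
                     growth_exponent: float = 1.0) -> float:
    """Effort in staff-months as a function of reuse-adjusted size in KSLOC."""
    ksloc = adjusted_size(new_sloc, reused_sloc) / 1000.0
    return productivity_coeff * ksloc ** growth_exponent

# Example: 50 KSLOC of new code plus 30 KSLOC of reused code.
print(round(estimated_effort(50_000, 30_000), 1), "staff-months (placeholder calibration)")
```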
Ip, Kenneth; She, Kaiming; Adeyeye, Kemi
2017-10-18
Recovering heat from waste water discharged from showers to preheat the incoming cold water has been promoted as a cost-effective, energy-efficient, and low-carbon design option, and it is included in the UK's Standard Assessment Procedure (SAP) for demonstrating compliance with the Building Regulations for dwellings. Incentivized by its carbon cost-effectiveness, waste water heat exchangers (WWHX) were selected and incorporated in a newly constructed Sports Pavilion at the University of Brighton in the UK. This £2m sports development serving several football fields was completed in August 2015, providing eight water- and energy-efficient shower rooms for students, staff, and external organizations. Six of the shower rooms are located on the ground floor and two on the first floor, each fitted with five or six thermostatically controlled shower units. Inline WWHX units were installed, each consisting of a copper pipe section wound with an external coil of smaller copper pipe through which the cold water is warmed before entering the shower mixers. Using the installation at the Sports Pavilion as the case study, this research aims to evaluate the environmental and financial sustainability of a vertical waste heat recovery device over a life cycle of 50 years, in comparison with the normal use of a PVC-u pipe. A mathematical heat transfer model of the system was developed to inform the methodology for measuring the in-situ thermal performance of individual and multiple showers in each changing room. Adopting a systems-thinking modeling technique, a quasi-dynamic computer simulation model was established, enabling the prediction of annual energy consumption under different shower usage profiles. Data based on the process map and inventory of a functional unit of the WWHX were applied in proprietary assessment software to establish the relevant outputs for the life-cycle environmental impact assessment. Life-cycle cost models were developed and industry price book data were applied. The results indicated that the seasonal thermal effectiveness was over 50%, enabling significant energy savings through heat recovery, which led to a short carbon payback time of less than 2 years to compensate for the additional greenhouse gas emissions associated with the WWHX. However, the life-cycle cost of the WWHX is much higher than that of the PVC pipe, even with significant heat recovered under heavy usage, highlighting the need for more economic configurations, such as combining waste water through fewer units, in order to maximize the return on investment and improve financial viability.
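As a rough illustration of the heat-recovery arithmetic behind such a study (not the authors' heat transfer or quasi-dynamic model), the sketch below estimates instantaneous and annual recovered energy from an assumed seasonal effectiveness, shower flow rate, and temperatures; every numeric value is an assumption for the example.

```python
# Simple effectiveness-based estimate of WWHX heat recovery (illustrative values only).
CP_WATER = 4186.0          # specific heat of water, J/(kg*K)

def heat_recovered_w(effectiveness, flow_kg_s, t_drain_c, t_cold_c):
    """Instantaneous recovery: effectiveness * m_dot * cp * (T_drain - T_cold)."""
    return effectiveness * flow_kg_s * CP_WATER * (t_drain_c - t_cold_c)

def annual_energy_kwh(power_w, showers_per_day, minutes_per_shower, days_per_year=365):
    """Convert average recovered power over the assumed usage profile to kWh/year."""
    seconds = showers_per_day * minutes_per_shower * 60 * days_per_year
    return power_w * seconds / 3.6e6

p = heat_recovered_w(0.5, 0.1, 35.0, 12.0)   # ~50% effectiveness, ~6 l/min shower
print(round(p), "W recovered;", round(annual_energy_kwh(p, 40, 6)), "kWh/yr (assumed profile)")
```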
The thermal limits to life on Earth
NASA Astrophysics Data System (ADS)
Clarke, Andrew
2014-04-01
Living organisms on Earth are characterized by three necessary features: a set of internal instructions encoded in DNA (software), a suite of proteins and associated macromolecules providing a boundary and internal structure (hardware), and a flux of energy. In addition, they replicate themselves through reproduction, a process that renders evolutionary change inevitable in a resource-limited world. Temperature has a profound effect on all of these features, and yet life is sufficiently adaptable to be found almost everywhere water is liquid. The thermal limits to survival are well documented for many types of organisms, but the thermal limits to completion of the life cycle are much more difficult to establish, especially for organisms that inhabit thermally variable environments. Current data suggest that the thermal limits to completion of the life cycle differ between the three major domains of life: bacteria, archaea and eukaryotes. At the very highest temperatures only archaea are found, with the current high-temperature limit for growth being 122 °C. Bacteria can grow up to 100 °C, but no eukaryote appears to be able to complete its life cycle above ~60 °C and most not above 40 °C. The lower thermal limit for growth in bacteria, archaea and unicellular eukaryotes where ice is present appears to be set by vitrification of the cell interior, and lies at ~-20 °C. Lichens appear to be able to grow down to ~-10 °C. Higher plants and invertebrates living at high latitudes can survive down to ~-70 °C, but the lower limit for completion of the life cycle in multicellular organisms appears to be ~-2 °C.
Ada Software Design Methods Formulation.
1982-10-01
cycle organization is also appropriate for another reason. The source material for the case studies is the work of the two contractors who participated in... working version of the system exist. The integration phase takes the pieces developed and combines them into a single working system. Interfaces...hardware, developed separately from the software, is united with the software, and further testing is performed until the system is a working whole
Data Base Development of Automobile and Light Truck Maintenance : Volume II. Appendix E.
DOT National Transportation Integrated Search
1978-08-01
The document contains the scheduled maintenance data sheets and total cost summaries--both scheduled and unscheduled maintenance (Life cycle cost for Dealers, life cycle cost for Service Stations, life cycle cost for Independent Repair, and scheduled...
Data Base Development of Automobile and Light Truck Maintenance : Volume III. Appendix F.
DOT National Transportation Integrated Search
1978-08-01
The document contains the scheduled maintenance data sheets and total cost summaries--both scheduled and unscheduled maintenance (Life cycle cost for Dealers, life cycle cost for Service Stations, life cycle cost for Independent Repair, and scheduled...
ERIC Educational Resources Information Center
Randolph, W. Alan; Posner, Barry Z.
1982-01-01
Explored the effectiveness of an intergroup-development organization development (OD) intervention at different stages of an organization's life cycle through four simulated organizations. Results suggest intergroup development interventions can be effective at any life stage, but impacts will be felt in different outcome measures and perceptual…
Behind Linus's Law: Investigating Peer Review Processes in Open Source
ERIC Educational Resources Information Center
Wang, Jing
2013-01-01
Open source software has revolutionized the way people develop software, organize collaborative work, and innovate. The numerous open source software systems that have been created and adopted over the past decade are influential and vital in all aspects of work and daily life. The understanding of open source software development can enhance its…
CMMI(Registered) for Acquisition, Version 1.3. CMMI-ACQ, V1.3
2010-11-01
and Software Engineering – System Life Cycle Processes [ISO 2008b]; ISO/IEC 27001:2005, Information technology – Security techniques – Information...International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 27001, Information Technology – Security Techniques...International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) body of standards. CMMs focus on improving processes
Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Model View Definition
2013-06-01
building information models (BIM) at the coordinated design stage of building construction. 1.3 Approach To...standard for exchanging Building Information Modeling (BIM) data, which defines hundreds of classes for common use in software, currently supported by...specifications, Construction Operations Building information exchange (COBie), Building Information Modeling (BIM)
The US Army Corps of Engineers Roadmap for Life-Cycle Building Information Modeling (BIM)
2012-11-01
Building Information Modeling (BIM) technology has rapidly gained acceptance throughout the planning, architecture, engineering...the Industry Foundation Class (IFC) definitions to create vendor-neutral data exchanges for use in BIM software tools.
LIFE CYCLE DESIGN FRAMEWORK AND DEMONSTRATION PROJECTS - PROFILES OF AT&T AND ALLIED SIGNAL
This document offers guidance and practical experience for integrating environmental considerations into product system development. Life cycle design seeks to minimize the environmental burden associated with a product's life cycle from raw materials acquisition through manufact...
From life cycle talking to taking action
The series of Life Cycle Management (LCM) conferences has aimed to create a platform for users and developers of life cycle assessment tools to share their experiences as they challenge traditional environmental management practices, which are narrowly confined (“gate-to-gate”) a...
NASA Technical Reports Server (NTRS)
Lu, George C.
2003-01-01
The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide the flexibility needed for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general-purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers) 170M video, and audio interfaces to payloads and the ISS. As a cost-saving and software reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS-232, RS-422 and Ethernet buses, HRDL (High Rate Data Link), video switch functionality, telemetry processing, and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) significant reduction in development flow time; 2) minimal rework and maintenance; 3) improved reliability; and 4) overall reduction in software life cycle cost. Due to the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time-consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.
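A hedged sketch of the reuse idea described above: a common driver interface that concrete bus drivers (1553, RS-422, Ethernet, and so on) could implement so that higher-level executive code is written once and shared across rack configurations. The class and method names below are hypothetical and do not represent the Boeing flight code.

```python
# Illustrative common driver interface reused across payload rack builds.
from abc import ABC, abstractmethod

class BusDriver(ABC):
    """Low-level bus interface that each concrete driver implements."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def read(self, max_bytes: int) -> bytes: ...

    @abstractmethod
    def write(self, data: bytes) -> int: ...

class EthernetDriver(BusDriver):
    def open(self) -> None:
        print("ethernet link up")          # placeholder for real socket setup

    def read(self, max_bytes: int) -> bytes:
        return b""                         # placeholder receive

    def write(self, data: bytes) -> int:
        return len(data)                   # placeholder transmit

def telemetry_loop(driver: BusDriver, frames):
    """Executive-level code written once against the interface and reused per rack."""
    driver.open()
    return sum(driver.write(f) for f in frames)

print(telemetry_loop(EthernetDriver(), [b"frame1", b"frame2"]), "bytes sent")
```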
NASA Technical Reports Server (NTRS)
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits, and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is no empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, the components are integrated into one model. This integrated model is then tested itself, through a test script and autotest, to confirm that all models work together for a single purpose. The component I was assigned, specifically, was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch stops fluid flow.
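The discrete pressure switch behavior described at the end of the abstract can be illustrated with a minimal stand-in model (shown in Python rather than Simulink); the cutoff value and interface are assumptions for the example, not the actual GSE model.

```python
# Toy stand-in for the discrete pressure switch component described above.
class DiscretePressureSwitch:
    def __init__(self, cutoff_pressure_psi: float):
        self.cutoff = cutoff_pressure_psi
        self.flow_enabled = True

    def update(self, inlet_pressure_psi: float) -> bool:
        """Disable flow when inlet pressure exceeds the cutoff; return the flow state."""
        self.flow_enabled = inlet_pressure_psi <= self.cutoff
        return self.flow_enabled

switch = DiscretePressureSwitch(cutoff_pressure_psi=150.0)   # assumed cutoff
for p in (100.0, 140.0, 155.0, 120.0):
    print(p, "psi ->", "flow" if switch.update(p) else "no flow")
```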
Life cycle assessment of EPS and CPB inserts: design considerations and end of life scenarios.
Tan, Reginald B H; Khoo, Hsien H
2005-02-01
Expanded polystyrene (EPS) and corrugated paperboard (CPB) are used in many industrial applications, such as containers, shock absorbers or simply as inserts. Both materials pose two different types of environmental problems. The first is the pollution and resource consumption that occur during the production of these materials; the second is the growing landfills that arise out of the excessive disposal of these packaging materials. Life cycle assessment or LCA will be introduced in this paper as a useful tool to compare the environmental performance of both EPS and CPB throughout their life cycle stages. This paper is divided into two main parts. The first part investigates the environmental impacts of the production of EPS and CPB from 'cradle-to-gate', comparing two inserts--both the original and proposed new designs. In the second part, LCA is applied to investigate various end-of-life cases for the same materials. The study will evaluate the environmental impacts of the present waste management practices in Singapore. Several 'what-if' cases are also discussed, including various percentages of landfilling and incineration. The SimaPro LCA Version 5.0 software's Eco-indicator 99 method is used to investigate the following five environmental impact categories: climate change, acidification/eutrophication, ecotoxicity, fossil fuels and respiratory inorganics.
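For readers unfamiliar with the characterization step that tools such as SimaPro perform, the sketch below shows the basic calculation: each impact category score is a sum of life-cycle inventory flows weighted by characterization factors. The flows and factors used here are illustrative placeholders, not Eco-indicator 99 data.

```python
# Characterization step of an LCA: category score = sum(factor * inventory flow).
inventory = {            # life-cycle inventory per functional unit (kg, assumed values)
    "CO2": 1.80,
    "SO2": 0.004,
    "NOx": 0.003,
}

characterization = {     # impact category -> {flow: characterization factor} (assumed)
    "climate change (kg CO2 eq)": {"CO2": 1.0},
    "acidification (kg SO2 eq)":  {"SO2": 1.0, "NOx": 0.7},
}

def impact_scores(lci, factors):
    """Weight each inventory flow by its factor and sum per impact category."""
    return {cat: sum(cf * lci.get(flow, 0.0) for flow, cf in cfs.items())
            for cat, cfs in factors.items()}

print(impact_scores(inventory, characterization))
```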
Carbon footprint of forest and tree utilization technologies in life cycle approach
NASA Astrophysics Data System (ADS)
Polgár, András; Pécsinger, Judit
2017-04-01
In our research project, a method has been developed for the technological aspect of the environmental assessment of land use changes caused by climate change. We have prepared an eco-balance (environmental inventory) for classifying environmental effects in a life-cycle approach for typical agricultural, forest and tree utilization technologies. The use of balances and environmental classification makes it possible to compare land-use technologies and their environmental effects per common functional unit. In order to test our environmental analysis model, we carried out surveys in a sample of forest stands. We set up an eco-balance of the working systems of intermediate cutting and final harvest in stands of beech, oak, spruce, acacia, poplar and short-rotation energy plantations (willow, poplar). We set up the life-cycle plan of the surveyed working systems using the GaBi 6.0 Professional software and carried out midpoint and endpoint impact assessment. From the results, we applied the values of CML 2001 - Global Warming Potential (GWP 100 years) [kg CO2-Equiv.] and Eco-Indicator 99 - Human Health, Climate Change [DALY]. On the basis of these values we set up a ranking of the technologies; by this, we obtained an environmental impact classification of the technologies based on carbon footprint. The working systems had the greatest impact on global warming (GWP 100 years) throughout their whole life cycle, which is explained by the amount of carbon dioxide released to the atmosphere from the fuel used by the technologies. Abiotic depletion (ADP fossil) and marine aquatic ecotoxicity (MAETP) also emerged as significant impact categories; these can be explained by the share of fuel and lube inputs. On the basis of the most significant environmental impact category (carbon footprint), we determined the relative life-cycle contribution and ranking of each technology. The technological life-cycle stages examined in the stands are the following:
Stage 1: cleaning cutting
Stage 2: selection thinning
Stage 3: increment thinning
Stage 4: final harvest
In these priority impact categories, the life-cycle contribution of the technologies varied according to the life-cycle stages.
• The spruce stand showed the smallest contribution in stages 1, 2 and 3 alike.
• After a large contribution at the beginning (stage 1), the beech stand continued at a moderate level in stages 2 and 3 and had the smallest share in the final harvest (stage 4).
• The oak stand showed the largest contribution in stages 2, 3 and 4 alike.
• For acacia and poplar, we obtained the same results as for the oak stands.
• For the short-rotation energy plantations (willow, poplar), we obtained results typical of stage 4 of the spruce stands.
We can conclude that for the stage of final harvest, which represents the most significant environmental impact, the ranking of working systems shows the increasing order "energy plantations - beech - spruce - acacia - poplar - oak". The environmental assessment of the technological aspects of land use and land use change represents an important added value to climate research. Acknowledgement: This research has been supported by the Agroclimate.2 VKSZ_12-1-2013-0034 project. Keywords: life-cycle assessment / forest utilization technology / carbon footprint / life-cycle thinking
NASA Astrophysics Data System (ADS)
Dudka, A. P.; Antipin, A. M.; Verin, I. A.
2017-09-01
The Huber-5042 diffractometer with a closed-cycle Displex DE-202 helium cryostat is a unique scientific instrument for carrying out X-ray diffraction experiments on single-crystal structures in the temperature range of 20-300 K. To extend the service life and develop new experimental techniques, the diffractometer control has been transferred to a new hardware and software platform. To this end, a modern computer, a new detector readout unit, and new control interfaces for the stepper motors, temperature controller, and cryostat vacuum pumping system are used. The systems for cooling the X-ray tube, the high-voltage generator, and the helium compressor and pump for maintaining the required vacuum in the cryostat have been replaced. The system for controlling the primary beam shutter has been upgraded, and biological shielding has been installed. The new software, which uses the Linux Ubuntu operating system and the SPEC constructor, includes a set of drivers for the control units through the aforementioned interfaces. A program for finding reflections from a sample using fast continuous scanning and a priori information about the crystal has been written. Thus, the software package for carrying out the complete cycle of a precise diffraction experiment (from determining the crystal unit cell to calculating the integrated reflection intensities) has been upgraded. The high quality of the experimental data obtained with this equipment is confirmed by a number of studies in the temperature range from 20 to 300 K.
LIFE CYCLE DESIGN OF AMORPHOUS SILICON PHOTOVOLTAIC MODULES
The life cycle design framework was applied to photovoltaic module design. The primary objective of this project was to develop and evaluate design metrics for assessing and guiding the improvement of PV product systems. Two metrics were used to assess life cycle energy perform...
Development and application of a basis database for materials life cycle assessment in China
NASA Astrophysics Data System (ADS)
Li, Xiaoqing; Gong, Xianzheng; Liu, Yu
2017-03-01
As a data-intensive method, materials life cycle assessment (MLCA) requires high-quality environmental burden data as an important premise, and the reliability of the data directly influences the reliability of the assessment results and their application. Therefore, building a Chinese MLCA database provides the basic data and technical support needed to carry out and improve LCA practice. Firstly, recent progress on databases related to materials life cycle assessment research and development is introduced. Secondly, according to the requirements of the ISO 14040 series of standards, the database framework and the main datasets for materials life cycle assessment are studied. Thirdly, an MLCA data platform based on big data is developed. Finally, future research work is proposed and discussed.
Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Birge, B.
2013-01-01
A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires very little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, as well as a case study highlighting the tool's effectiveness.
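A minimal particle swarm optimization sketch in the spirit of the tuning approach described above: particles search a parameter space to minimize the mismatch between simulation output and reference data. The objective here is a stand-in quadratic, not the Morpheus subsystem models, and all constants are assumptions.

```python
# Basic PSO: velocity update pulled toward personal and global bests.
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))   # clamp to bounds
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Stand-in "simulation mismatch": squared distance of tuned parameters from a target set.
target = [1.2, -0.4, 3.0]
mismatch = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
print([round(x, 2) for x in pso(mismatch, dim=3)])
```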
A life cycle database for parasitic acanthocephalans, cestodes, and nematodes
Benesh, Daniel P.; Lafferty, Kevin D.; Kuris, Armand
2017-01-01
Parasitologists have worked out many complex life cycles over the last ~150 years, yet there have been few efforts to synthesize this information to facilitate comparisons among taxa. Most existing host-parasite databases focus on particular host taxa, do not distinguish final from intermediate hosts, and lack parasite life-history information. We summarized the known life cycles of trophically transmitted parasitic acanthocephalans, cestodes, and nematodes. For 973 parasite species, we gathered information from the literature on the hosts infected at each stage of the parasite life cycle (8510 host-parasite species associations), what parasite stage is in each host, and whether parasites need to infect certain hosts to complete the life cycle. We also collected life-history data for these parasites at each life cycle stage, including 2313 development time measurements and 7660 body size measurements. The result is the most comprehensive data summary available for these parasite taxa. In addition to identifying gaps in our knowledge of parasite life cycles, these data can be used to test hypotheses about life cycle evolution, host specificity, parasite life-history strategies, and the roles of parasites in food webs.
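To illustrate the kind of record such a database holds, the sketch below defines a hypothetical life-cycle entry with per-host stage, obligateness, and life-history fields; the field names and the example species are invented for illustration and are not the published schema.

```python
# Hypothetical record structure for a trophically transmitted parasite life cycle.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LifeCycleStage:
    stage_number: int                 # 1 = first host in the cycle, and so on
    host_species: str
    host_is_obligate: bool            # must this host be infected to complete the cycle?
    parasite_stage: str               # e.g. "procercoid", "plerocercoid", "adult"
    development_days: Optional[float] = None
    body_length_mm: Optional[float] = None

@dataclass
class ParasiteLifeCycle:
    parasite_species: str
    taxon: str                        # "Acanthocephala", "Cestoda" or "Nematoda"
    stages: List[LifeCycleStage] = field(default_factory=list)

example = ParasiteLifeCycle(
    parasite_species="Hypothetical tapeworm sp.",
    taxon="Cestoda",
    stages=[
        LifeCycleStage(1, "copepod sp.", True, "procercoid", development_days=14),
        LifeCycleStage(2, "fish sp.", True, "plerocercoid", body_length_mm=8.0),
        LifeCycleStage(3, "piscivorous bird sp.", True, "adult"),
    ],
)
print(len(example.stages), "hosts in the example cycle")
```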
LIFE-CYCLE IMPACT ASSESSMENT DEMONSTRATION FOR THE BGU-24
The primary goal of this project was to develop and demonstrate a life-cycle impact assessment (LCIA) approach using existing life-cycle inventory (LCI) data on one of the propellants, energetics, and pyrotechnic (PEP) materials of interest to the U.S. Department of Defense (DoD)...
A Game to Teach the Life Cycles of Fungi
ERIC Educational Resources Information Center
Blum, Abraham
1976-01-01
Presented is a biological game utilized to teach fungi life cycles to secondary biology students. The game is designed to overcome difficulties of correlating schematic drawings with images seen through the microscope, correlating life cycles of fungi and host, and understanding cyclic development of fungi. (SL)
THE EPA'S EMERGING FOCUS ON LIFE CYCLE ASSESSMENT
EPA has been actively engaged in LCA research since 1990 to help advance the methodology and application of life cycle thinking in decision making. Across the Agency consideration of the life cycle concept is increasing in the development of policies and programs. A major force i...
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv
2002-01-01
A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
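A toy hybrid discrete/continuous simulation in the spirit of CONFIG (not the CONFIG tool itself): a pressure evolves continuously by simple Euler integration while a discrete reconfiguration, here an inflow valve closing and reopening at thresholds, changes the governing flow terms mid-run. All values are illustrative assumptions.

```python
# Continuous dynamics (pressure balance) interleaved with discrete valve events.
def simulate(t_end=60.0, dt=0.1):
    pressure = 100.0                  # kPa, assumed initial condition
    inflow, outflow = 2.0, 1.0        # kPa/s contributions, assumed
    valve_open = True
    events = []
    t = 0.0
    while t < t_end:
        # continuous part: Euler step of the pressure balance
        pressure += ((inflow if valve_open else 0.0) - outflow) * dt
        # discrete part: controller reconfigures the inflow valve at thresholds
        if valve_open and pressure > 105.0:
            valve_open = False
            events.append((round(t, 1), "inflow valve closed"))
        elif not valve_open and pressure < 95.0:
            valve_open = True
            events.append((round(t, 1), "inflow valve reopened"))
        t += dt
    return pressure, events

final_pressure, events = simulate()
print(round(final_pressure, 1), "kPa;", events)
```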
[Life cycle of the taiga tick Ixodes persulcatus in taiga forests of the Eastern Sayan Plateau].
Korotkov, Iu S
2014-01-01
The Ixodes persulcatus life cycle has been studied in the natural environments of taiga forests on the Eastern Sayan Plateau (56°10' N, 91°30' E). Engorged larvae and nymphs develop with a morphogenetic diapause or without diapause, with the ratio of these two developmental pathways being 77.25/22.75% for larvae and 43.43/56.57% for nymphs. The hypothetical seasonal hemipopulation consists of 34.5 +/- 4.5%, 50.1 +/- 1.3%, 13.2 +/- 4.0% and 2.2% of unfed imagoes completing 3-year, 4-year, 5-year, and 6-year life cycles, respectively. The mean life span is 3.83 +/- 0.10 years per generation. A "life table" predicting the probability of completing the life cycle through all phases from egg to adult was developed.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain concerns the software development process, which must proceed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
- It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, to validate the architecture concept very early without the details.
- A software prototype is available very quickly. It improves the communication between system and software teams, as it enables checking very early and efficiently the common understanding of the system requirements.
- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it greatly improves the learning curve of the software team.
These advantages are very attractive, but efficiently mastering an iterative development process is not so easy and raises many difficulties, such as:
- How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
- How to distinguish stable/unstable and dimensioning/standard requirements?
- How to plan the development of each increment?
- How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc.
Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, from both a methodological and a technological point of view:
- How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way.
- How the CMM approach can help by better formalizing Requirements Management and Planning processes.
- How Automatic Code Generation with "certified" tools (SCADE) can further dramatically shorten the development cycle.
The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one using the classical waterfall process, the other an iterative and incremental approach.
Manned/Unmanned Common Architecture Program (MCAP) net centric flight tests
NASA Astrophysics Data System (ADS)
Johnson, Dale
2009-04-01
Properly architected avionics systems can reduce the costs of periodic functional improvements, maintenance, and obsolescence. With this in mind, the U.S. Army Aviation Applied Technology Directorate (AATD) initiated the Manned/Unmanned Common Architecture Program (MCAP) in 2003 to develop an affordable, high-performance embedded mission processing architecture for potential application to multiple aviation platforms. MCAP analyzed Army helicopter and unmanned air vehicle (UAV) missions, identified supporting subsystems, surveyed advanced hardware and software technologies, and defined computational infrastructure technical requirements. The project selected a set of modular open systems standards and market-driven commercial-off-the-shelf (COTS) electronics and software, and developed experimental mission processors, network architectures, and software infrastructures supporting the integration of new capabilities, interoperability, and life cycle cost reductions. MCAP integrated the new mission processing architecture into an AH-64D Apache Longbow and participated in Future Combat Systems (FCS) network-centric operations field experiments in 2006 and 2007 at White Sands Missile Range (WSMR), New Mexico and at the Nevada Test and Training Range (NTTR) in 2008. The MCAP Apache also participated in PM C4ISR On-the-Move (OTM) Capstone Experiments 2007 (E07) and 2008 (E08) at Ft. Dix, NJ and conducted Mesa, Arizona local area flight tests in December 2005, February 2006, and June 2008.
Architectures and Evaluation for Adjustable Control Autonomy for Space-Based Life Support Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra K.
2001-01-01
In the past five years, a number of automation applications for control of crew life support systems have been developed and evaluated in the Adjustable Autonomy Testbed at NASA's Johnson Space Center. This paper surveys progress on an adjustable autonomous control architecture for situations where software and human operators work together to manage anomalies and other system problems. When problems occur, the level of control autonomy can be adjusted, so that operators and software agents can work together on diagnosis and recovery. In 1997 adjustable autonomy software was developed to manage gas transfer and storage in a closed life support test. Four crewmembers lived and worked in a chamber for 91 days, with both air and water recycling. CO2 was converted to O2 by gas processing systems and wheat crops. With the automation software, significantly fewer hours were spent monitoring operations. System-level validation testing of the software by interactive hybrid simulation revealed problems both in software requirements and implementation. Since that time, we have been developing multi-agent approaches for automation software and human operators, to cooperatively control systems and manage problems. Each new capability has been tested and demonstrated in realistic dynamic anomaly scenarios, using the hybrid simulation tool.
1976-11-01
system. b. Read different program configurations to reconfigure the software during flight. c. Write Digital Integrated Test System (DITS) results...associated with a Minor Cycle Event must be Unlatched. The sole difference between a Latched and an Unlatched Condition is that upon the Scheduling...Table. Furthermore, the block of pointers for one Minor Cycle may be wholly contained within the block of pointers for a different Minor Cycle. For
Integrated testing and verification system for research flight software
NASA Technical Reports Server (NTRS)
Taylor, R. N.
1979-01-01
The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced life-cycle costs as foremost goals.
A Practical Tutorial on Modified Condition/Decision Coverage
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Veerhusen, Dan S.; Chilenski, John J.; Rierson, Leanna K.
2001-01-01
This tutorial provides a practical approach to assessing modified condition/decision coverage (MC/DC) for aviation software products that must comply with regulatory guidance for DO-178B level A software. The tutorial's approach to MC/DC is a 5-step process that allows a certification authority or verification analyst to evaluate MC/DC claims without the aid of a coverage tool. In addition to the MC/DC approach, the tutorial addresses factors to consider in selecting and qualifying a structural coverage analysis tool, tips for reviewing life cycle data related to MC/DC, and pitfalls common to structural coverage analysis.
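The tutorial's own 5-step manual assessment procedure is not reproduced in this abstract. As a loose illustration of what the MC/DC criterion demands, the following minimal sketch (not taken from the tutorial, and assuming the common unique-cause interpretation) checks whether a set of test vectors shows each condition independently affecting the outcome of a simple decision; the decision and test vectors are hypothetical.

```python
from itertools import combinations

def decision(a, b, c):
    # Hypothetical example decision: a and (b or c)
    return a and (b or c)

def achieves_mcdc(tests):
    """Unique-cause MC/DC check: every condition must have a pair of tests that differ
    only in that condition and that produce different decision outcomes."""
    n_conditions = 3
    independently_shown = set()
    for t1, t2 in combinations(tests, 2):
        differing = [i for i in range(n_conditions) if t1[i] != t2[i]]
        if len(differing) == 1 and decision(*t1) != decision(*t2):
            independently_shown.add(differing[0])
    return independently_shown == set(range(n_conditions))

# Four vectors (n + 1) suffice for this three-condition decision.
tests = [(True, True, False), (False, True, False), (True, False, True), (True, False, False)]
print(achieves_mcdc(tests))  # True under the assumptions above
```

In practice a qualified structural coverage tool, as discussed in the tutorial, would perform this bookkeeping on the actual source or object code rather than on a hand-written predicate.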
V&V Plan for FPGA-based ESF-CCS Using System Engineering Approach.
NASA Astrophysics Data System (ADS)
Maerani, Restu; Mayaka, Joyce; El Akrat, Mohamed; Cheon, Jung Jae
2018-02-01
Instrumentation and Control (I&C) systems play an important role in maintaining the safety of Nuclear Power Plant (NPP) operation. However, most current I&C safety systems are based on Programmable Logic Controller (PLC) hardware, which is difficult to verify and validate, and is susceptible to software common cause failure. Therefore, a plan is needed for the replacement of PLC-based safety systems, such as the Engineered Safety Feature - Component Control System (ESF-CCS), with Field Programmable Gate Arrays (FPGAs). By using a systems engineering approach, which ensures traceability in every phase of the life cycle, from system requirements and design implementation to verification and validation, the system development is guaranteed to be in line with the regulatory requirements. The verification process will ensure that the customer's and stakeholders' needs are satisfied in a high-quality, trustworthy, cost-efficient, and schedule-compliant manner throughout the system's entire life cycle. The benefit of the V&V plan is to ensure that the FPGA-based ESF-CCS is correctly built, and that the measured performance indicators give positive feedback on the question "do we do the right thing" during the re-engineering process of the FPGA-based ESF-CCS.
Data Flow in Relation to Life-Cycle Costing of Construction Projects in the Czech Republic
NASA Astrophysics Data System (ADS)
Biolek, Vojtěch; Hanák, Tomáš; Marović, Ivan
2017-10-01
Life-cycle costing is an important part of every construction project, as it makes it possible to take into consideration future costs relating to the operation and demolition phase of a built structure. In this way, investors can optimize the project design to minimize the total project costs. Even though there have already been some attempts to implement BIM software in the Czech Republic, the current state of affairs does not support automated data flow between the bill of costs and applications that support building facility management. The main aim of this study is to critically evaluate the current situation and outline a future framework that should allow for the use of the data contained in the bill of costs to manage building operating costs.
DOT National Transportation Integrated Search
2015-05-01
The research team developed a comprehensive Benefit/Cost (B/C) analysis framework to evaluate existing and anticipated intelligent transportation system (ITS) strategies, particularly adaptive traffic control systems and ramp metering systems, i...
LIFE-CYCLE IMPACT ASSESSMENT DEMONSTRATION FOR THE GBU-24
The primary goal of this project was to develop and demonstrate a life-cycle impact assessment (LCIA) approach using existing life-cycle inventory (LCI) data on one of the propellants, energetics, and pyro-technic (PEP) materials of interest to the U.S. Department of Defense (DoD...
DOT National Transportation Integrated Search
2009-05-01
The development of life-cycle energy and emissions factors for passenger transportation modes is critical for understanding the total environmental costs of travel. Previous life-cycle studies have focused on the automobile given its dominating s...
LCACCESS: A GLOBAL DIRECTORY OF LIFE CYCLE ASSESSMENT RESOURCES
LCAccess is an EPA-sponsored website intended to promote the use of Life Cycle Assessment (LCA) in business decision-making by facilitating access to data sources that are useful in developing a life cycle inventory (LCI). While LCAccess does not itself contain data, it is a sea...
Models of the Organizational Life Cycle: Applications to Higher Education.
ERIC Educational Resources Information Center
Cameron, Kim S.; Whetten, David A.
1983-01-01
A review of models of group and organization life cycle development is provided, and the applicability of those models to institutions of higher education is discussed. An understanding of the problems and characteristics present in different life cycle stages can help institutions manage transitions more effectively. (Author/MLW)
ERIC Educational Resources Information Center
Reeske, Mike
2000-01-01
Explains a project called "Life Cycle of a Pencil" which was developed by the National Science Teachers Association (NSTA) and the U.S. Environmental Protection Agency (USEPA). Describes the life cycle of a pencil in stages starting from the first stage of design to the sixth stage of product disposal. (YDS)
Towards a comprehensive framework for reuse: A reuse-enabling software evolution environment
NASA Technical Reports Server (NTRS)
Basili, V. R.; Rombach, H. D.
1988-01-01
Reuse of products, processes and knowledge will be the key to enable the software industry to achieve the dramatic improvement in productivity and quality required to satisfy the anticipated growing demand. Although experience shows that certain kinds of reuse can be successful, general success has been elusive. A software life-cycle technology which allows broad and extensive reuse could provide the means to achieving the desired order-of-magnitude improvements. The scope of a comprehensive framework for understanding, planning, evaluating and motivating reuse practices and the necessary research activities is outlined. As a first step towards such a framework, a reuse-enabling software evolution environment model is introduced which provides a basis for the effective recording of experience, the generalization and tailoring of experience, the formalization of experience, and the (re-)use of experience.
Life cycle-based water assessment of a hand dishwashing product: opportunities and limitations.
Van Hoof, Gert; Buyle, Bea; Kounina, Anna; Humbert, Sebastien
2013-10-01
It is only recently that life cycle-based indicators have been used to evaluate products from a water use impact perspective. The applicability of some of these methods has been primarily demonstrated on agricultural materials or products, because irrigation requirements in food production can be water-intensive. In view of an increasing interest in life cycle-based water indicators for different products, we ran a study on a hand dishwashing product. A number of water assessment methods were applied with the purpose of identifying both product improvement opportunities as well as understanding the potential for underlying database and methodological improvements. The study covered the entire life cycle of the product and focused on environmental issues related to water use, looking in depth at inventory, midpoint, and endpoint methods. "Traditional" water-emission-driven methods, such as freshwater eutrophication, were excluded from the analysis. The use of a single formula with the same global supply chain, manufactured in one location, was evaluated in two countries with different water scarcity conditions. The study shows differences ranging up to 4 orders of magnitude for indicators with similar units associated with different water use types (inventory methods) and different cause-effect chain models (midpoint and endpoint impact categories). No uncertainty information was available on the impact assessment methods, and uncertainty from stochastic variability was not available at the time of the study. For the majority of the indicators studied, the contribution from the consumer use stage is the most important (>90%), driven by both direct water use (the dishwashing process) and indirect water use (electricity generation to heat the water). Creating consumer awareness of how the product is used, particularly in water-scarce areas, is the largest improvement opportunity for a hand dishwashing product. However, spatial differentiation in the inventory and impact assessment model may lead to very different results for the product used under exactly the same consumer use conditions, making the communication of results a real challenge. From a practitioner's perspective, the data collection step in relation to the goal and scope of the study sets high requirements for both foreground and background data. In particular, databases covering a broad spectrum of inventory data with spatially differentiated water use information are lacking. For some impact methods, it is unknown whether or not characterization factors should be spatially differentiated, which creates uncertainty in their interpretation and applicability. Finally, broad application of life cycle-based water assessment will require further development of commercial life cycle assessment software. © 2013 SETAC.
CMS Software: Installation Guide and User Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straut, Christine
A Chemical Inventory Management System (CIMS) is a system or program that is used to track chemicals at a facility or institution. An effective CIMS begins tracking these chemicals at the point of procurement and continues through use and disposal. The management of chemicals throughout the life cycle (procurement to disposal) is a key concept for the secure management of chemicals at any institution.
Building Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems.
1991-05-01
Building Maintenance and Repair Data for Life-Cycle Cost Analyses: Electrical Systems, by Edgar S. Neely, Robert D. Neathammer, James R. Stirn, and Robert P. Winkler. This research...systems have been developed to assist planners in preparing DD Form 1391 documentation, designers in life-cycle cost component selection, and maintainers...
2016-09-01
Support Strategies (PBPSS), throughout the system life cycle. Maximizing competition, to include small business participation. Developing... NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA, JOINT APPLIED PROJECT: WHY ARMY PROGRAM MANAGERS STRUGGLE AS LIFE CYCLE MANAGERS: A STUDY OF THE PM'S ROLES, RESPONSIBILITIES, AND BARRIERS IN THE EXECUTION OF...
Using Quality of Student Life Indicators at Three Cooperating Colleges: The Cycles Survey.
ERIC Educational Resources Information Center
Royer, Paula Nassif; Kegan, Daniel
The problems of developing a low cost, quality institutional research program capable of longitudinal research, continuous broad bandwidth monitoring and data comparisons with other institutions, led to the development of the Hampshire Cycles Survey as an initial set of student quality of life indicators. Cycles is a multidimensional survey…
Bare, Jane; Gloria, Thomas; Norris, Gregory
2006-08-15
Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contribution by substance and life cycle impact category. Normalization thus can significantly influence LCA-based decisions when tradeoffs exist. The U.S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI, the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States, and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well.
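As a minimal sketch of the normalization step described above (not the EPA database or TRACI code itself), each characterized impact result is divided by a reference value for the chosen spatial scale and reference year, giving a dimensionless relative contribution per impact category. The category names and numbers below are hypothetical placeholders.

```python
def normalize(impact_results, reference_totals):
    """Divide each LCIA category result by its normalization reference (same units per category)."""
    return {category: value / reference_totals[category]
            for category, value in impact_results.items()
            if category in reference_totals}

# Hypothetical characterized results for a product system, and hypothetical
# one-year, U.S.-scale reference totals (e.g. kg CO2-eq for global warming).
impacts = {"global warming": 1.2e3, "acidification": 4.5}
references = {"global warming": 7.0e12, "acidification": 1.0e11}
print(normalize(impacts, references))  # relative contributions, dimensionless
```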
A Systems Analysis Role Play Case: We Sell Stuff, Inc.
ERIC Educational Resources Information Center
Mitri, Michel; Cole, Carey
2007-01-01
Most systems development projects incorporate some sort of life cycle approach in their development. Whether the development methodology involves a traditional life cycle, prototyping, rapid application development, or some other approach, the first step usually involves a system investigation, which includes problem identification, feasibility…
SAVANT: Solar Array Verification and Analysis Tool Demonstrated
NASA Technical Reports Server (NTRS)
Chock, Ricaurte
2000-01-01
The photovoltaics (PV) industry is now being held to strict specifications, such as end-of-life power requirements, that force manufacturers to overengineer their products to avoid contractual penalties. Such overengineering has been the only reliable way to meet such specifications. Unfortunately, it also results in a more costly process than is probably necessary. In our conversations with the PV industry, the issue of cost has been raised again and again. Consequently, the Photovoltaics and Space Environment Effects branch at the NASA Glenn Research Center at Lewis Field has been developing a software tool to address this problem. SAVANT, Glenn's tool for solar array verification and analysis, is in the technology demonstration phase. Ongoing work has proven that more efficient and less costly PV designs should be possible by using SAVANT to predict on-orbit life-cycle performance. The ultimate goal of the SAVANT project is to provide a user-friendly computer tool to predict PV on-orbit life-cycle performance. This should greatly simplify the tasks of scaling and designing the PV power component of any given flight or mission. By being able to predict how a particular PV article will perform, designers will be able to balance mission power requirements (both beginning-of-life and end-of-life) with survivability concerns such as power degradation due to radiation and/or contamination. Recent comparisons with actual flight data from the Photovoltaic Array Space Power Plus Diagnostics (PASP Plus) mission validate this approach.
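SAVANT models radiation and environment effects in far more detail than can be shown here. As a rough, hedged illustration of the beginning-of-life/end-of-life balancing the abstract mentions (not SAVANT's actual algorithm), an array can be sized by compounding an assumed annual degradation rate over the mission life; the rate and power figures below are hypothetical.

```python
def eol_power(bol_power_w, annual_degradation, years):
    """End-of-life power after compounded annual degradation (simplified placeholder model)."""
    return bol_power_w * (1.0 - annual_degradation) ** years

# Hypothetical sizing check: a 10 kW (BOL) array after 10 years at an assumed
# 2.5%/year loss from radiation and contamination.
print(round(eol_power(10_000, 0.025, 10)))  # about 7763 W, short of an 8 kW EOL requirement
```

A validated prediction tool replaces the assumed flat rate with orbit- and technology-specific degradation estimates, which is what allows designers to trim the overengineering margin.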
Systematic on-site monitoring of compliance dust samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grayson, R.L.; Gandy, J.R.
1996-12-31
Maintaining compliance with U.S. respirable coal mine dust standards can be difficult on high-productivity longwall panels. Comprehensive and systematic analysis of compliance dust sample data, coupled with access to the U.S. Bureau of Mines (USBM) DUSTPRO, can yield important information for use in maintaining compliance. The objective of this study was to develop and apply customized software for the collection, storage, modification, and analysis of respirable dust data while providing for flexible export of data and linking with the USBM's expert advisory system on dust control. An executable, IBM-compatible software package was created and customized for use by the person in charge of collecting, submitting, analyzing, and monitoring respirable dust compliance samples. Both descriptive statistics and multiple regression analysis were incorporated. The software allows ASCII files to be exported and links directly with DUSTPRO. After development and validation of the software, longwall compliance data from two different mines were analyzed to evaluate the value of the software. Data included variables on respirable dust concentration, tons produced, the existence of roof/floor rock (dummy variable), and the sampling cycle (dummy variables). Because of confidentiality, specific data will not be presented, only the equations and ANOVA tables. The final regression models explained 83.8% and 61.1% of the variation in the data for the two panels. Important correlations among variables within sampling cycles showed the value of using dummy variables for sampling cycles. The software proved flexible and fast for its intended use. The insights obtained from its use improved the systematic monitoring of respirable dust compliance data, especially for pinpointing the most effective dust control methods during specific sampling cycles.
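The study's actual equations are withheld for confidentiality, but the analysis style it describes (multiple regression of respirable dust concentration on production tonnage plus dummy variables for roof/floor rock and sampling cycle) can be sketched generically. The data below are hypothetical, and NumPy ordinary least squares stands in for the custom software.

```python
import numpy as np

# Hypothetical shift-level observations: dust concentration (mg/m^3), tons produced,
# roof/floor rock cut (0/1), and a dummy for sampling cycle 2 (cycle 1 is the baseline).
dust = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.6])
tons = np.array([3200.0, 3900.0, 2800.0, 4300.0, 3500.0, 4600.0])
rock = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
cycle2 = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 1.0])

X = np.column_stack([np.ones_like(tons), tons, rock, cycle2])
coef, *_ = np.linalg.lstsq(X, dust, rcond=None)
predicted = X @ coef
r_squared = 1 - np.sum((dust - predicted) ** 2) / np.sum((dust - dust.mean()) ** 2)
print(coef)       # intercept, per-ton effect, rock effect, cycle-2 effect
print(r_squared)  # share of variation explained, analogous to the 83.8% and 61.1% reported
```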
1978-01-01
AD-A092 043 NAVAL AIR DEVELOPMENT CENTER WARMINSTER PA F/6 2/ I PROCEEDINGS OF 050 AIRCRAFT ENGINE DESIGN & LIFE CYCLE COST SEN--ETC (U NSI FE 1978 R...4 STANDAHAR, R R SHOREY. A PRESSMAN N PROCEEDINGS OFOSD AIRCRAFT ENGINE DESIGN & LIFE CYCLE COST SEMINAR HELD AT ,NAVAL AIR DEVELOPMENT CENTER f...RELIABILITY CAN BE MET. THIS INFORMATION WILL BE USED BY THE ACQUISITION ACTIVITY TO ESTABLISH THE PROPER DESIGN AND TEST REQUIREMENTS TO INSURE THAT THE
Predicting Defects Using Information Intelligence Process Models in the Software Technology Project
Selvaraj, Manjula Gandhi; Jayabal, Devi Shree; Srinivasan, Thenmozhi; Balasubramanie, Palanisamy
2015-01-01
A key differentiator in a competitive marketplace is customer satisfaction. As per a Gartner 2012 report, only 75%-80% of IT projects are successful. Customer satisfaction should be considered as a part of business strategy. The associated project parameters should be proactively managed and the project outcome needs to be predicted by a technical manager. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. Focus should be on proactively managing and shifting left in the software life cycle engineering model: identify the problem upfront in the project cycle rather than waiting for lessons to be learned and taking reactive steps. This paper gives the practical applicability of using predictive models and illustrates the use of these models in a project to predict system testing defects, thus helping to reduce residual defects. PMID:26495427
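The paper's specific information-intelligence process models are not given in the abstract. As a hedged sketch of the general "shift-left" idea (predicting system-testing defects early enough to act on them), a simple phase-containment estimate can be made from an assumed historical defect injection rate and the defects already removed upstream; all figures below are hypothetical.

```python
# Hypothetical planning figures, not taken from the paper.
kloc = 14.0              # estimated size of the release, in thousands of lines of code
injection_rate = 22.0    # assumed defects injected per KLOC, from historical baselines
removed_upstream = 230   # defects already removed in reviews and unit testing

total_injected = kloc * injection_rate
predicted_system_test_defects = max(total_injected - removed_upstream, 0.0)
print(predicted_system_test_defects)  # 78.0: compare against the quality goal and act before system test
```

A real predictive model would replace the single injection rate with fitted process parameters, but the decision pattern is the same: if the prediction exceeds the goal, corrective action is taken before system testing rather than after.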
Lin, Xiaodan; Yu, Shen; Ma, Hwongwen
2018-01-01
Intense human activities have led to increasing deterioration of the watershed environment via pollutant discharge, which threatens human health and ecosystem function. To meet the need for comprehensive environmental impact/risk assessment for sustainable watershed development, a biogeochemical process-based integration of life cycle assessment and risk assessment (RA) for pollutants, aided by a geographic information system, is proposed in this study. The integration frames a conceptual protocol of "watershed life cycle assessment (WLCA) for pollutants". The proposed WLCA protocol consists of (1) geographic and environmental characterization mapping; (2) life cycle inventory analysis; (3) integration of life-cycle impact assessment (LCIA) with RA via characterization factors for the pollutants of interest; and (4) result analysis and interpretation. The WLCA protocol can visualize the results of LCIA and RA spatially for the pollutants of interest, which might be useful for decision or policy makers in mitigating the impacts of watershed development.
Next Generation CAD/CAM/CAE Systems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Next Generation CAD/CAM/CAE Systems held at NASA Langley Research Center in Hampton, Virginia on March 18-19, 1997. The presentations focused on current capabilities and future directions of CAD/CAM/CAE systems, aerospace industry projects, and university activities related to simulation-based design. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the potential of emerging CAD/CAM/CAE technology for use in intelligent simulation-based design and to provide guidelines for focused future research leading to effective use of CAE systems for simulating the entire life cycle of aerospace systems.
Optimizing product life cycle processes in design phase
NASA Astrophysics Data System (ADS)
Faneye, Ola. B.; Anderl, Reiner
2002-02-01
Life cycle concepts not only serve as a basis for helping product developers understand the dependencies between products and their life cycles, they also help in identifying potential opportunities for product improvement. Common traditional concepts focus mainly on energy and material flow across life phases, necessitating the availability of metrics derived from a reference product. Knowledge of life cycle processes gained from an existing product is directly reused in its redesign. Depending on sales volume, however, the environmental impact incurred before product optimization can be substantial. With modern information technologies, computer-aided life cycle methodologies can be applied well before product use. On the basis of a virtual prototype, life cycle processes are analyzed and optimized using simulation techniques. This preventive approach not only helps minimize (or even eliminate) environmental burdens caused by the product; it also avoids costs incurred due to changes in the real product. The paper highlights the relationship between product and life cycle and presents a computer-based methodology for optimizing the product life cycle during design, as presented by SFB 392: Design for Environment - Methods and Tools at the Technical University of Darmstadt.
ERIC Educational Resources Information Center
Becker, Wayne M.
This outline is intended for use in a unit of 10-12 lectures on plant growth and development at the introductory undergraduate level as part of a course on organismal biology. The series of lecture outlines is structured around the life cycle of rapid-cycling Brassica rapa (RCBr). The unit begins with three introductory lectures on general plant…
Research on the application of BIM technology in the whole life cycle of construction projects
NASA Astrophysics Data System (ADS)
Chang-liu, CHEN; Wei-wei, KOU; Shuai-hua, YE
2018-05-01
BIM technology enables information sharing, and good BIM application will reduce the whole-life-cycle cost of construction projects. The spread of BIM technology challenges its application at every stage of the whole life cycle of a construction project. The full value of BIM can be realized by developing a reasonable BIM project execution plan, defining BIM requirements, specifying the Level of Development, determining the BIM quality control plan, and clarifying the BIM application content of each stage; this provides a unified method for project stakeholders, covers the whole life cycle of construction projects, and achieves the desired information sharing.
Biology Needs Evolutionary Software Tools: Let’s Build Them Right
Team, Galaxy; Goecks, Jeremy; Taylor, James
2018-01-01
Research in population genetics and evolutionary biology has always provided a computational backbone for the life sciences as a whole. Today, evolutionary and population biology reasoning is essential for the interpretation of the large, complex datasets that are characteristic of all domains of today's life sciences, ranging from cancer biology to microbial ecology. This situation makes the algorithms and software tools developed by our community more important than ever before. This means that we, developers of software tools for molecular evolutionary analyses, now have a shared responsibility to make these tools accessible using modern technological developments, as well as to provide adequate documentation and training. PMID:29688462
ERIC Educational Resources Information Center
Bedford, Denise A. D.
2015-01-01
The knowledge life cycle is applied to two core capabilities of library and information science (LIS) education--teaching, and research and development. The knowledge claim validation, invalidation and integration steps of the knowledge life cycle are translated to learning, unlearning and relearning processes. Mixed methods are used to determine…
A methodology is described for developing a gate-to-gate life cycle inventory (LCI) of a chemical manufacturing process to support the application of life cycle assessment in the design and regulation of sustainable chemicals. The inventories were derived by first applying proces...
THE EMERGING FOCUS ON LIFE-CYCLE ASSESSMENT IN THE U. S. ENVIRONMENTAL PROTECTION AGENCY
EPA has been actively engaged in LCA research since 1990 to help advance the methodology and application of life cycle thinking in decision-making. Across the Agency consideration of the life cycle concept is increasing in the development of policies and programs. A major force i...
Developing Students' Understanding of Industrially Relevant Economic and Life Cycle Assessments
ERIC Educational Resources Information Center
Bode, Claudia J.; Chapman, Clint; Pennybaker, Atherly; Subramaniam, Bala
2017-01-01
Training future leaders to understand life cycle assessment data is critical for effective research, business, and sociopolitical decision-making. However, the technical nature of these life cycle reports often makes them challenging for students and other nonexperts to comprehend. Therefore, we outline here the key takeaways from recent economic…
Generation of Finite Life Distributional Goodman Diagrams for Reliability Prediction
NASA Technical Reports Server (NTRS)
Kececioglu, D.; Guerrieri, W. N.
1971-01-01
The methodology of developing finite life distributional Goodman diagrams and surfaces is described for presenting allowable combinations of alternating stress and mean stress to the design engineer. The combined stress condition is that of an alternating bending stress and a constant shear stress. The finite life Goodman diagrams and surfaces are created from strength distributions developed at various ratios of alternating to mean stress at particular cycle life values. The conclusions indicate that the Von Mises-Hencky ellipse, for cycle life values above 1000 cycles, is an adequate model of the finite life Goodman diagram. In addition, suggestions are made which reduce the number of experimental data points required in a fatigue data acquisition program.
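The report's conclusion that the Von Mises-Hencky ellipse adequately models the finite-life Goodman diagram above 1000 cycles can be illustrated with a hedged sketch. One common elliptical form bounds allowable combinations of alternating and mean stress by the finite-life fatigue strength and (as assumed here) the ultimate strength; this omits the distributional (probabilistic strength) treatment and the combined bending/shear formulation of the actual study, and all numbers are hypothetical.

```python
import math

def allowable_alternating_stress(mean_stress, fatigue_strength_at_n, ultimate_strength):
    """Alternating stress on an assumed elliptical finite-life boundary:
    (S_a / S_f(N))**2 + (S_m / S_u)**2 = 1."""
    ratio = mean_stress / ultimate_strength
    if abs(ratio) >= 1.0:
        return 0.0
    return fatigue_strength_at_n * math.sqrt(1.0 - ratio ** 2)

# Hypothetical values in ksi for a chosen cycle life (say 10**5 cycles).
print(round(allowable_alternating_stress(mean_stress=40.0,
                                         fatigue_strength_at_n=55.0,
                                         ultimate_strength=120.0), 1))  # about 51.9 ksi
```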
An Integrated Approach to Life Cycle Analysis
NASA Technical Reports Server (NTRS)
Chytka, T. M.; Brown, R. W.; Shih, A. T.; Reeves, J. D.; Dempsey, J. A.
2006-01-01
Life Cycle Analysis (LCA) is the evaluation of the impacts that design decisions have on a system and provides a framework for identifying and evaluating design benefits and burdens associated with the life cycles of space transportation systems from a "cradle-to-grave" approach. Sometimes called life cycle assessment, life cycle approach, or "cradle to grave analysis", it represents a rapidly emerging family of tools and techniques designed to be a decision support methodology and aid in the development of sustainable systems. The implementation of a Life Cycle Analysis can vary and may take many forms; from global system-level uncertainty-centered analysis to the assessment of individualized discriminatory metrics. This paper will focus on a proven LCA methodology developed by the Systems Analysis and Concepts Directorate (SACD) at NASA Langley Research Center to quantify and assess key LCA discriminatory metrics, in particular affordability, reliability, maintainability, and operability. This paper will address issues inherent in Life Cycle Analysis including direct impacts, such as system development cost and crew safety, as well as indirect impacts, which often take the form of coupled metrics (i.e., the cost of system unreliability). Since LCA deals with the analysis of space vehicle system conceptual designs, it is imperative to stress that the goal of LCA is not to arrive at the answer but, rather, to provide important inputs to a broader strategic planning process, allowing the managers to make risk-informed decisions, and increase the likelihood of meeting mission success criteria.
An approach to developing user interfaces for space systems
NASA Astrophysics Data System (ADS)
Shackelford, Keith; McKinney, Karen
1993-08-01
Inherent weaknesses in the traditional waterfall model of software development have led to the definition of the spiral model. The spiral software development life-cycle model, however, has not been applied to NASA projects. This paper describes its use in developing real-time user interface software for an Environmental Control and Life Support System (ECLSS) Process Control Prototype at NASA's Marshall Space Flight Center.
NASA Astrophysics Data System (ADS)
Song, Xiaolong; Yang, Jianxin; Lu, Bin; Yang, Dong
2017-01-01
China is now facing e-waste problems from both growing domestic generation and illegal imports. Many stakeholders are involved in the e-waste treatment system due to the complexity of e-waste life cycle. Beginning with the state of the e-waste treatment industry in China, this paper summarizes the latest progress in e-waste management from such aspects as the new edition of the China RoHS Directive, new Treatment List, new funding subsidy standard, and eco-design pilots. Thus, a conceptual model for life cycle management of e-waste is generalized. The operating procedure is to first identify the life cycle stages of the e-waste and extract the important life cycle information. Then, life cycle tools can be used to conduct a systematic analysis to help decide how to maximize the benefits from a series of life cycle engineering processes. Meanwhile, life cycle thinking is applied to improve the legislation relating to e-waste so as to continuously improve the sustainability of the e-waste treatment system. By providing an integrative framework, the life cycle management of e-waste should help to realize sustainable management of e-waste in developing countries.
Burghardt, Liana T; Metcalf, C Jessica E; Wilczek, Amity M; Schmitt, Johanna; Donohue, Kathleen
2015-02-01
Organisms develop through multiple life stages that differ in environmental tolerances. The seasonal timing, or phenology, of life-stage transitions determines the environmental conditions to which each life stage is exposed and the length of time required to complete a generation. Both environmental and genetic factors contribute to phenological variation, yet predicting their combined effect on life cycles across a geographic range remains a challenge. We linked submodels of the plasticity of individual life stages to create an integrated model that predicts life-cycle phenology in complex environments. We parameterized the model for Arabidopsis thaliana and simulated life cycles in four locations. We compared multiple "genotypes" by varying two parameters associated with natural genetic variation in phenology: seed dormancy and floral repression. The model predicted variation in life cycles across locations that qualitatively matches observed natural phenology. Seed dormancy had larger effects on life-cycle length than floral repression, and results suggest that a genetic cline in dormancy maintains a life-cycle length of 1 year across the geographic range of this species. By integrating across life stages, this approach demonstrates how genetic variation in one transition can influence subsequent transitions and the geographic distribution of life cycles more generally.
The Environmental Control and Life Support System (ECLSS) advanced automation project
NASA Technical Reports Server (NTRS)
Dewberry, Brandon S.; Carnes, Ray
1990-01-01
The objective of the Environmental Control and Life Support System (ECLSS) Advanced Automation Project is to influence the design of the initial and evolutionary Space Station Freedom Program (SSFP) ECLSS toward a man-made closed environment in which minimal flight and ground manpower is needed. Another objective is capturing ECLSS design and development knowledge for future missions. Our approach has been to (1) analyze the SSFP ECLSS, (2) envision as our goal a fully automated evolutionary environmental control system - an augmentation of the baseline, and (3) document the advanced software systems, hooks, and scars which will be necessary to achieve this goal. From this analysis, prototype software is being developed, and will be tested using air and water recovery simulations and hardware subsystems. In addition, the advanced software is being designed, developed, and tested using an automation software management plan and life-cycle tools. Automated knowledge acquisition, engineering, verification, and testing tools are being used to develop the software. In this way, we can capture ECLSS development knowledge for future use, develop more robust and complex software, provide feedback to the knowledge-based system tool community, and ensure proper visibility of our efforts.
Development of a green remediation tool in Japan.
Yasutaka, Tetsuo; Zhang, Hong; Murayama, Koki; Hama, Yoshihito; Tsukada, Yasuhisa; Furukawa, Yasuhide
2016-09-01
The green remediation assessment tool for Japan (GRATJ) presented in this study is a spreadsheet-based software package developed to facilitate comparisons of the environmental impacts associated with various countermeasures against contaminated soil in Japan. This tool uses a life-cycle assessment-based model to calculate inventory inputs/outputs throughout the activity life cycle during remediation. Processes of 14 remediation methods for heavy metal contamination and 12 for volatile organic compound contamination are built into the tool. This tool can evaluate 130 inventory inputs/outputs and easily integrate those inputs/outputs into 9 impact categories, 4 integrated endpoints, and 1 index. Comparative studies can be performed by entering basic data associated with a target site. The integrated results can be presented in a simpler and clearer manner than the results of an inventory analysis. As a case study, an arsenic-contaminated soil remediation site was examined using this tool. Results showed that the integrated environmental impacts were greater with onsite remediation methods than with offsite ones. Furthermore, the contributions of CO2 to global warming, SO2 to urban air pollution, and crude oil to resource consumption were greater than other inventory inputs/outputs. The GRATJ has the potential to improve green remediation and can serve as a valuable tool for decision makers and practitioners in selecting countermeasures in Japan. Copyright © 2016 Elsevier B.V. All rights reserved.
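The tool's aggregation from roughly 130 inventory inputs/outputs into 9 impact categories, 4 endpoints, and a single index follows the usual LCIA pattern of multiplying inventory amounts by characterization factors and then weighting the category scores. The sketch below shows that pattern generically; the flows, factors, and weights are hypothetical and are not GRATJ's actual values.

```python
# Hypothetical life-cycle inventory for one remediation option (flow name -> amount).
inventory = {"CO2_kg": 5.0e4, "SO2_kg": 120.0, "crude_oil_kg": 8.0e3}

# Hypothetical characterization factors: impact category -> {flow: factor}.
characterization = {
    "global_warming": {"CO2_kg": 1.0},
    "urban_air_pollution": {"SO2_kg": 1.0},
    "resource_consumption": {"crude_oil_kg": 1.0},
}
# Hypothetical weights used to fold category scores into one single-score index.
weights = {"global_warming": 2.0e-5, "urban_air_pollution": 4.0e-3, "resource_consumption": 1.0e-4}

category_scores = {cat: sum(factor * inventory.get(flow, 0.0) for flow, factor in factors.items())
                   for cat, factors in characterization.items()}
single_index = sum(weights[cat] * score for cat, score in category_scores.items())
print(category_scores)
print(single_index)  # comparing this index across onsite and offsite options mirrors the case study
```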
Variations in the Life Cycle of Anemone patens L. (Ranunculaceae) in Wild Populations of Canada
Kricsfalusy, Vladimir
2016-01-01
Based on a study of a perennial herb Anemone patens L. (Ranunculaceae) in a variety of natural habitats in Saskatchewan, Canada, eight life stages (seed, seedling, juvenile, immature, vegetative, generative, subsenile, and senile) are distinguished and characterized in detail. The species ontogenetic growth patterns are investigated. A. patens has a long life cycle that may last for several decades which leads to the formation of compact clumps. The distribution and age of clumps vary substantially in different environments with different levels of disturbance. The plant ontogeny includes the regular cycle with reproduction occurring through seeds. There is an optional subsenile vegetative disintegration at the end of the life span. The following variations in the life cycle of A. patens are identified: with slower development in young age, with an accelerated development, with omission of the generative stage, with retrogression to previous life stages in mature age, and with vegetative dormancy. The range of variations in the life cycle of A. patens may play an important role in maintaining population stability in different environmental conditions and management regimes. PMID:27376340
NASA Technical Reports Server (NTRS)
Rice, Amanda; Parris, Frank; Nerren, Philip
2000-01-01
Marshall Space Flight Center (MSFC) has been funding development of intelligent software models to benefit payload ground operations for nearly a decade. Experience gained from simulator development and real-time monitoring and control is being applied to engineering design, testing, and operation of the First Material Science Research Rack (MSRR-1). MSRR-1 is the first rack in a suite of three racks comprising the Materials Science Research Facility (MSRF) which will operate on the International Space Station (ISS). The MSRF will accommodate advanced microgravity investigations in areas such as solidification of metals and alloys, thermo-physical properties of polymers, crystal growth studies of semiconductor materials, and research in ceramics and glasses. The MSRR-1 is a joint venture between NASA and the European Space Agency (ESA) to study the behavior of different materials during high temperature processing in a low gravity environment. The planned MSRR-1 mission duration is five (5) years on-orbit and the total design life is ten (10) years. The MSRR-1 launch is scheduled on the third Utilization Flight (UF-3) to ISS, currently in February of 2003. The objective of MSRR-1 is to provide an early capability on the ISS to conduct material science, materials technology, and space product research investigations in microgravity. It will provide a modular, multi-user facility for microgravity research in materials crystal growth and solidification. An intelligent software model of MSRR-1 is under development and will serve multiple purposes to support the engineering analysis, testing, training, and operational phases of the MSRR-1 life cycle development. The G2 real-time expert system software environment developed by Gensym Corporation was selected as the intelligent system shell for this development work, based on past experience gained and the effectiveness of the programming environment. Our approach of multiple uses of the simulation model, together with its intuitive graphics capabilities, provides a concurrent engineering environment for rapid prototyping and development. Operational schematics of the MSRR-1 electrical, thermal control, vacuum access, and gas supply systems, and furnace inserts are represented graphically in the environment. Logic representing first-order engineering calculations is coded into the knowledge base to simulate the operational behavior of the MSRR-1 systems. Examples of the engineering data provided include electrical currents, voltages, operational power, temperatures, thermal fluid flow rates, pressures, and component status indications. These types of data are calculated and displayed at appropriate instrumentation points, and the schematics are animated to reflect the simulated operational status of the MSRR-1. The software control functions are also simulated to represent appropriate operational behavior based on automated control and on responses to commands received from the crew or ground controllers. The first benefit of this simulation environment is being realized in the high-fidelity engineering analysis results from the electrical power system G2 model. Secondly, the MSRR-1 simulation model will be embedded with a hardware mock-up of the MSRR-1 to provide crew training on MSRR-1 integrated payload operations.
G2 gateway code will output the simulated instrumentation values, termed telemetry, in a flight-like data stream so that the crew has realistic and accurate simulated MSRR-1 data on the flight displays which will be designed for crew use. The simulation will also respond appropriately to crew- or ground-initiated commands, which will be part of normal facility operations. A third use of the G2 model is being planned; the MSRR-1 simulation will be integrated with additional software code as part of the test configuration of the primary onboard computer, or Master Controller, for MSRR-1. We will take advantage of the G2 capability to simulate the flight-like data stream to test flight software responses and behavior. A fourth use of the G2 model will be to train the Ground Support Personnel that will monitor the MSRR-1 systems and payloads while they are operating aboard the ISS. The intuitive, schematic-based environment will provide an excellent foundation for personnel to understand the integrated configuration and operation of the MSRR-1, and the anticipated telemetry feedback based on the operational modes of the equipment. Expert monitoring features will be enhanced to provide a smart monitoring environment for the operators. These features include: (1) animated, intuitive schematic-based displays which reflect telemetry values, (2) real-time plotting of simulated or incoming sensor values, (3) high/low exception monitoring for analog data, (4) expected-state monitoring for discrete data, (5) data trending, (6) automated malfunction procedure execution to diagnose problems, and (7) look-ahead capability to planned MSRR-1 activities in the onboard timeline. Finally, the logic to calculate telemetry values will be deactivated, and the same environment will interface to the incoming real-time telemetry stream to schematically represent the onboard hardware configuration. G2 will be the foundation for the real-time monitoring and control environment. In summary, our MSRR-1 simulation model spans many elements of the life cycle development of this project: engineering analysis, test and checkout, training of crew and ground personnel, and real-time monitoring and control. By utilizing the unique features afforded by an expert system development environment, we have been able to synergize a powerful tool capable of addressing our project needs at every phase of project development.
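The high/low exception monitoring for analog data and expected-state monitoring for discrete data listed above follow a simple limit-checking pattern. The sketch below illustrates that pattern generically in Python rather than in G2; the telemetry point names and limits are hypothetical.

```python
# Hypothetical limit tables for a few MSRR-1-style telemetry points.
analog_limits = {"coolant_flow_lpm": (0.8, 2.5), "furnace_temp_c": (20.0, 1400.0)}
expected_states = {"vacuum_valve": "CLOSED", "main_power": "ON"}

def check_telemetry(sample):
    """Return exception messages for one telemetry sample (dict of point name -> value)."""
    alarms = []
    for point, (low, high) in analog_limits.items():
        value = sample.get(point)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{point}={value} outside [{low}, {high}]")
    for point, expected in expected_states.items():
        value = sample.get(point)
        if value is not None and value != expected:
            alarms.append(f"{point}={value}, expected {expected}")
    return alarms

print(check_telemetry({"coolant_flow_lpm": 0.4, "vacuum_valve": "OPEN", "main_power": "ON"}))
```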
The circle of life: A cross-cultural comparison of children's attribution of life-cycle traits.
Burdett, Emily R R; Barrett, Justin L
2016-06-01
Do children attribute mortality and other life-cycle traits to all minded beings? The present study examined whether culture influences young children's ability to conceptualize and differentiate human beings from supernatural beings (such as God) in terms of life-cycle traits. Three-to-5-year-old Israeli and British children were questioned whether their mother, a friend, and God would be subject to various life-cycle processes: Birth, death, ageing, existence/longevity, and parentage. Children did not anthropomorphize but differentiated among human and supernatural beings, attributing life-cycle traits to humans, but not to God. Although 3-year-olds differentiated significantly among agents, 5-year-olds attributed correct life-cycle traits more consistently than younger children. The results also indicated some cross-cultural variation in these attributions. Implications for biological conceptual development are discussed. © 2015 The British Psychological Society.
Long life nickel electrodes for a nickel-hydrogen cell: Cycle life tests
NASA Technical Reports Server (NTRS)
Lim, H. S.; Verzwyvelt, S. A.
1985-01-01
In order to develop a long-life nickel electrode for a Ni/H2 cell, the cycle life of nickel electrodes was tested in Ni/H2 boiler plate cells. A 19-cell test matrix was made of various nickel electrode designs, including three levels each of plaque mechanical strength, median pore size of the plaque, and active material loading. Test cells were cycled to the end of their life (0.5 V) in a 45-minute low Earth orbit cycle regime at 80% depth-of-discharge. It is shown that the active material loading level affects the cycle life the most, with the optimum loading at 1.6 g/cc void. Mechanical strength does not affect the cycle life noticeably in the bend strength range of 400 to 700 psi. It is found that the best plaque is made of INCO nickel powder type 287 and has a median pore size of 13 microns.
Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.
2000-01-01
In medical software development, the use of databases plays a central role. However, most of the databases have heterogeneous encoding and data models. To deal with these variations in the application code directly is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature, which will be presented. We present a simple solution, based on a Java library, and a central Metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being the simplicity in maintenance. PMID:11079915
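The solution described is a Java library driven by a central XML metadata description; the abstract does not give its schema or API. As a hedged, language-agnostic illustration of the idea (shown here in Python for brevity), a metadata file can map each logical field to the column name and encoding used by each source database, so application code never hard-codes those differences. The element names and databases below are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical central metadata description; the paper's actual XML schema is not given.
METADATA_XML = """
<fields>
  <field name="patient_id">
    <source db="lab_db" column="PAT_ID" type="string"/>
    <source db="admin_db" column="PatientNo" type="int"/>
  </field>
</fields>
"""

def build_mapping(xml_text):
    """Parse the metadata description into {logical field: {db: (column, type)}}."""
    mapping = {}
    for field in ET.fromstring(xml_text).findall("field"):
        mapping[field.get("name")] = {
            source.get("db"): (source.get("column"), source.get("type"))
            for source in field.findall("source")
        }
    return mapping

print(build_mapping(METADATA_XML)["patient_id"]["lab_db"])  # ('PAT_ID', 'string')
```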
Managing the Life Cycle Risks of Nanomaterials
2009-07-01
ISO, International Organization for Standardization; ISN, Institute for Soldier Nanotechnologies; LCA, Life Cycle Assessment; LCCA, Life Cycle Cost Analysis...similar to their smaller... Existing: ISO/TS 27687:2008, Nanotechnologies -- Terminology and definitions for nano-objects -- Nanoparticle, nanofibre and... Under development: ISO/CD TR 80004-1, Nanotechnologies -- Terminology and definitions -- Framework; ISO/AWI TS 80004-2, Nanotechnologies
The Life Cycle of the Child Care Center -- Understanding Center Growth and Development.
ERIC Educational Resources Information Center
Bess, Gary; Ratekin, Cindy
2001-01-01
Identifies the seven stages of the life cycle for child care centers: entrepreneurial; development; formalization; maturity; stagnation; death; and renewal. Suggests that critical transition points exist for organizational development, and that, if they are aware of and understand each stage of development, administrators may intervene at those…
Comparative life cycle assessment of disposable and reusable laryngeal mask airways.
Eckelman, Matthew; Mosher, Margo; Gonzalez, Andres; Sherman, Jodi
2012-05-01
Growing awareness of the negative impacts from the practice of health care on the environment and public health calls for the routine inclusion of life cycle criteria into the decision-making process of device selection. Here we present a life cycle assessment of 2 laryngeal mask airways (LMAs), a one-time-use disposable Unique™ LMA and a 40-time-use reusable Classic™ LMA. In life cycle assessment, the basis of comparison is called the "functional unit." For this report, the functional unit of the disposable and reusable LMAs was taken to be maintenance of airway patency by 40 disposable LMAs or 40 uses of 1 reusable LMA. This was a cradle-to-grave study that included inputs and outputs for the manufacture, transport, use, and waste phases of the LMAs. The environmental impacts of the 2 LMAs were estimated using SimaPro life cycle assessment software and the Building for Environmental and Economic Sustainability impact assessment method. Sensitivity and simple life cycle cost analyses were conducted to aid in interpretation of the results. The reusable LMA was found to have a more favorable environmental profile than the disposable LMA as used at Yale New Haven Hospital. The most important sources of impacts for the disposable LMA were the production of polymers, packaging, and waste management, whereas for the reusable LMA, washing and sterilization dominated for most impact categories. The differences in environmental impacts between these devices strongly favor reusable devices. These benefits must be weighed against concerns regarding transmission of infection. Health care facilities can decrease their environmental impacts by using reusable LMAs, to a lesser extent by selecting disposable LMA models that are not made of certain plastics, and by ordering in bulk from local distributors. Certain practices would further reduce the environmental impacts of reusable LMAs, such as increasing the number of devices autoclaved in a single cycle to 10 (-25% GHG emissions) and improving the energy efficiency of the autoclaving machines by 10% (-8% GHG emissions). For both environmental and cost considerations, management and operating procedures should be put in place to ensure that reusable LMAs are not discarded prematurely.
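Once per-phase impacts have been characterized, the functional-unit comparison described above reduces to simple bookkeeping: 40 single-use devices versus one reusable device plus 40 wash-and-sterilization cycles. The sketch below shows that arithmetic with hypothetical placeholder figures, not the study's SimaPro/BEES results.

```python
# Hypothetical cradle-to-grave greenhouse gas figures (kg CO2-eq); not the study's data.
disposable_per_device = 0.20    # manufacture + transport + disposal of one single-use LMA
reusable_device = 1.50          # manufacture + transport + end-of-life of one reusable LMA
wash_sterilize_per_use = 0.10   # washing and autoclaving burden allocated to each use

uses = 40                       # functional unit: 40 airway maintenances
disposable_total = uses * disposable_per_device
reusable_total = reusable_device + uses * wash_sterilize_per_use
print(disposable_total, reusable_total)  # 8.0 vs 5.5 under these placeholder assumptions
```

Increasing the autoclave batch size or the sterilizer's energy efficiency lowers the per-use washing burden, which is exactly the lever the study quantifies for the reusable device.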
Software Process Assurance for Complex Electronics
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Complex Electronics (CE) now perform tasks that were previously handled in software, such as communication protocols. Many methods used to develop software bear a close resemblance to CE development. Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic errors, and unexpected interactions within the logic is great. With CE devices obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that used standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices, and techniques were used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that was more easily maintained, consistent, and configurable based on the device used.
NASA Astrophysics Data System (ADS)
Ames, D. P.; Peterson, M.; Larsen, J.
2016-12-01
A steady flow of manuscripts describing integrated water resources management (IWRM) modelling has been published in Environmental Modelling & Software since the journal's inaugural issue in 1997. These papers represent two decades of peer-reviewed scientific knowledge regarding methods, practices, and protocols for conducting IWRM. We have undertaken to explore this specific assemblage of literature with the intention of identifying commonly reported procedures in terms of data integration methods, modelling techniques, approaches to stakeholder participation, means of communication of model results, and other elements of the model development and application life cycle. Initial results from this effort will be presented including a summary of commonly used practices, and their evolution over the past two decades. We anticipate that results will show a pattern of movement toward greater use of both stakeholder/participatory modelling methods as well as increased use of automated methods for data integration and model preparation. Interestingly, such results could be interpreted to show that the availability of better, faster, and more integrated software tools and technologies free the modeler to take a less technocratic and more human approach to water resources modelling.
RDD-100 and the systems engineering process
NASA Technical Reports Server (NTRS)
Averill, Robert D.
1994-01-01
An effective systems engineering approach applied through the project life cycle can help Langley produce a better product. This paper demonstrates how an enhanced systems engineering process for in-house flight projects assures that each system will achieve its goals with quality performance and within planned budgets and schedules. This paper also describes how the systems engineering process can be used in combination with available software tools.
Early-Life Origins of Life-Cycle Well-Being: Research and Policy Implications
ERIC Educational Resources Information Center
Currie, Janet; Rossin-Slater, Maya
2015-01-01
Mounting evidence across different disciplines suggests that early-life conditions can have consequences on individual outcomes throughout the life cycle. Relative to other developed countries, the United States fares poorly on standard indicators of early-life health, and this disadvantage may have profound consequences not only for population…
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use various CASE systems during design and development. The subject under discussion is the integration of such systems into the development process, their effective use, and the combination of these new methods with the need to produce optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.
PEO Life Cycle Cost Accountability: Viability of Foreign Suppliers for Weapon System Development
2016-02-16
AIR WAR COLLEGE, AIR UNIVERSITY. PEO Life Cycle Cost Accountability: Viability of Foreign Suppliers for Weapon System Development. By... to decrease, then recycling may become more economically feasible. The need for the U.S. to develop affordable technologies for recycling has become
ANALYZING SHORT CUT METHODS FOR LIFE CYCLE ASSESSMENT INVENTORIES
Work in progress at the U.S. EPA's National Risk Management Research Laboratory is developing methods for quickly, easily, and inexpensively developing Life Cycle Assessment (LCA) inventories. An LCA inventory represents the inputs and outputs from processes, including fuel and ...
Rivas-García, Pasiano; Botello-Álvarez, José E; Abel Seabra, Joaquim E; da Silva Walter, Arnaldo C; Estrada-Baltazar, Alejandro
2015-01-01
The environmental profile of milk production in Mexico was analysed for three manure management scenarios: fertilization (F), anaerobic digestion (AD) and enhanced anaerobic digestion (EAD). The study used the life cycle assessment (LCA) technique, considering a 'cradle-to-gate' approach. The assessment model was constructed using SimaPro LCA software, and the life cycle impact assessment was performed according to the ReCiPe method. Dairy farms with AD and EAD scenarios were found to exhibit, respectively, 12% and 27% less greenhouse gas emissions, 58% and 31% less terrestrial acidification, and 3% and 18% less freshwater eutrophication than the F scenario. A different trend was observed in the damage to resource availability indicator, as the F scenario presented 6% and 22% less damage than the EAD and AD scenarios, respectively. The magnitude of environmental damage from milk production in the three dairy manure management scenarios, using a general single score indicator, was 0.118, 0.107 and 0.081 Pt/L of milk for the F, AD and EAD scenarios, respectively. These results indicate that manure management systems with anaerobic digestion can improve the environmental profile of each litre of milk produced.
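For readers unfamiliar with single-score indicators such as the Pt/L values reported above, the following sketch illustrates the general normalization-and-weighting pattern behind them; the category results, normalization references and weights are invented placeholders, not ReCiPe factors.

```python
# Sketch of how a single-score indicator can be built from category results
# by normalization and weighting. All factors are placeholders, not ReCiPe values.

def single_score(category_results, normalization, weights):
    """Sum of weight * (result / normalization reference) over impact categories."""
    return sum(weights[c] * category_results[c] / normalization[c]
               for c in category_results)

scenario_f = {"climate change": 1.2, "terrestrial acidification": 0.03,
              "freshwater eutrophication": 0.002}   # assumed per-litre results
normalization = {"climate change": 8000.0, "terrestrial acidification": 40.0,
                 "freshwater eutrophication": 0.5}  # assumed reference values
weights = {"climate change": 400, "terrestrial acidification": 300,
           "freshwater eutrophication": 300}        # assumed weights

score = single_score(scenario_f, normalization, weights)
print(f"Scenario F single score: {score:.4f} (placeholder Pt/L)")
```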
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shonder, John A; Hughes, Patrick; Atkin, Erica
2006-11-01
A study was sponsored by FEMP in 2001-2002 to develop methods to compare life-cycle costs of federal energy conservation projects carried out through energy savings performance contracts (ESPCs) and projects that are directly funded by appropriations. The study described in this report follows up on the original work, taking advantage of new pricing data on equipment and on $500 million worth of Super ESPC projects awarded since the end of FY 2001. The methods developed to compare life-cycle costs of ESPCs and directly funded energy projects are based on the following tasks: (1) Verify the parity of equipment prices in ESPC vs. directly funded projects; (2) Develop a representative energy conservation project; (3) Determine representative cycle times for both ESPCs and appropriations-funded projects; (4) Model the representative energy project implemented through an ESPC and through appropriations funding; and (5) Calculate the life-cycle costs for each project.
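Task 5, calculating the life-cycle cost of each delivery route, reduces to a present-value comparison. A minimal sketch follows, with assumed cash flows rather than the report's figures; the discount rate, payment stream and upfront cost are illustrative only.

```python
# Sketch of a discounted life-cycle cost comparison between two delivery routes
# for the same energy conservation project. All inputs are assumed placeholders.

def life_cycle_cost(upfront, annual_payments, discount_rate):
    """Present value of an upfront cost plus a stream of end-of-year payments."""
    pv_payments = sum(p / (1 + discount_rate) ** (t + 1)
                      for t, p in enumerate(annual_payments))
    return upfront + pv_payments

years = 15
rate = 0.03
# Appropriations route: pay construction up front, keep all energy savings.
direct = life_cycle_cost(upfront=1_000_000, annual_payments=[0] * years, discount_rate=rate)
# ESPC route: no upfront appropriation, contractor repaid from savings each year.
espc = life_cycle_cost(upfront=0, annual_payments=[95_000] * years, discount_rate=rate)

print(f"Directly funded LCC: ${direct:,.0f}")
print(f"ESPC LCC:            ${espc:,.0f}")
```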
Foolmaun, Rajendra Kumar; Ramjeeawon, Toolseeram
2012-09-01
The annual rise in population coupled with the flourishing tourism industry in Mauritius has led to a considerable increase in the amount of solid waste generated. In parallel, the disposal of non-biodegradable wastes, especially plastic packaging and plastic bottles, has also shown a steady rise. Improper disposal of used polyethylene terephthalate (PET) bottles constitutes an eyesore to the environmental landscape and is a threat to the flourishing tourism industry. It is of utmost importance, therefore, to determine a suitable disposal method for used PET bottles which is not only environmentally efficient but is also cost effective. This study investigated the environmental impacts and the cost effectiveness of four selected disposal alternatives for used PET bottles in Mauritius. The four disposal routes investigated were: 100% landfilling; 75% incineration with energy recovery and 25% landfilling; 40% flake production (partial recycling) and 60% landfilling; and 75% flake production and 25% landfilling. Environmental impacts of the disposal alternatives were determined using ISO standardized life cycle assessment (LCA) and with the support of SimaPro 7.1 software. Cost effectiveness was determined using life cycle costing (LCC). Collected data were entered into a constructed Excel-based model to calculate the different cost categories, net present values, damage costs and payback periods. LCA and LCC results indicated that 75% flake production and 25% landfilling was the most environmentally efficient and cost-effective disposal route for used PET bottles in Mauritius.
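The payback-period element of such an LCC can be illustrated with a very small sketch; the investment and savings figures below are assumptions, not values from the study's Excel-based model.

```python
# Sketch of the simple payback-period calculation used when screening disposal
# routes; the cash flows are invented placeholders.

def payback_period_years(initial_investment, annual_net_savings):
    """Simple (undiscounted) payback period in years."""
    if annual_net_savings <= 0:
        return float("inf")
    return initial_investment / annual_net_savings

# Assumed figures for a flake-production (partial recycling) facility.
investment = 2_500_000        # plant and equipment
net_savings = 420_000         # flake revenue minus operating cost, per year
print(f"Payback period: {payback_period_years(investment, net_savings):.1f} years")
```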
Design of a secure remote management module for a software-operated medical device.
Burnik, Urban; Dobravec, Štefan; Meža, Marko
2017-12-09
Software-based medical devices need to be maintained throughout their entire life cycle. The efficiency of after-sales maintenance can be improved by managing medical systems remotely. This paper presents how to design remote access function extensions in order to prevent the risks imposed by uncontrolled remote access. A thorough analysis of the standards and legislative requirements regarding safe operation and risk management of medical devices is presented. Based on the formal requirements, a multi-layer machine design solution is proposed that eliminates remote connectivity risks by strictly separating regular device functionality from the remote management service, deploying encrypted communication links, and using digital signatures to prevent mishandling of software images. The proposed system can also be applied as an efficient version update to existing medical device designs.
Development of high-rise buildings: digitalization of life cycle management
NASA Astrophysics Data System (ADS)
Gusakova, Elena
2018-03-01
The analysis of accumulated long-term experience in the construction and operation of high-rise buildings reveals not only the engineering specifics of such projects but also systemic problems in project management. Most project decisions are made by the developer and the investor in the early stages of the life cycle, from site acquisition to the start of operation, so most participants in the construction and operation of a high-rise building are removed from the strategic life-cycle management of the project. Addressing these problems through the informatization of management alone has largely exhausted its potential, because the IT systems applied so far have automated traditional "inherited" processes and management structures and, in addition, have focused on informatizing the activities of the construction company rather than the construction project. Therefore, in the development of high-rise buildings, research into approaches and methods for managing the full project life cycle, and thereby improving competitiveness, becomes relevant. To this end, the article substantiates the most promising approaches and methods of information modeling of high-rise construction as a basis for managing the full life cycle of such projects. The paper considers the reengineering of information-exchange schemes among project participants, the formation of a unified digital environment for the project life cycle, and the development of systems that integrate data management and project management.
Expert System Development Methodology (ESDM)
NASA Technical Reports Server (NTRS)
Sary, Charisse; Gilstrap, Lewey; Hull, Larry G.
1990-01-01
The Expert System Development Methodology (ESDM) provides an approach to developing expert system software. Because of the uncertainty associated with this process, an element of risk is involved. ESDM is designed to address the issue of risk and to acquire the information needed for this purpose in an evolutionary manner. ESDM presents a life cycle in which a prototype evolves through five stages of development. Each stage consists of five steps, leading to a prototype for that stage. Development may proceed to a conventional development methodology (CDM) at any time if enough has been learned about the problem to write requirements. ESDM produces requirements so that a product may be built with a CDM. ESDM is considered preliminary because it has not yet been applied to actual projects. It has been retrospectively evaluated by comparing the methods used in two ongoing expert system development projects that did not explicitly choose to use this methodology but which provided useful insights into actual expert system development practices and problems.
Development of a Unix/VME data acquisition system
NASA Astrophysics Data System (ADS)
Miller, M. C.; Ahern, S.; Clark, S. M.
1992-01-01
The current status of a Unix-based VME data acquisition development project is described. It is planned to use existing Fortran data collection software to drive the existing CAMAC electronics via a VME CAMAC branch driver card and associated Daresbury Unix driving software. The first usable Unix driver has been written and produces single-action CAMAC cycles from test software. The data acquisition code has been implemented in test mode under Unix with few problems and effort is now being directed toward finalizing calls to the CAMAC-driving software and ultimate evaluation of the complete system.
AIBench: a rapid application development framework for translational research in biomedicine.
Glez-Peña, D; Reboiro-Jato, M; Maia, P; Rocha, M; Díaz, F; Fdez-Riverola, F
2010-05-01
Applied research in both biomedical discovery and translational medicine today often requires the rapid development of fully featured applications containing both advanced and specific functionalities, for real use in practice. In this context, new tools are demanded that allow for efficient generation, deployment and reutilization of such biomedical applications as well as their associated functionalities. Against this background, this paper presents AIBench, an open-source Java desktop application framework for scientific software development with the goal of providing support to both fundamental and applied research in the domain of translational biomedicine. AIBench incorporates a powerful plug-in engine, a flexible scripting platform and takes advantage of Java annotations, reflection and various design principles in order to make it easy to use, lightweight and non-intrusive. By following a basic input-processing-output life cycle, it is possible to fully develop multiplatform applications using only three types of concepts: operations, data-types and views. The framework automatically provides functionalities that are present in a typical scientific application including user parameter definition, logging facilities, multi-threading execution, experiment repeatability and user interface workflow management, among others. The proposed framework architecture defines a reusable component model which also allows assembling new applications by the reuse of libraries from past projects or third-party software. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Navy Program Manager’s Guide, 1985 Edition
1985-01-01
Relationship of Development Cost in System Life-Cycle Cost (LCC)... Realistic Costing and Budgeting... Review (PROR)... First-Article Configuration Inspection (FACI)... Cost Management: Life-Cycle Costing (LCC)... innovation and minimize costs. 4. Consideration of life-cycle cost (LCC) such that affordability is put on an equal basis with system performance, schedule
ERIC Educational Resources Information Center
Cankaya, Serkan; Kuzu, Abdullah
2018-01-01
Mobile skill teaching software has been developed for the parents of the children with intellectual disability to be used in teaching daily life skills. The purpose of this research is to investigate the effectiveness of the mobile skill teaching software developed for the use of the parents of the children with intellectual disability. In…
Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Mount, Frances; Carreon, Patricia; Torney, Susan E.
2001-01-01
The Engineering and Mission Operations Directorates at NASA Johnson Space Center are combining laboratories and expertise to establish the Human Centered Autonomous and Assistant Systems Testbed for Exploration Operations. This is a testbed for human centered design, development and evaluation of intelligent autonomous and assistant systems that will be needed for human exploration and development of space. This project will improve human-centered analysis, design and evaluation methods for developing intelligent software. This software will support human-machine cognitive and collaborative activities in future interplanetary work environments where distributed computer and human agents cooperate. We are developing and evaluating prototype intelligent systems for distributed multi-agent mixed-initiative operations. The primary target domain is control of life support systems in a planetary base. Technical approaches will be evaluated for use during extended manned tests in the target domain, the Bioregenerative Advanced Life Support Systems Test Complex (BIO-Plex). A spinoff target domain is the International Space Station (ISS) Mission Control Center (MCC). Products of this project include human-centered intelligent software technology, innovative human interface designs, and human-centered software development processes, methods and products. The testbed uses adjustable autonomy software and life support systems simulation models from the Adjustable Autonomy Testbed, to represent operations on the remote planet. Ground operations prototypes and concepts will be evaluated in the Exploration Planning and Operations Center (ExPOC) and Jupiter Facility.
Progress in Multi-Disciplinary Data Life Cycle Management
NASA Astrophysics Data System (ADS)
Jung, C.; Gasthuber, M.; Giesler, A.; Hardt, M.; Meyer, J.; Prabhune, A.; Rigoll, F.; Schwarz, K.; Streit, A.
2015-12-01
Modern science is most often driven by data. Improvements in state-of-the-art technologies and methods in many scientific disciplines lead not only to increasing data rates, but also to the need to improve or even completely overhaul their data life cycle management. Communities usually face two kinds of challenges: generic ones like federated authorization and authentication infrastructures and data preservation, and ones that are specific to their community and their respective data life cycle. In practice, the specific requirements often hinder the use of generic tools and methods. The German Helmholtz Association project "Large-Scale Data Management and Analysis" (LSDMA) addresses both challenges: its five Data Life Cycle Labs (DLCLs) closely collaborate with communities in joint research and development to optimize the communities' data life cycle management, while its Data Services Integration Team (DSIT) provides generic data tools and services. We present the most recent developments and results from the DLCLs, covering communities ranging from heavy ion physics and photon science to high-throughput microscopy, and from DSIT.
Power generation using sugar cane bagasse: A heat recovery analysis
NASA Astrophysics Data System (ADS)
Seguro, Jean Vittorio
The sugar industry is facing the need to improve its performance by increasing efficiency and developing profitable by-products. An important possibility is the production of electrical power for sale. Co-generation has been practiced in the sugar industry for a long time in a very inefficient way with the main purpose of getting rid of the bagasse. The goal of this research was to develop a software tool that could be used to improve the way that bagasse is used to generate power. Special focus was given to the heat recovery components of the co-generation plant (economizer, air pre-heater and bagasse dryer) to determine if one, or a combination, of them led to a more efficient co-generation cycle. An extensive review of the state of the art of power generation in the sugar industry was conducted and is summarized in this dissertation. Based on this review, models were developed. After testing the models and comparing the results with the data collected from the literature, a software application that integrated all these models was developed to simulate the complete co-generation plant. Seven different cycles, three different pressures, and sixty-eight distributions of the flue gas through the heat recovery components can be simulated. The software includes an economic analysis tool that can help the designer determine the economic feasibility of different options. Results from running the simulation are presented that demonstrate its effectiveness in evaluating and comparing the different heat recovery components and power generation cycles. These results indicate that the economizer is the most beneficial option for heat recovery and that the use of waste heat in a bagasse dryer is the least desirable option. Quantitative comparisons of several possible cycle options with the widely-used traditional back-pressure turbine cycle are given. These indicate that a double extraction condensing cycle is best for co-generation purposes. Power generation gains between 40 and 100% are predicted for some cycles with the addition of optimum heat recovery systems.
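The heat-recovery comparison rests on a simple energy balance: heat captured from the flue gas offsets bagasse that the boiler would otherwise burn. A minimal sketch follows, with assumed property values and flows that are not taken from the dissertation.

```python
# Sketch of the flue-gas heat recovery estimate behind an economizer comparison.
# Flow rates, specific heat, temperatures, heating value and boiler efficiency
# are illustrative assumptions only.

def recovered_heat_kw(flue_gas_kg_s, cp_kj_per_kg_k, t_in_c, t_out_c):
    """Sensible heat recovered when flue gas is cooled from t_in to t_out."""
    return flue_gas_kg_s * cp_kj_per_kg_k * (t_in_c - t_out_c)

def fuel_saved_kg_s(q_recovered_kw, fuel_lhv_kj_per_kg, boiler_efficiency):
    """Bagasse saved by returning the recovered heat to the boiler."""
    return q_recovered_kw / (fuel_lhv_kj_per_kg * boiler_efficiency)

q = recovered_heat_kw(flue_gas_kg_s=50.0, cp_kj_per_kg_k=1.05, t_in_c=300.0, t_out_c=180.0)
saved = fuel_saved_kg_s(q, fuel_lhv_kj_per_kg=7500.0, boiler_efficiency=0.65)
print(f"Recovered heat: {q:.0f} kW, bagasse saved: {saved:.2f} kg/s")
```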
Autonomy and integration in complex parasite life cycles.
Benesh, Daniel P
2016-12-01
Complex life cycles are common in free-living and parasitic organisms alike. The adaptive decoupling hypothesis postulates that separate life cycle stages have a degree of developmental and genetic autonomy, allowing them to be independently optimized for dissimilar, competing tasks. That is, complex life cycles evolved to facilitate functional specialization. Here, I review the connections between the different stages in parasite life cycles. I first examine evolutionary connections between life stages, such as the genetic coupling of parasite performance in consecutive hosts, the interspecific correlations between traits expressed in different hosts, and the developmental and functional obstacles to stage loss. Then, I evaluate how environmental factors link life stages through carryover effects, where stressful larval conditions impact parasites even after transmission to a new host. There is evidence for both autonomy and integration across stages, so the relevant question becomes how integrated are parasite life cycles and through what mechanisms? By highlighting how genetics, development, selection and the environment can lead to interdependencies among successive life stages, I wish to promote a holistic approach to studying complex life cycle parasites and emphasize that what happens in one stage is potentially highly relevant for later stages.
Design and life-cycle considerations for unconventional-reservoir wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miskimins, J.L.
2009-05-15
This paper provides an overview of design and life-cycle considerations for certain unconventional-reservoir wells. An overview of unconventional-reservoir definitions is provided. Well design and life-cycle considerations are addressed from three aspects: upfront reservoir development, initial well completion, and well-life and long-term considerations. Upfront-reservoir-development issues discussed include well spacing, well orientation, reservoir stress orientations, and tubular metallurgy. Initial-well-completion issues include maximum treatment pressures and rates, treatment diversion, treatment staging, flowback and cleanup, and dewatering needs. Well-life and long-term discussions include liquid loading, corrosion, refracturing and associated fracture reorientation, and the cost of abandonment. These design considerations are evaluated with case studies for five unconventional-reservoir types: shale gas (Barnett shale), tight gas (Jonah field), tight oil (Bakken play), coalbed methane (CBM) (San Juan basin), and tight heavy oil (Lost Hills field). In evaluating the life cycle and design of unconventional-reservoir wells, 'one size' does not fit all, and valuable knowledge and a shortening of the learning curve can be achieved for new developments by studying similar, more-mature fields.
Life cycle management of analytical methods.
Parr, Maria Kristina; Schmidt, Alexander H
2018-01-05
In modern process management, the life cycle concept is gaining importance. It focuses on the total costs of the process from investment through operation to final retirement. In recent years, interest in this concept has also grown for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and, finally, retirement of the method. Regulatory bodies, too, have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. A growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness, which reduces the effort needed for method performance verification and post-approval changes and minimizes the risk of method-related out-of-specification results. This strongly contributes to reduced costs of the method over its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.
The Production Data Approach for Full Lifecycle Management
NASA Astrophysics Data System (ADS)
Schopf, J.
2012-04-01
The amount of data generated by scientists is growing exponentially, and studies have shown [Koe04] that un-archived data sets have a resource half-life that is only a fraction of that of resources that are electronically archived. Most groups still lack standard approaches and procedures for data management. Arguably, however, scientists know something about building software. A recent article in Nature [Mer10] stated that 45% of research scientists spend more time now developing software than they did 5 years ago, and 38% spent at least one fifth of their time developing software. Fox argues [Fox10] that a simple release of data is not the correct approach to data curation. In addition, just as software is used in a wide variety of ways never initially envisioned by its developers, we are seeing this to an even greater extent with data sets. In order to address the need for better data preservation and access, we propose that data sets should be managed in a similar fashion to building production-quality software. These production data sets are not simply published once, but go through a cyclical process, including phases such as design, development, verification, deployment, support, analysis, and then development again, thereby supporting the full life cycle of a data set. The process involved in academically produced software changes over time with respect to issues such as how much it is used outside the development group, but factors in aspects such as knowing who is using the code, enabling multiple developers to contribute to code development with common procedures, formal testing and release processes, developing documentation, and licensing. When we work with data, whether as a collection source, as someone tagging data, or as someone re-using it, many of the lessons learned in building production software are applicable. Table 1 compares production software elements with production data elements.

Table 1: Comparison of production software and production data.
  Production Software | Production Data
  End-user considerations | End-user considerations
  Multiple coders: repository with check-in procedures; coding standards | Multiple producers/collectors: local archive with check-in procedure; metadata standards
  Formal testing | Formal testing
  Bug tracking and fixes | Bug tracking and fixes, QA/QC
  Documentation | Documentation
  Formal release process | Formal release process to external archive
  License | Citation/usage statement

The full presentation of this abstract will include a detailed discussion of these issues so that researchers can produce usable and accessible data sets as a first step toward reproducible science. By creating production-quality data sets, we extend the potential of our data, both in terms of usability and usefulness to ourselves and other researchers. The more we treat data with formal processes and release cycles, the more relevant and useful it can be to the scientific community.
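The "formal release process" row of Table 1 can be made concrete as a small gate script that blocks publication until the required elements are present; the field names below are illustrative assumptions, not a proposed standard.

```python
# Sketch of a release-process gate for a production data set, mirroring the
# checklist character of Table 1. Required fields are invented placeholders.

REQUIRED_METADATA = ["title", "creators", "collection_method", "units",
                     "qa_qc_status", "citation_statement", "license_or_usage"]

def release_check(record):
    """Return the list of missing release requirements for a data set record."""
    missing = [field for field in REQUIRED_METADATA if not record.get(field)]
    if not record.get("tests_passed", False):
        missing.append("formal testing")
    return missing

candidate = {"title": "Stream gauge archive 2011", "creators": ["J. Doe"],
             "units": "m3/s", "tests_passed": True}
problems = release_check(candidate)
print("Ready for external archive" if not problems else f"Blocked: missing {problems}")
```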
Schroeder, Jenna N.
2014-06-10
This report examines life cycle water consumption for various geothermal technologies to better understand factors that affect water consumption across the life cycle (e.g., power plant cooling, belowground fluid losses) and to assess the potential water challenges that future geothermal power generation projects may face. Previous reports in this series quantified the life cycle freshwater requirements of geothermal power-generating systems, explored operational and environmental concerns related to the geochemical composition of geothermal fluids, and assessed future water demand by geothermal power plants according to growth projections for the industry. This report seeks to extend those analyses by including EGS flash, both as part of the life cycle analysis and water resource assessment. A regional water resource assessment based upon the life cycle results is also presented. Finally, the legal framework of water with respect to geothermal resources in the states with active geothermal development is also analyzed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-23
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2011-N-0780] Bridging the Idea Development Evaluation Assessment and Long-Term Initiative and Total Product Life Cycle... Idea Development Evaluation Assessment and Long-Term Initiative and Total Product Life Cycle Approaches...
Vahidi, Ehsan; Zhao, Fu
2017-12-01
Over the past decade, Rare Earth Elements (REEs) have gained special interest due to their significance in many industrial applications, especially those related to clean energy. While REE production is known to cause damage to the ecosystem, only a handful of Life Cycle Assessment (LCA) investigations have been conducted in recent years, mainly due to lack of data and information. This is especially true for the solvent extraction separation of REEs from aqueous solution, which is a challenging step in the REE production route. In the current investigation, an LCA is carried out on a typical REE solvent extraction process using P204/kerosene, and the energy/material flow and emissions data were collected from two different solvent extraction facilities in Inner Mongolia and Fujian provinces in China. In order to develop life cycle inventories, Ecoinvent 3 and SimaPro 8 software together with energy/mass stoichiometry and balance were utilized. TRACI and ILCD were applied as impact assessment tools, and LCA outcomes were employed to examine and determine the ecological burdens of the REE solvent extraction operation. Based on the results, in comparison with the production of generic organic solvent in the Ecoinvent dataset, P204 production has greater burdens in all TRACI impact categories. However, due to the small amount consumed, the contribution of P204 remains minimal. Additionally, sodium hydroxide and hydrochloric acid are the two chemicals used in the solvent extraction operation with the greatest impact on most environmental categories. On average, the solvent extraction step accounts for 30% of the total environmental impacts associated with individual REOs. Finally, opportunities and challenges for an enhanced environmental performance of the REE solvent extraction operation were investigated. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Naldesi, Luciano; Buttol, Patrizia; Masoni, Paolo; Misceo, Monica; Sára, Balázs
2004-12-01
"eLCA" is a European Commission financed project aimed at realising "On line green tools and services for Small and Medium-sized Enterprises (SMEs)". Knowledge and use of Life Cycle Assessment (LCA) by SMEs are strategic to introduce the Integrated Product Policy (IPP) in Europe, but methodology simplification is needed. LCA requires a large amount of validated general and sector specific data. Since their availability and cost can be insuperable barriers for SMEs, pre-elaborated data/meta-data, use of standards and low cost solutions are required. Within the framework of the eLCA project an LCA software - eVerdEE - based on a simplified methodology and specialised for SMEs has been developed. eVerdEE is a web-based tool with some innovative features. Its main feature is the adaptation of ISO 14040 requirements to offer easy-to-handle functions with solid scientific bases. Complex methodological problems, such as the system boundaries definition, the data quality estimation and documentation, the choice of impact categories, are simplified according to the SMEs" needs. Predefined "Goal and Scope definition" and "Inventory" forms, a user-friendly and well structured procedure are time and cost-effective. The tool is supported by a database containing pre-elaborated environmental indicators of substances and processes for different impact categories. The impact assessment is calculated automatically by using the user"s input and the database values. The results have different levels of interpretation in order to identify the life cycle critical points and the improvement options. The use of a target plot allows the direct comparison of different design alternatives.
Barclay, Katie
2011-01-01
Traditionally marriage has been treated as one step in the life cycle, between youth and old age, singleness and widowhood. Yet an approach to the life cycle that treats marriage as a single step in a person's life is overly simplistic. During the eighteenth century many marriages were of considerable longevity during which time couples aged together and power dynamics within the home were frequently renegotiated to reflect changing circumstances. This study explores how intimacy developed and changed over the life cycle of marriage and what this meant for power, through a study of the correspondence of two elite Scottish couples.
Section 508 Electronic Information Accessibility Requirements for Software Development
NASA Technical Reports Server (NTRS)
Ellis, Rebecca
2014-01-01
Section 508 Subpart B 1194.21 outlines requirements for operating system and software development in order to create a product that is accessible to users with various disabilities. This portion of Section 508 contains a variety of standards to enable those using assistive technology and with visual, hearing, cognitive and motor difficulties to access all information provided in software. The focus on requirements was limited to the Microsoft Windows® operating system as it is the predominant operating system used at this center. Compliance with this portion of the requirements can be obtained by integrating the requirements into the software development cycle early and by remediating issues in legacy software if possible. There are certain circumstances with software that may arise necessitating an exemption from these requirements, such as design or engineering software using dynamically changing graphics or numbers to convey information. These exceptions can be discussed with the Section 508 Coordinator and another method of accommodation used.
Understanding growth and development of forage plants
USDA-ARS?s Scientific Manuscript database
Understanding the developmental morphology of forage plants is important for making good management decisions. Many such decisions involve timing the initiation or termination of a management practice to a particular stage of development in the life cycle of the forage. The life cycles of forage pl...
Research requirements to reduce civil helicopter life cycle cost
NASA Technical Reports Server (NTRS)
Blewitt, S. J.
1978-01-01
The problem of the high cost of helicopter development, production, operation, and maintenance is defined and the cost drivers are identified. Helicopter life cycle costs would decrease by about 17 percent if currently available technology were applied. With advanced technology, a reduction of about 30 percent in helicopter life cycle costs is projected. Technological and managerial deficiencies which contribute to high costs are examined. Basic research and development projects which can reduce costs include methods for reduced fuel consumption; improved turbine engines; airframe and engine production methods; safety; rotor systems; and advanced transmission systems.
A systematic composite service design modeling method using graph-based theory.
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.
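Although ComSDM's actual mathematical representation is not reproduced here, the general idea of treating a composite service design as a graph can be sketched as follows; the service names and the simple coupling and fan-in measures are illustrative assumptions only.

```python
# Sketch of a graph-based view of a composite service design: services as
# nodes, dependencies as directed edges, with two simple design measures.

dependencies = {                      # composite service -> services it invokes
    "BookTrip": ["ReserveFlight", "ReserveHotel", "ChargeCard"],
    "ReserveFlight": ["ChargeCard"],
    "ReserveHotel": ["ChargeCard"],
    "ChargeCard": [],
}

def coupling(graph):
    """Average number of outgoing dependencies per service."""
    return sum(len(targets) for targets in graph.values()) / len(graph)

def fan_in(graph):
    """How many services depend on each service (a crude reuse indicator)."""
    counts = {service: 0 for service in graph}
    for targets in graph.values():
        for target in targets:
            counts[target] += 1
    return counts

print(f"Average coupling: {coupling(dependencies):.2f}")
print(f"Fan-in: {fan_in(dependencies)}")
```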
STAR-CCM+ Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
2016-09-30
The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.
An automated methodology development. [software design for combat simulation
NASA Technical Reports Server (NTRS)
Hawley, L. R.
1985-01-01
The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development efforts. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real-time, and one-to-one correlation between the object states and real world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem data base directly to enhance the code efficiency by, e.g., eliminating non-used subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.
Automated Theorem Proving in High-Quality Software Design
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
The amount and complexity of software developed during the last few years has increased tremendously. In particular, programs are being used more and more in embedded systems (from car brakes to plant control). Many of these applications are safety-relevant, i.e. a malfunction of hardware or software can cause severe damage or loss. Tremendous risks are typically present in the area of aviation, (nuclear) power plants or (chemical) plant control. Here, even small problems can lead to thousands of casualties and huge financial losses. Large financial risks also exist when computer systems are used in the area of telecommunication (telephone, electronic commerce) or space exploration. Computer applications in this area are not only subject to safety considerations, but also security issues are important. All these systems must be designed and developed to guarantee high quality with respect to safety and security. Even in an industrial setting which is (or at least should be) aware of the high requirements in Software Engineering, many incidents occur. For example, the Warsaw Airbus crash was caused by an incomplete requirements specification. Uncontrolled reuse of an Ariane 4 software module was the reason for the Ariane 5 disaster. Some recent incidents in the telecommunication area, like illegal "cloning" of smart-cards of D2 GSM handsets, or the extraction of (secret) passwords from German T-Online users, show that serious flaws can happen in this area as well. Due to the inherent complexity of computer systems, most authors claim that only a rigorous application of formal methods in all stages of the software life cycle can ensure high quality of the software and lead to truly safe and secure systems. In this paper, we examine to what extent automated theorem proving can contribute to a more widespread application of formal methods and their tools, and what automated theorem provers (ATPs) must provide in order to be useful.
Risk Management Considerations for Interoperable Acquisition
2006-08-01
Electronics Engineers (IEEE) to harmonize the standards for software (IEEE 12207) and system (IEEE 15288) life-cycle processes. A goal of this harmonization... management (ISO/IEC 16085) is being generalized to apply to the systems level. The revised, generalized standard will add requirements and guidance for the... risk management. The documents include the following: • ISO/IEC Guide 73: Risk Management—Vocabulary—Guidelines for use in standards [ISO 02
NASA Technical Reports Server (NTRS)
1993-01-01
Under a NASA Small Business Innovation Research (SBIR) contract, Axiomatics Corporation developed a shunting Dielectric Sensor to determine the nutrient level and analyze plant nutrient solutions in the CELSS, NASA's space life support program. (CELSS is an experimental facility investigating closed-cycle plant growth and food processing for long duration manned missions.) The DiComp system incorporates a shunt electrode and is especially sensitive to changes in the dielectric properties of materials, detecting changes at levels much lower than conventional sensors. The analyzer has exceptional capabilities for predicting the composition of liquid streams or reactions. It measures concentrations and solids content up to 100 percent in applications like agricultural products, petrochemicals, food and beverages. The sensor is easily installed; maintenance is low, and it can be calibrated on line. The software automates data collection and analysis.
A Dependable Massive Storage Service for Medical Imaging.
Núñez-Gaona, Marco Antonio; Marcelín-Jiménez, Ricardo; Gutiérrez-Martínez, Josefina; Aguirre-Meneses, Heriberto; Gonzalez-Compean, José Luis
2018-05-18
We present the construction of Babel, a distributed storage system that meets stringent requirements on dependability, availability, and scalability. Together with Babel, we developed an application that uses our system to store medical images. Accordingly, we show the feasibility of our proposal to provide an alternative solution for massive scientific storage and describe the software architecture style that manages the DICOM image life cycle, utilizing Babel as a virtual local storage component for a picture archiving and communication system (PACS-Babel Interface). Furthermore, we describe the communication interface in the Unified Modeling Language (UML) and show how it can be extended to manage the heavy workload associated with data migration processes on a PACS in case of updates or disaster recovery.
Coping with Variability in Model-Based Systems Engineering: An Experience in Green Energy
NASA Astrophysics Data System (ADS)
Trujillo, Salvador; Garate, Jose Miguel; Lopez-Herrejon, Roberto Erick; Mendialdua, Xabier; Rosado, Albert; Egyed, Alexander; Krueger, Charles W.; de Sosa, Josune
Model-Based Systems Engineering (MBSE) is an emerging engineering discipline whose driving motivation is to provide support throughout the entire system life cycle. MBSE not only addresses the engineering of software systems but also their interplay with physical systems. Quite frequently, successful systems need to be customized to cater for the concrete and specific needs of customers, end-users, and other stakeholders. To effectively meet this demand, it is vital to have in place mechanisms to cope with the variability, the capacity to change, that such customization requires. In this paper we describe our experience in modeling variability using SysML, a leading MBSE language, for developing a product line of wind turbine systems used for the generation of electricity.
Long Life Nickel Electrodes for a Nickel-hydrogen Cell: Cycle Life Tests
NASA Technical Reports Server (NTRS)
Lim, H. S.; Verzwyvelt, S. A.
1984-01-01
In order to develop a long-life nickel electrode for a Ni/H2 cell, cycle life tests of nickel electrodes were carried out in Ni/H2 boilerplate cells. A 19-cell test matrix was made up of various nickel electrode designs, including three levels each of plaque mechanical strength, median pore size of the plaque, and active material loading. Test cells were cycled to the end of their life (0.5 V) in a 45-minute low-earth-orbit cycle regime at 80% depth of discharge. The results show that the active material loading level affects cycle life the most, with the optimum loading at 1.6 g/cc void. Mechanical strength did not noticeably affect cycle life in the bend strength range of 400 to 700 psi. The best plaque type appears to be one made of INCO type 287 nickel powder with a median pore size of 13 microns.
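The test matrix described above spans three levels of three design factors. As a sketch of how such a design space can be enumerated (the level values are assumed, and the study's 19-cell matrix was presumably a subset of the full factorial):

```python
# Sketch of enumerating a three-level, three-factor electrode design space.
# The specific level values are assumptions for illustration only.
from itertools import product

strength_psi = [400, 550, 700]      # assumed plaque bend-strength levels
pore_size_um = [10, 13, 16]         # assumed median pore sizes
loading_g_cc = [1.4, 1.6, 1.8]      # assumed active-material loadings

designs = list(product(strength_psi, pore_size_um, loading_g_cc))
print(f"{len(designs)} full-factorial candidate designs")  # 27 combinations
for strength, pore, load in designs[:3]:
    print(f"strength={strength} psi, pore={pore} um, loading={load} g/cc void")
```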
SDTM - SYSTEM DESIGN TRADEOFF MODEL FOR SPACE STATION FREEDOM RELEASE 1.1
NASA Technical Reports Server (NTRS)
Chamberlin, R. G.
1994-01-01
Although extensive knowledge of space station design exists, the information is widely dispersed. The Space Station Freedom Program (SSFP) needs policies and procedures that ensure the use of consistent design objectives throughout its organizational hierarchy. The System Design Tradeoff Model (SDTM) produces information that can be used for this purpose. SDTM is a mathematical model of a set of possible designs for Space Station Freedom. Using the SDTM program, one can find the particular design which provides specified amounts of resources to Freedom's users at the lowest total (or life cycle) cost. One can also compare alternative design concepts by changing the set of possible designs, while holding the specified user services constant, and then comparing costs. Finally, both costs and user services can be varied simultaneously when comparing different designs. SDTM selects its solution from a set of feasible designs. Feasibility constraints include safety considerations, minimum levels of resources required for station users, budget allocation requirements, time limitations, and Congressional mandates. The total, or life cycle, cost includes all of the U.S. costs of the station: design and development, purchase of hardware and software, assembly, and operations throughout its lifetime. The SDTM development team has identified, for a variety of possible space station designs, the subsystems that produce the resources to be modeled. The team has also developed formulas for the cross consumption of resources by other resources, as functions of the amounts of resources produced. SDTM can find the values of station resources, so that subsystem designers can choose new design concepts that further reduce the station's life cycle cost. The fundamental input to SDTM is a set of formulas that describe the subsystems which make up a reference design. Most of the formulas identify how the resources required by each subsystem depend upon the size of the subsystem. Some of the formulas describe how the subsystem costs depend on size. The formulas can be complicated and nonlinear (if nonlinearity is needed to describe how designs change with size). SDTM's outputs are amounts of resources, life-cycle costs, and marginal costs. SDTM will run on IBM PC/XTs, ATs, and 100% compatibles with 640K of RAM and at least 3Mb of fixed-disk storage. A printer which can print in 132-column mode is also required, and a mathematics co-processor chip is highly recommended. This code is written in Turbo C 2.0. However, since the developers used a modified version of the proprietary Vitamin C source code library, the complete source code is not available. The executable is provided, along with all non-proprietary source code. This program was developed in 1989.
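The core of such a tradeoff model is a constrained search: among the feasible designs, find the one with the lowest life-cycle cost. The sketch below is not SDTM itself; the candidate designs, resource names, and cost figures are invented placeholders used only to show the shape of the computation.

```python
# Sketch of a design tradeoff search: pick the cheapest feasible design that
# still delivers the required user resources. All values are placeholders.

REQUIRED = {"power_kw": 75, "crew_time_hr": 100}   # resources owed to station users

CANDIDATE_DESIGNS = [
    {"name": "A", "life_cycle_cost": 21.0, "power_kw": 80, "crew_time_hr": 110},
    {"name": "B", "life_cycle_cost": 19.5, "power_kw": 70, "crew_time_hr": 130},  # infeasible
    {"name": "C", "life_cycle_cost": 20.2, "power_kw": 90, "crew_time_hr": 105},
]

def feasible(design):
    """A design is feasible if it meets every user-resource requirement."""
    return all(design[resource] >= REQUIRED[resource] for resource in REQUIRED)

best = min((d for d in CANDIDATE_DESIGNS if feasible(d)),
           key=lambda d: d["life_cycle_cost"])
print(f"Lowest-cost feasible design: {best['name']} at {best['life_cycle_cost']} (cost units)")
```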
LED street lighting evaluation -- phase II : LED specification and life-cycle cost analysis.
DOT National Transportation Integrated Search
2015-01-01
Phase II of this study focused on developing a draft specification for LED luminaires to be used by IDOT and a life-cycle cost analysis (LCCA) tool for solid state lighting technologies. The team also researched the latest developments related to...
Formal Methods for Life-Critical Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1993-01-01
The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.
Detailed Life Cycle Assessment of Bounty Paper Towel ...
Life Cycle Assessment (LCA) is a well-established and informative method of understanding the environmental impacts of consumer products across the entire value chain. However, companies committed to sustainability are interested in additional methods that examine the impacts of their products and activities. Methods that build on LCA strengths and illuminate other connected but less understood facets, related to social and economic impacts, would provide greater value to decision-makers. This study is an LCA that calculates the potential impacts associated with Bounty® paper towels from two facilities with different production lines, an older one (Albany, Georgia) representing established technology and the other (Box Elder, Utah) a newer state-of-the-art platform. The study is unique in that it includes the use of Industrial Process Systems Assessment (IPSA) and new electricity and pulp data, is modeled in open source software, and is the basis for the development of new integrated sustainability metrics (published separately). The new metrics can guide supply chain and manufacturing enhancements, and product design related to environmental protection and resource sustainability. Results of the LCA indicate Box Elder had improvements on environmental impact scores related to air emission indicators, except for particulate matter. Albany had lower water use impacts. After normalization of the results, fossil fuel depletion is the most critical environmental indicator. Pulp production, e
NASA Technical Reports Server (NTRS)
Culbert, Chris; French, Scott W.; Hamilton, David
1994-01-01
Knowledge-based systems (KBS's) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use. The identification of these differences affects the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable on-going research in this field. For the past three years IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.
NASA Astrophysics Data System (ADS)
Tanci, Claudio; Tosti, Gino; Antolini, Elisa; Gambini, Giorgio F.; Bruno, Pietro; Canestrari, Rodolfo; Conforti, Vito; Lombardi, Saverio; Russo, Federico; Sangiorgi, Pierluca; Scuderi, Salvatore
2016-08-01
ASTRI is an on-going project developed in the framework of the Cherenkov Telescope Array (CTA). An end-to-end prototype of a dual-mirror small-size telescope (SST-2M) has been installed at the INAF observing station on Mt. Etna, Italy. The next step is the development of the ASTRI mini-array composed of nine ASTRI SST-2M telescopes proposed to be installed at the CTA southern site. The ASTRI mini-array is a collaborative and international effort carried on by Italy, Brazil and South Africa and led by the Italian National Institute of Astrophysics, INAF. To control the ASTRI telescopes, a specific ASTRI Mini-Array Software System (MASS) was designed using a scalable and distributed architecture to monitor all the hardware devices of the telescopes. Using code generation, we automatically built from the ASTRI Interface Control Documents a set of communication libraries and extensive graphical user interfaces that provide full access to the capabilities offered by the telescope hardware subsystems for testing and maintenance. Leveraging these generated libraries and components, we then implemented a human-designed, integrated engineering GUI for MASS to perform the verification of the whole prototype and test shared services such as the alarms, configurations, control systems, and scientific on-line outcomes. In our experience, the use of code generation dramatically reduced the effort in development, integration and testing of the more basic software components and resulted in a fast software release life cycle. This approach could be valuable for the whole CTA project, which is characterized by a large diversity of hardware components.
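As a rough sketch of the code-generation idea (the interface description format and the generated client API below are invented for illustration and are not the ASTRI interface definitions):

```python
# Sketch of generating monitor-and-control stubs from an interface description,
# in the spirit of building communication libraries from ICDs. The ICD layout
# and generated API here are hypothetical.

ICD = {
    "device": "MountController",
    "commands": ["park", "track", "stop"],
    "monitor_points": ["azimuth_deg", "elevation_deg"],
}

def generate_stub(icd):
    """Emit Python source for a client class with one method per ICD entry."""
    lines = [f"class {icd['device']}Client:"]
    for cmd in icd["commands"]:
        lines += [f"    def {cmd}(self):", f"        print('sending {cmd}')"]
    for mp in icd["monitor_points"]:
        lines += [f"    def get_{mp}(self):", "        return 0.0  # stub value"]
    return "\n".join(lines)

source = generate_stub(ICD)
namespace = {}
exec(source, namespace)               # compile the generated class on the fly
client = namespace["MountControllerClient"]()
client.park()
print(client.get_azimuth_deg())
```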
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
The ALMA software development cycle includes well-defined testing stages that involve developers, testers and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
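As a concrete, self-contained illustration of the BDD style described above, and not the actual TELCAL feature files or test framework, the Python sketch below maps natural-language scenario steps to step implementations through a small registry; the step wording and the toy calibration check are invented for the example.

# Minimal sketch of BDD-style scenario execution (hypothetical steps, not the real TELCAL suite).
import re

STEPS = {}

def step(pattern):
    """Register a step implementation keyed by a natural-language pattern."""
    def register(func):
        STEPS[pattern] = func
        return func
    return register

@step(r"a calibration scan with (\d+) antennas")
def given_scan(ctx, n_antennas):
    ctx["antennas"] = int(n_antennas)

@step(r"the phase calibration is computed")
def when_computed(ctx):
    # stand-in for the real computation: one solution per antenna
    ctx["solutions"] = [0.0] * ctx["antennas"]

@step(r"one solution per antenna is produced")
def then_check(ctx):
    assert len(ctx["solutions"]) == ctx["antennas"]

SCENARIO = [
    "a calibration scan with 4 antennas",
    "the phase calibration is computed",
    "one solution per antenna is produced",
]

def run(scenario):
    """Execute each natural-language step against the registered implementations."""
    ctx = {}
    for line in scenario:
        for pattern, func in STEPS.items():
            match = re.fullmatch(pattern, line)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise LookupError(f"no step matches: {line}")
    print("scenario passed")

if __name__ == "__main__":
    run(SCENARIO)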
Assurance of Complex Electronics. What Path Do We Take?
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Many of the methods used to develop software bear a close resemblance to Complex Electronics (CE) development. CE are now programmed to perform tasks that were previously handled in software, such as communication protocols. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of "software-like" bugs such as incorrect design, faulty logic, and unexpected interactions within the logic is great. Since CE devices are blurring the hardware/software boundary, we propose that mature software methodologies may be utilized, with slight modifications, to develop these devices. By using standardized S/W Engineering methods such as checklists, missing requirements and "bugs" can be detected earlier in the development cycle, creating a development process for CE that is easily maintained and configurable based on the device used.
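The abstract points to checklists as a way to catch missing requirements early. As a rough, hypothetical illustration of that idea (the requirement and test identifiers below are invented, not NASA artifacts), this short Python sketch flags requirements that no verification activity covers.

# Illustrative checklist-style traceability check: hypothetical requirement and test
# identifiers, flagging requirements with no verification evidence early in the cycle.

REQUIREMENTS = {
    "REQ-001": "UART protocol framing",
    "REQ-002": "Watchdog reset behaviour",
    "REQ-003": "SOC boot sequence",
}
VERIFICATION = {  # which requirements each planned test claims to cover
    "TEST-11": ["REQ-001"],
    "TEST-12": ["REQ-001", "REQ-003"],
}

def uncovered_requirements(requirements, verification):
    """Return requirement IDs that appear in no test's coverage list."""
    covered = {req for reqs in verification.values() for req in reqs}
    return sorted(set(requirements) - covered)

if __name__ == "__main__":
    for req in uncovered_requirements(REQUIREMENTS, VERIFICATION):
        print(f"MISSING COVERAGE: {req} ({REQUIREMENTS[req]})")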
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool suite adheres to the model-driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. It is composed of three tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement conducted automatically. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio; it integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
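The abstract does not spell out the performance analysis models MDAPerf derives. As a loose illustration of the kind of early-life-cycle prediction a capacity planning tool aims at, and not Revel8or's own models, the sketch below approximates each tier of a multi-tier design as an M/M/1 queue using guessed, design-time service demands.

# Rough sketch of a design-time capacity estimate (hypothetical parameters, not Revel8or's models).
# Each tier is approximated as an M/M/1 queue: R = S / (1 - U), with utilization U = arrival_rate * S.

def tier_response_time(arrival_rate_rps: float, service_time_s: float) -> float:
    utilization = arrival_rate_rps * service_time_s
    if utilization >= 1.0:
        raise ValueError("tier is saturated; add capacity")
    return service_time_s / (1.0 - utilization)

def end_to_end_response(arrival_rate_rps: float, tiers: dict) -> float:
    """Sum per-tier response times for a request that flows through every tier once."""
    return sum(tier_response_time(arrival_rate_rps, s) for s in tiers.values())

if __name__ == "__main__":
    # Service demands (seconds per request) guessed from a design-level model.
    tiers = {"web": 0.002, "app": 0.010, "db": 0.015}
    for rate in (10, 30, 50):
        print(f"{rate:3d} req/s -> {end_to_end_response(rate, tiers) * 1000:.1f} ms")

Even a crude model like this exposes which tier saturates first as the arrival rate grows, which is the question a capacity planner needs answered before any code exists.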
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo
2016-12-13
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing the costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies: the prediction of a material's forming limit under hot stamping conditions, and tool life prediction under multi-cycle loading conditions.
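The paper's KBC-FE formulations are not given in the abstract. The conceptual Python sketch below only illustrates the division of labour it describes, in which FE results are exported each cycle to a separate "cloud" module that evaluates an advanced model; the element data, the damage-style forming-limit law and its constants are hypothetical placeholders.

# Conceptual sketch of the KBC-FE split described above: FE results are exported each cycle
# and an advanced model runs as a separate "cloud" module. The damage-style law and its
# constants are invented placeholders, not the authors' formulations.
from dataclasses import dataclass

@dataclass
class ElementState:      # data exported from the conventional FE solver
    element_id: int
    strain: float
    strain_rate: float   # 1/s
    temperature: float   # deg C

def cloud_forming_limit_module(state: ElementState) -> float:
    """Stand-in for a cloud-hosted advanced model: returns a damage-like indicator (>= 1 is critical)."""
    # Illustrative only: the limit strain shrinks with strain rate and grows with temperature.
    limit_strain = 0.30 + 0.001 * (state.temperature - 400) - 0.02 * state.strain_rate
    return state.strain / max(limit_strain, 1e-6)

def evaluate_cycle(states):
    """One KBC-FE post-processing cycle: flag elements predicted to exceed the forming limit."""
    return [s.element_id for s in states if cloud_forming_limit_module(s) >= 1.0]

if __name__ == "__main__":
    exported = [
        ElementState(1, 0.18, 1.0, 450),
        ElementState(2, 0.34, 5.0, 420),
    ]
    print("critical elements:", evaluate_cycle(exported))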
Shortening tobacco life cycle accelerates functional gene identification in genomic research.
Ning, G; Xiao, X; Lv, H; Li, X; Zuo, Y; Bao, M
2012-11-01
Definitive allocation of gene function requires the introduction of genetic mutations and analysis of their phenotypic consequences. Novel, rapid and convenient techniques or materials are very important and useful for accelerating gene identification in functional genomics research. Here, over-expression of PmFT (Prunus mume), a novel FT orthologue, and PtFT (Populus tremula) led to shortening of the tobacco life cycle. A series of novel, stable short life cycle tobacco lines (30-50 days) was developed through repeated self-crossing selection breeding. Based on a second transformation with a gusA reporter gene, the promoter from BpFULL1 in silver birch (Betula pendula) and the CPC gene from Arabidopsis thaliana were effectively tested using the short life cycle tobacco lines. Comparative analysis among the wild type, short life cycle tobacco and the Arabidopsis transformation system verified that shortening the life cycle of the host plant material is a viable option for accelerating functional gene studies, at least in these short life cycle tobacco lines. The results verified that the novel short life cycle transgenic tobacco lines not only combine the advantages of economical nursery requirements and a simple transformation system, but also provide a robust, effective and stable host system to accelerate gene analysis. Thus, the strategy of shortening the tobacco life cycle is feasible for accelerating heterologous or homologous functional gene identification in genomic research. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.
Software Process Assurance for Complex Electronics (SPACE)
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Complex Electronics (CE) are now programmed to perform tasks that were previously handled in software, such as communication protocols. Many of the methods used to develop software bear a close resemblance to CE development. For instance, Field Programmable Gate Arrays (FPGAs) can have over a million logic gates, while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, faulty logic, and unexpected interactions within the logic is great. Since CE devices are blurring the hardware/software boundary, we propose that mature software methodologies may be utilized, with slight modifications, in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that looks at using standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques can be used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that will be more easily maintained, consistent and configurable based on the device used.