Sample records for software methodology operating

  1. A proven approach for more effective software development and maintenance

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose; Hall, Dana; Sinclair, Craig

    1994-01-01

    Modern space flight mission operations and associated ground data systems are increasingly dependent upon reliable, quality software. Critical functions such as command load preparation, health and status monitoring, communications link scheduling and conflict resolution, and transparent gateway protocol conversion are routinely performed by software. Given budget constraints and the ever increasing capabilities of processor technology, the next generation of control centers and data systems will be even more dependent upon software across all aspects of performance. A key challenge now is to implement improved engineering, management, and assurance processes for the development and maintenance of that software; processes that cost less, yield higher quality products, and self-correct for continual improvement. The NASA Goddard Space Flight Center has a unique experience base that can be readily tapped to help solve the software challenge. Over the past eighteen years, the Software Engineering Laboratory within the Code 500 Flight Dynamics Division has evolved a software development and maintenance methodology that accommodates the unique characteristics of an organization while optimizing and continually improving the organization's software capabilities. This methodology relies upon measurement, analysis, and feedback, analogous to a control loop system. It is an approach with a time-tested track record, proven through repeated applications across a broad range of operational software development and maintenance projects. This paper describes the software improvement methodology employed by the Software Engineering Laboratory and how it has been exploited within the Flight Dynamics Division of GSFC Code 500. Examples of specific improvements in the software itself and in its processes are presented to illustrate the effectiveness of the methodology. Finally, the initial findings are given from applying this methodology across the mission operations and ground data systems software domains throughout Code 500.

  2. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.
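
    To make the reward analysis concrete, the sketch below evaluates a two-state error/recovery model in steady state; the rates and reward levels are hypothetical stand-ins, not figures from the paper.

    ```python
    # Minimal sketch of reward analysis on a two-state error/recovery model
    # (hypothetical rates and reward levels; the paper's measurement-based
    # models are far more detailed).
    error_rate = 1.0 / 500.0    # errors per hour of operation (assumed)
    recovery_rate = 1.0 / 0.25  # recoveries per hour, i.e. 15 min MTTR (assumed)

    # Steady-state probabilities of the normal and error/recovery states.
    p_error = error_rate / (error_rate + recovery_rate)
    p_normal = 1.0 - p_error

    # Reward structure: full service when normal, degraded during recovery.
    reward_normal, reward_recovery = 1.0, 0.3  # assumed service levels

    expected_reward = p_normal * reward_normal + p_error * reward_recovery
    print(f"expected loss of service: {1.0 - expected_reward:.6f}")
    ```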

  3. Cassini's Test Methodology for Flight Software Verification and Operations

    NASA Technical Reports Server (NTRS)

    Wang, Eric; Brown, Jay

    2007-01-01

    The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft comprises various subsystems, including the Attitude and Articulation Control Subsystem (AACS). Development of the AACS Flight Software (FSW) has been an ongoing effort spanning design, development, and, finally, operations. As planned, major modifications to certain FSW functions were designed, tested, verified, and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).

  4. COTS-based OO-component approach for software inter-operability and reuse (software systems engineering methodology)

    NASA Technical Reports Server (NTRS)

    Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.

    2000-01-01

    The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based object-oriented component approach to open, interoperable software development and software reuse.

  5. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture. A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are kept consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
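
    A minimal sketch of the kinds of artifacts such a database holds (state variables, models of their evolution, and goal-based constraints) follows; the class names and fields are assumptions for illustration, not the tool's actual schema.

    ```python
    # Illustrative sketch of State Analysis artifacts (assumed schema, not
    # the tool's actual data model).
    from dataclasses import dataclass

    @dataclass
    class StateVariable:
        name: str          # e.g. "battery_state_of_charge"
        description: str

    @dataclass
    class Model:
        name: str
        describes: StateVariable   # the state whose evolution is modeled
        affected_by: list          # other states that influence it

    @dataclass
    class Goal:
        state: StateVariable       # the state the goal constrains
        constraint: str            # e.g. "value > 0.4 throughout eclipse"

    # Requirements, models, and goal-based plans all live in one database,
    # so the plan below stays traceable to the model it depends on.
    soc = StateVariable("battery_state_of_charge", "fraction of full charge")
    load = StateVariable("bus_power_load", "total spacecraft load, W")
    power_model = Model("power_balance", describes=soc, affected_by=[load])
    plan = [Goal(soc, "value > 0.4 throughout eclipse")]
    print(power_model, plan)
    ```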

  6. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  7. The mission events graphic generator software: A small tool with big results

    NASA Technical Reports Server (NTRS)

    Lupisella, Mark; Leibee, Jack; Scaffidi, Charles

    1993-01-01

    Utilization of graphics has long been a useful methodology for many aspects of spacecraft operations. A personal computer based software tool that implements straightforward graphics and greatly enhances spacecraft operations is presented. This unique software tool is the Mission Events Graphic Generator (MEGG) software, which is used in support of the Hubble Space Telescope (HST) Project. MEGG reads the HST mission schedule and generates a graphical timeline.

  8. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process, which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that a vicious circle is operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC break out of this cycle.
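
    The quantitative core of any such cost-benefit stage can be illustrated with a standard discounted-cash-flow calculation; the cash flows below are invented, and the proposed six-stage methodology covers much more than this single computation.

    ```python
    # Net-present-value sketch for a software investment (illustrative
    # numbers only; the proposed methodology also addresses multiple
    # stakeholder perspectives and estimate reliability).
    def npv(cash_flows, discount_rate):
        """cash_flows[t] is the net benefit in year t (t = 0 is today)."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows))

    # Year 0: purchase + integration cost; years 1-4: assumed net savings.
    flows = [-250_000, 60_000, 90_000, 110_000, 110_000]
    print(f"NPV at 8%: ${npv(flows, 0.08):,.0f}")
    ```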

  9. Development of a support software system for real-time HAL/S applications

    NASA Technical Reports Server (NTRS)

    Smith, R. S.

    1984-01-01

    Methodologies employed in defining and implementing a software support system for the HAL/S computer language for real-time operations on the Shuttle are detailed. Attention is also given to the management and validation techniques used during software development and software maintenance. Utilities developed to support the real-time operating conditions are described. With the support system being produced on Cyber computers and executable code then processed through Cyber or PDP machines, the support system has production-level status and can serve as a model for other software development projects.

  10. Advanced software development workstation: Object-oriented methodologies and applications for flight planning and mission operations

    NASA Technical Reports Server (NTRS)

    Izygon, Michel

    1993-01-01

    This report summarizes the work accomplished during the past nine months to help three different organizations involved in Flight Planning and Mission Operations systems transition to Object-Oriented Technology by adopting one of the most widely used Object-Oriented Analysis and Design methodologies.

  11. Application and systems software in Ada: Development experiences

    NASA Technical Reports Server (NTRS)

    Kuschill, Jim

    1986-01-01

    In its most basic sense, software development involves describing the tasks to be solved, including the given objects and the operations to be performed on those objects. Unfortunately, the way people describe objects and operations usually bears little resemblance to source code in most contemporary computer languages. There are two ways around this problem. One is to allow users to describe what they want the computer to do in everyday, typically imprecise English. The PRODOC methodology and software development environment are based on a second, more flexible, and possibly even easier-to-use approach. Rather than hiding program structure, PRODOC represents such structure graphically using visual programming techniques. In addition, the program terminology used in PRODOC may be customized to match the way human experts in any given application area naturally describe the relevant data and operations. The PRODOC methodology is described in detail.

  12. A Methodological Framework for Enterprise Information System Requirements Derivation

    NASA Astrophysics Data System (ADS)

    Caplinskas, Albertas; Paškevičiūtė, Lina

    Current information systems (IS) are enterprise-wide systems supporting the strategic goals of the enterprise and meeting its operational business needs. They are supported by information and communication technologies (ICT) and other software that should be fully integrated. To develop software responding to real business needs, we need a requirements engineering (RE) methodology that ensures the alignment of requirements across all levels of the enterprise system. The main contribution of this chapter is a requirement-oriented methodological framework that allows business requirements to be transformed, level by level, into software requirements. The structure of the proposed framework reflects the structure of Zachman's framework. However, it has other intentions and is intended to support not design but RE issues.

  13. Corridor-based forecasts of work-zone impacts for freeways.

    DOT National Transportation Integrated Search

    2011-08-09

    This project developed an analysis methodology and associated software implementation for the evaluation of significant work zone impacts on freeways in North Carolina. The FREEVAL-WZ software tool allows the analyst to predict the operational im...

  14. Ensemble: an Architecture for Mission-Operations Software

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Powell, Mark; Fox, Jason; Rabe, Kenneth; Shu, IHsiang; McCurdy, Michael; Vera, Alonso

    2008-01-01

    Ensemble is the name of an open architecture for, and a methodology for the development of, spacecraft mission operations software. Ensemble is also potentially applicable to the development of non-spacecraft mission-operations-type software. Ensemble capitalizes on the strengths of the open-source Eclipse software and its architecture to address several issues that have arisen repeatedly in the development of mission-operations software: Heretofore, mission-operations application programs have been developed in disparate programming environments and integrated during the final stages of development of missions. The programs have been poorly integrated, and it has been costly to develop, test, and deploy them. Users of each program have been forced to interact with several different graphical user interfaces (GUIs). Also, the strategy typically used in integrating the programs has yielded serial chains of operational software tools of such a nature that during use of a given tool, it has not been possible to gain access to the capabilities afforded by other tools. In contrast, the Ensemble approach offers a low-risk path towards tighter integration of mission-operations software tools.

  15. The Holistic Targeting (HOT) Methodology as the Means to Improve Information Operations (IO) Target Development and Prioritization

    DTIC Science & Technology

    2008-09-01

    The use of Compendium software facilitates targeting problem understanding, and the network analysis tool Palantir provides an efficient and tailored semi-automated means of HOT target prioritization and development.

  16. Building quality into medical product software design.

    PubMed

    Mallory, S R

    1993-01-01

    The software engineering and quality assurance disciplines are a requisite to the design of safe and effective software-based medical devices. It is in the areas of software methodology and process that the most beneficial application of these disciplines to software development can be made. Software is a product of complex operations and methodologies and is not amenable to the traditional electromechanical quality assurance processes. Software quality must be built in by the developers, with the software verification and validation engineers acting as the independent instruments for ensuring compliance with performance objectives and with development and maintenance standards. The implementation of a software quality assurance program is a complex process involving management support, organizational changes, and new skill sets, but the benefits are profound. Its rewards provide safe, reliable, cost-effective, maintainable, and manageable software, which may significantly speed the regulatory review process and therefore potentially shorten the overall time to market. The use of a trial project can greatly facilitate the learning process associated with the first-time application of a software quality assurance program.

  17. Track train dynamics analysis and test program: Methodology development for the derailment safety analysis of six-axle locomotives

    NASA Technical Reports Server (NTRS)

    Marcotte, P. P.; Mathewson, K. J. R.

    1982-01-01

    The operational safety of six-axle locomotives is analyzed. A locomotive model with corresponding data on suspension characteristics, a method of track defect characterization, and a method of characterizing operational safety are used. A user oriented software package was developed as part of the methodology and was used to study the effect (on operational safety) of various locomotive parameters and operational conditions such as speed, tractive effort, and track curvature. The operational safety of three different locomotive designs was investigated.

  18. A software engineering approach to expert system design and verification

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.; Goodwin, Mary Ann

    1988-01-01

    Software engineering design and verification methods for developing expert systems are not yet well defined. Integration of expert system technology into software production environments will require effective software engineering methodologies to support the entire life cycle of expert systems. The software engineering methods used to design and verify an expert system, RENEX, are discussed. RENEX demonstrates autonomous rendezvous and proximity operations, including replanning trajectory events and subsystem fault detection, onboard a space vehicle during flight. The RENEX designers utilized a number of software engineering methodologies to deal with the complex problems inherent in this system. An overview is presented of the methods utilized. Details of the verification process receive special emphasis. The benefits and weaknesses of the methods for supporting the development life cycle of expert systems are evaluated, and recommendations are made based on the overall experiences with the methods.

  19. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology in the form of environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, a framework programmable platform for defining process and software development workflow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and in the form of methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.

  20. Simulation of Attacks for Security in Wireless Sensor Network.

    PubMed

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
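
    The attacker taxonomy and the power/execution-time impact analysis suggest a simulation skeleton along the following lines; the attack names, effects, and cost constants are placeholders, not the authors' model.

    ```python
    # Skeleton of attack-aware node analysis (attacker types and all numbers
    # are placeholders; the paper's virtual platform models node hardware,
    # embedded software, and the wireless channel in far more detail).
    ATTACK_EFFECTS = {
        "none":      {"extra_rx": 0,   "extra_cpu_ms": 0},
        "jamming":   {"extra_rx": 40,  "extra_cpu_ms": 5},   # retransmissions
        "flooding":  {"extra_rx": 200, "extra_cpu_ms": 30},  # bogus packets
        "tampering": {"extra_rx": 0,   "extra_cpu_ms": 80},  # integrity rechecks
    }

    def simulate_node(attack, hours=24, rx_per_hour=60):
        """Return (cpu_ms, energy_mJ) for one node under a given attack."""
        eff = ATTACK_EFFECTS[attack]
        rx = (rx_per_hour + eff["extra_rx"]) * hours
        cpu_ms = rx * 2 + eff["extra_cpu_ms"] * hours   # assumed per-packet cost
        energy_mj = rx * 0.9 + cpu_ms * 0.05            # assumed radio/CPU costs
        return cpu_ms, energy_mj

    for attack in ATTACK_EFFECTS:
        cpu, energy = simulate_node(attack)
        print(f"{attack:9s} cpu={cpu:7d} ms  energy={energy:9.1f} mJ")
    ```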

  1. Risk Assessment Methodology for Software Supportability (RAMSS): Guidelines for Adapting Software Supportability Evaluations

    DTIC Science & Technology

    1986-04-14

    (OCR fragment of a program life-cycle chart: concept definition, development/test, operation and maintenance; track projected programs, review critical issues, prepare inputs to PMO.) ...development and beyond, evaluation criteria must include quantitative goals (the desired value) and thresholds (the value beyond which the charac...

  2. Towards a New Paradigm of Software Development: an Ambassador Driven Process in Distributed Software Companies

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    The globalization of companies' operations and competition between software vendors demand improved quality of delivered software and decreased overall cost. The same factors also introduce many problems into the software development process, as they produce distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position, increasing its productivity in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.

  3. Simulation of Attacks for Security in Wireless Sensor Network

    PubMed Central

    Diaz, Alvaro; Sanchez, Pablo

    2016-01-01

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710

  4. The FoReVer Methodology: A MBSE Framework for Formal Verification

    NASA Astrophysics Data System (ADS)

    Baracchi, Laura; Mazzini, Silvia; Cimatti, Alessandro; Tonetta, Stefano; Garcia, Gerald

    2013-08-01

    The need for a high level of confidence and operational integrity in critical space (software) systems is well recognized in the space industry and has been addressed so far through rigorous system and software development processes and stringent verification and validation regimes. The Model Based Space System Engineering process (MBSSE), derived in the System and Software Functional Requirement Techniques study (SSFRT), focused on the application of model-based engineering technologies to support the space system and software development processes, from mission-level requirements to software implementation, through model refinements and translations. In this paper we report on our work in the ESA-funded FoReVer project, in which we aim to develop methodological, theoretical, and technological support for a systematic approach to space avionics system development in phases 0/A/B/C. FoReVer enriches the MBSSE process with contract-based formal verification of properties, at different stages from system to software, through a step-wise refinement approach, with support for a Software Reference Architecture.

  5. Adaptation of a software development methodology to the implementation of a large-scale data acquisition and control system. [for Deep Space Network

    NASA Technical Reports Server (NTRS)

    Madrid, G. A.; Westmoreland, P. T.

    1983-01-01

    A progress report is presented on a program to upgrade the existing NASA Deep Space Network with a redesigned computer-controlled data acquisition system for channelling tracking, telemetry, and command data between a California-based control center and three signal processing centers in Australia, California, and Spain. The methodology for the improvements is oriented towards single-subsystem development with consideration for a multi-system and multi-subsystem network of operational software. Details of the existing hardware configurations and data transmission links are provided. The program methodology includes data flow design, interface design and coordination, incremental capability availability, increased inter-subsystem developmental synthesis and testing, system and network level synthesis and testing, and system verification and validation. The software has been implemented thus far to a 65 percent completion level, and the methodology being used to effect the changes, which will permit enhanced tracking of and communication with spacecraft, has proven to feature effective techniques.

  6. A Common Interface Real-Time Multiprocessor Operating System for Embedded Systems

    DTIC Science & Technology

    1991-03-04

    According to Pressman, a design methodology should show hierarchical organization, lead to modules exhibiting independent functional characteristics, and be derived...

  7. An Introduction to Flight Software Development: FSW Today, FSW 2010

    NASA Technical Reports Server (NTRS)

    Gouvela, John

    2004-01-01

    Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects, including the Cockpit Avionics Upgrade, are applied to the projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high quality software. It proposes what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, and modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off-The-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design and iterative development using independent components, as well as rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels, guided by a Mission-Systems Architecture framework with focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software, and processes will itself be subject to technological change and will need management of change and improvement.

  8. Enhanced methods for determining operational capabilities and support costs of proposed space systems

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles

    1993-01-01

    This report documents the work accomplished during the first two years of research to provide support to NASA in predicting operational and support parameters and costs of proposed space systems. The first year's research developed a methodology for deriving reliability and maintainability (R & M) parameters based upon the use of regression analysis to establish empirical relationships between performance and design specifications and corresponding mean times of failure and repair. The second year focused on enhancements to the methodology, increased scope of the model, and software improvements. This follow-on effort expands the prediction of R & M parameters and their effect on the operations and support of space transportation vehicles to include other system components such as booster rockets and external fuel tanks. It also increases the scope of the methodology and the capabilities of the model as implemented by the software. The focus is on the failure and repair of major subsystems and their impact on vehicle reliability, turn times, maintenance manpower, and repairable spares requirements. The report documents the data utilized in this study, outlines the general methodology for estimating and relating R&M parameters, presents the analyses and results of application to the initial data base, and describes the implementation of the methodology through the use of a computer model. The report concludes with a discussion on validation and a summary of the research findings and results.
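
    The central statistical step, fitting empirical relationships between design specifications and mean times to failure, can be sketched as an ordinary least-squares regression; the predictors and data below are fabricated for illustration.

    ```python
    # Least-squares sketch of the R&M estimating relationships (fabricated
    # data; the study fits such regressions to performance and design
    # specifications of actual subsystems).
    import numpy as np

    # Columns: weight (klb), power (kW), parts count (thousands) -- assumed.
    X = np.array([[12.0, 3.1, 4.2],
                  [18.5, 4.0, 6.8],
                  [ 9.3, 2.2, 3.1],
                  [22.1, 5.5, 8.9],
                  [15.0, 3.6, 5.5]])
    mtbf = np.array([410.0, 280.0, 520.0, 190.0, 330.0])  # hours, assumed

    # Log-linear model: ln(MTBF) = b0 + b1*x1 + b2*x2 + b3*x3
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, np.log(mtbf), rcond=None)

    new_design = np.array([1.0, 14.0, 3.0, 5.0])  # 1 + specs of a new design
    print(f"predicted MTBF: {np.exp(new_design @ coef):.0f} h")
    ```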

  9. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  10. ActiveTutor: Towards More Adaptive Features in an E-Learning Framework

    ERIC Educational Resources Information Center

    Fournier, Jean-Pierre; Sansonnet, Jean-Paul

    2008-01-01

    Purpose: This paper aims to sketch the emerging notion of auto-adaptive software when applied to e-learning software. Design/methodology/approach: The study and the implementation of the auto-adaptive architecture are based on the operational framework "ActiveTutor" that is used for teaching the topic of computer science programming in first-grade…

  11. Integrated fiducial sample mount and software for correlated microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy R McJunkin; Jill R. Scott; Tammy L. Trowbridge

    2014-02-01

    A novel design sample mount with integrated fiducials and software for assisting operators in easily and efficiently locating points of interest established in previous analytical sessions is described. The sample holder and software were evaluated with experiments to demonstrate the utility and ease of finding the same points of interest in two different microscopy instruments. Also, numerical analysis of expected errors in determining the same position with errors unbiased by a human operator was performed. Based on the results, issues related to acquiring reproducibility and best practices for using the sample mount and software were identified. Overall, the sample mount methodology allows data to be efficiently and easily collected on different instruments for the same sample location.
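
    Relocating a point of interest on a second instrument amounts to estimating a rigid transform from the fiducial coordinates measured on each stage. A minimal least-squares (Kabsch-style) sketch follows, with made-up coordinates; the published mount and software are more involved.

    ```python
    # Rigid-transform sketch for correlated microscopy (made-up fiducial
    # coordinates; not the actual mount geometry or software).
    import numpy as np

    def fit_rigid(src, dst):
        """Least-squares rotation R and translation t with dst ~ R@p + t."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    # Fiducial positions as measured on instrument A and instrument B (mm).
    fid_a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    fid_b = np.array([[2.1, 1.0], [11.9, 2.4], [0.7, 10.8]])

    R, t = fit_rigid(fid_a, fid_b)
    point_a = np.array([4.0, 5.0])   # point of interest found on A
    print("same point on instrument B:", R @ point_a + t)
    ```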

  12. Evolution of the Scope and Capabilities of Uplink Support Software for Mars Surface Operations

    NASA Technical Reports Server (NTRS)

    Pack, Marc; Laubach, Sharon

    2014-01-01

    In January of 2004 both of the Mars Exploration Rover spacecraft landed safely, initiating daily surface operations at the Jet Propulsion Laboratory for what was anticipated to be approximately three months of mobile exploration. The longevity of this mission, still ongoing after ten years, has provided not only a tremendous return of scientific data but also the opportunity to refine and improve the methodology by which robotic Mars surface missions are commanded. Since the landing of the Mars Science Laboratory spacecraft in August of 2012, this methodology has been successfully applied to operate a Martian rover which is both similar to, and quite different from, its predecessors. For MER and MSL, daily uplink operations can be most broadly viewed as converting the combined interests of both the science and engineering teams into a spacecraft-safe set of transmittable command files. In order to accomplish these ends, a discrete set of mission-critical software tools was developed which not only allowed for conformance to established JPL standards and practices but also enabled innovative technologies specific to each mission. Although these primary programs provided the requisite capabilities for meeting the high-level goals of each distinct phase of the uplink process, there was little in the way of secondary software to support the smooth flow of data from one phase to the next. In order to address this shortcoming, a suite of small software tools was developed to aid in phase transitions, as well as to automate some of the more laborious and error-prone aspects of uplink operations. This paper describes the evolution of this software suite, from its initial attempts to merely shorten the duration of the operator's shift, to its current role as an indispensable tool enforcing workflow of the uplink operations process and agilely responding to the new and unexpected challenges of missions which can, and have, lasted many years longer than originally anticipated.

  13. Cassini Attitude Control Flight Software: from Development to In-Flight Operation

    NASA Technical Reports Server (NTRS)

    Brown, Jay

    2008-01-01

    The Cassini Attitude and Articulation Control Subsystem (AACS) Flight Software (FSW) has achieved its intended design goals by successfully guiding and controlling the Cassini-Huygens planetary mission to Saturn and its moons. This paper gives an overview of AACS FSW details from early design, development, implementation, and test to its fruition: operating and maintaining spacecraft control over an eleven-year prime mission. Starting from the phases of FSW development, topics expand to FSW development methodology and achievements utilizing in-flight autonomy, and the paper summarizes lessons learned during flight operations that can be useful to FSW in current and future spacecraft missions.

  14. Software System Architecture Modeling Methodology for Naval Gun Weapon Systems

    DTIC Science & Technology

    2010-12-01

    (Excerpt from the report's acronym glossary: HAR, Hazard Action Report; HERO, Hazards of Electromagnetic Radiation to Ordnance; IOC, Initial Operational Capability; NDI, Non-Development Item; OPEVAL, Operational Evaluation; ORDALTS, Ordnance Alterations; O&SHA, Operating and Support Hazard Analysis; PDA...) ...hazards of electromagnetic radiation to ordnance, and combinations therein; equipment, systems, or procedures and processes whose malfunction would hazard the safe manufacturing...

  15. 77 FR 39521 - Applications and Amendments to Facility Operating Licenses and Combined Licenses Involving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-03

    ... Methodology for Boiling Water Reactors, June 2011. To support use of Topical Report ANP-10307PA, Revision 0... the NRC's E-Filing system does not support unlisted software, and the NRC Meta System Help Desk will... Water Reactors with AREVA Topical Report ANP-10307PA, Revision 0, "AREVA MCPR Safety Limit Methodology...

  16. Software for imaging phase-shift interference microscope

    NASA Astrophysics Data System (ADS)

    Malinovski, I.; França, R. S.; Couceiro, I. B.

    2018-03-01

    In recent years, an absolute interference microscope was created at the National Metrology Institute of Brazil (INMETRO). By principle of operation, the instrument is an imaging phase-shifting interferometer (PSI) equipped with two stabilized lasers of different colour as traceable reference wavelength sources. We report here some progress in the development of the software for this instrument. The status of ongoing internal validation and verification of the software is also reported. In contrast with the standard PSI method, a different methodology of phase evaluation is applied. Therefore, instrument-specific procedures for software validation and verification are adapted and discussed.

  17. FTDD973: A multimedia knowledge-based system and methodology for operator training and diagnostics

    NASA Technical Reports Server (NTRS)

    Hekmatpour, Amir; Brown, Gary; Brault, Randy; Bowen, Greg

    1993-01-01

    FTDD973 (973 Fabricator Training, Documentation, and Diagnostics) is an interactive multimedia knowledge-based system and methodology for computer-aided training and certification of operators, as well as tool and process diagnostics, in IBM's CMOS SGP fabrication line (building 973). FTDD973 is an example of what can be achieved with modern multimedia workstations. Knowledge-based systems, hypertext, hypergraphics, high resolution images, audio, motion video, and animation are technologies that in synergy can be far more useful than each by itself. FTDD973's modular and object-oriented architecture is also an example of how improvements in software engineering are finally making it possible to combine many software modules into one application. FTDD973 is developed in ExperMedia/2, an OS/2 multimedia expert system shell for domain experts.

  18. A Methodology and Implementation for Annotating Digital Images for Context-appropriate Use in an Academic Health Care Environment

    PubMed Central

    Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.

    2004-01-01

    Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971
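
    The central design choice, annotations represented as structured XML and stored apart from the image, can be sketched as follows; the element and attribute names are invented, not the authors' consensus schema.

    ```python
    # Sketch of an annotation stored separately from the image (invented
    # element names; the paper defines a consensus, multispecialty schema).
    import xml.etree.ElementTree as ET

    annotation = ET.Element("annotation", image="chest_ct_042.png")
    region = ET.SubElement(annotation, "region", shape="ellipse",
                           cx="211", cy="148", rx="30", ry="18")
    label = ET.SubElement(region, "label", context="radiology")
    label.text = "Right upper lobe nodule"

    # The XML travels with, but is not burned into, the source image, so
    # the same image can be re-annotated for a different teaching context.
    ET.ElementTree(annotation).write("chest_ct_042.annotation.xml",
                                     xml_declaration=True, encoding="utf-8")
    ```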

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Lee H.; Laros, James H., III

    This paper describes a methodology for implementing disk-less cluster systems using the Network File System (NFS) that scales to thousands of nodes. This method has been successfully deployed and is currently in use on several production systems at Sandia National Labs. This paper will outline our methodology and implementation, discuss hardware and software considerations in detail and present cluster configurations with performance numbers for various management operations like booting.

  20. Mars Science Laboratory CHIMRA/IC/DRT Flight Software for Sample Acquisition and Processing

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Leger, Chris; Carsten, Joseph; Helmick, Daniel; Kuhn, Stephen; Redick, Richard; Trujillo, Diana

    2013-01-01

    The design methodologies of using sequence diagrams, multi-process functional flow diagrams, and hierarchical state machines were successfully applied in designing three MSL (Mars Science Laboratory) flight software modules responsible for handling actuator motions of the CHIMRA (Collection and Handling for In Situ Martian Rock Analysis), IC (Inlet Covers), and DRT (Dust Removal Tool) mechanisms. The methodologies were essential to specify complex interactions with other modules, support concurrent foreground and background motions, and handle various fault protections. Studying task scenarios with multi-process functional flow diagrams yielded great insight to overall design perspectives. Since the three modules require three different levels of background motion support, the methodologies presented in this paper provide an excellent comparison. All three modules are fully operational in flight.
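
    A minimal sketch of a hierarchical state machine for one actuator-motion module, with a fault-protection superstate that preempts any nested motion state, is shown below; the states and events are illustrative, not the MSL flight design.

    ```python
    # Hierarchical state machine sketch for an actuator-motion module
    # (illustrative states only; not the CHIMRA/IC/DRT flight design).
    class MotionHSM:
        def __init__(self):
            self.superstate = "OPERATIONAL"   # or FAULT_PROTECTION
            self.state = "IDLE"               # substate of OPERATIONAL

        def dispatch(self, event):
            # Fault events are handled at the superstate level, so a fault
            # interrupts any substate (IDLE, MOVING, HOLDING) uniformly.
            if event == "fault_detected":
                self.superstate, self.state = "FAULT_PROTECTION", "SAFING"
            elif self.superstate == "FAULT_PROTECTION":
                if event == "fault_cleared":
                    self.superstate, self.state = "OPERATIONAL", "IDLE"
            elif self.state == "IDLE" and event == "move_cmd":
                self.state = "MOVING"
            elif self.state == "MOVING" and event == "motion_done":
                self.state = "HOLDING"
            elif self.state == "HOLDING" and event == "release_cmd":
                self.state = "IDLE"
            return self.superstate, self.state

    hsm = MotionHSM()
    for ev in ["move_cmd", "fault_detected", "fault_cleared"]:
        print(ev, "->", hsm.dispatch(ev))
    ```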

  1. Software Engineering for Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2014-01-01

    The Spacecraft Software Engineering Branch of NASA Johnson Space Center (JSC) provides world-class products, leadership, and technical expertise in software engineering, processes, technology, and systems management for human spaceflight. The branch contributes to major NASA programs (e.g. ISS, MPCV/Orion) with in-house software development and prime contractor oversight, and maintains the JSC Engineering Directorate CMMI rating for flight software development. Software engineering teams work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements. They seek to infuse automation and autonomy into missions, and apply new technologies to flight processor and computational architectures. This presentation will provide an overview of key software-related projects, software methodologies and tools, and technology pursuits of interest to the JSC Spacecraft Software Engineering Branch.

  2. The Environment for Application Software Integration and Execution (EASIE) version 1.0. Volume 1: Executive overview

    NASA Technical Reports Server (NTRS)

    Rowell, Lawrence F.; Davis, John S.

    1989-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a methodology and a set of software utility programs to ease the task of coordinating engineering design and analysis codes. EASIE was designed to meet the needs of conceptual design engineers that face the task of integrating many stand-alone engineering analysis programs. Using EASIE, programs are integrated through a relational database management system. Volume 1, Executive Overview, gives an overview of the functions provided by EASIE and describes their use. Three operational design systems based upon the EASIE software are briefly described.

  3. Transitioning Domain Analysis: An Industry Experience.

    DTIC Science & Technology

    1996-06-01

    An academic and industry partnership took feature-oriented domain analysis (FODA) from a methodology that is still being defined to a well-documented... to pilot the use of the Software Engineering Institute (SEI) domain analysis methodology known as feature-oriented domain analysis (FODA). Supported...

  4. Methodology for object-oriented real-time systems analysis and design: Software engineering

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1991-01-01

    Successful application of software engineering methodologies requires an integrated analysis and design life-cycle in which the various phases flow smoothly 'seamlessly' from analysis through design to implementation. Furthermore, different analysis methodologies often lead to different structuring of the system, so that the transition from analysis to design may be awkward depending on the design methodology to be used. This is especially important when object-oriented programming is to be used for implementation while the original specification, and perhaps the high-level design, is non-object-oriented. Two approaches to real-time systems analysis which can lead to an object-oriented design are contrasted: (1) modeling the system using structured analysis with real-time extensions, which emphasizes data and control flows, followed by the abstraction of objects where the operations or methods of the objects correspond to processes in the data flow diagrams, and then design in terms of these objects; and (2) modeling the system from the beginning as a set of naturally occurring concurrent entities (objects), each having its own time-behavior defined by a set of states and state-transition rules, and seamlessly transforming the analysis models into high-level design models. A new concept of a 'real-time systems-analysis object' is introduced as the basic building block of a series of seamlessly connected models that progress from logical object-oriented real-time systems-analysis models through physical architectural models to the high-level design stages. The methodology is appropriate to the overall specification, including hardware and software modules. In software modules, the systems-analysis objects are transformed into software objects.

  5. 75 FR 23808 - Biweekly Notice; Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-04

    ..., but should note that the NRC's E-Filing system does not support unlisted software, and the NRC Meta... support the physical fuel change. These methodologies do not use the total planar radial peaking factor (F... systems performance, operating mode and equipment out of service. The proposed change is supported by GEH...

  6. Expert system verification concerns in an operations environment

    NASA Technical Reports Server (NTRS)

    Goodwin, Mary Ann; Robertson, Charles C.

    1987-01-01

    The Space Shuttle community is currently developing a number of knowledge-based tools, primarily expert systems, to support Space Shuttle operations. It is proposed that anticipating and responding to the requirements of the operations environment will contribute to a rapid and smooth transition of expert systems from development to operations, and that the requirements for verification are critical to this transition. The paper identifies the requirements of expert systems to be used for flight planning and support and compares them to those of existing procedural software used for flight planning and support. It then explores software engineering concepts and methodology that can be used to satisfy these requirements, to aid the transition from development to operations, and to support the operations environment during the lifetime of expert systems. Many of these are similar to those used for procedural software.

  7. Manual on performance of traffic signal systems: assessment of operations and maintenance : [summary].

    DOT National Transportation Integrated Search

    2017-05-01

    In this project, Florida Atlantic University researchers developed a methodology and software tools that allow objective, quantitative analysis of the performance of signal systems. : The researchers surveyed the state of practice for traffic signal ...

  8. Software Design Methodology Migration for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has been developed and has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes. The new software processes still emphasize requirements capture, software configuration management, design documenting, and making sure the products that have been developed are accountable to initial requirements. This paper gives an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we mention the COTS tools that have been integrated into the processes and how the COTS have provided value to the project.

  9. Towards a general object-oriented software development methodology

    NASA Technical Reports Server (NTRS)

    Seidewitz, ED; Stark, Mike

    1986-01-01

    An object is an abstract software model of a problem domain entity. Objects are packages of both data and operations on that data (Goldberg 83, Booch 83). The Ada (tm) package construct is representative of this general notion of an object. Object-oriented design is the technique of using objects as the basic unit of modularity in systems design. The Software Engineering Laboratory at the Goddard Space Flight Center is currently involved in a pilot program to develop a flight dynamics simulator in Ada (approximately 40,000 statements) using object-oriented methods. Several authors have applied object-oriented concepts to Ada (e.g., Booch 83, Cherry 85). It was found that these methodologies are limited. As a result, a more general approach was synthesized which allows a designer to apply powerful object-oriented principles to a wide range of applications and at all stages of design. An overview is provided of this approach. Further, how object-oriented design fits into the overall software life-cycle is considered.

  10. Flight software development for the isothermal dendritic growth experiment

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Winsa, Edward A.; Glicksman, Martin E.

    1989-01-01

    The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.

  11. Flight software development for the isothermal dendritic growth experiment

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Winsa, Edward A.; Glicksman, M. E.

    1990-01-01

    The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.

  12. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of the statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
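
    The PFA idea of propagating parameter and model-accuracy uncertainty through an engineering failure model can be sketched as a small Monte Carlo loop; the fatigue model and every distribution below are placeholders, not the report's calibrated inputs.

    ```python
    # Monte Carlo sketch of probabilistic failure assessment (placeholder
    # fatigue model and distributions; PFA additionally updates such
    # estimates with test and flight experience via its statistical
    # procedures).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Uncertain inputs: stress amplitude and a Basquin-type coefficient.
    stress = rng.normal(300.0, 20.0, n)          # MPa, assumed
    A = rng.lognormal(np.log(3.0e14), 0.5, n)    # strength coeff., assumed
    m = 4.0                                      # fatigue exponent, assumed

    # Cycles-to-failure model N = A * S**(-m), with model-accuracy scatter.
    n_fail = A * stress**(-m) * rng.lognormal(0.0, 0.2, n)

    service_cycles = 2.0e4                       # assumed mission duty cycles
    p_fail = np.mean(n_fail < service_cycles)
    print(f"estimated failure probability: {p_fail:.4f}")
    ```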

  13. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  14. Report of AAPM Task Group 162: Software for planar image quality metrology.

    PubMed

    Samei, Ehsan; Ikejimba, Lynda C; Harrawood, Brian P; Rong, John; Cunningham, Ian A; Flynn, Michael J

    2018-02-01

    The AAPM Task Group 162 aimed to provide a standardized approach for the assessment of image quality in planar imaging systems. This report offers a description of the approach as well as the details of the resultant software bundle to measure detective quantum efficiency (DQE) as well as its basis components and derivatives. The methodology and the associated software include the characterization of the noise power spectrum (NPS) from planar images acquired under specific acquisition conditions, modulation transfer function (MTF) using an edge test object, the DQE, and effective DQE (eDQE). First, a methodological framework is provided to highlight the theoretical basis of the work. Then, a step-by-step guide is included to assist in proper execution of each component of the code. Lastly, an evaluation of the method is included to validate its accuracy against model-based and experimental data. The code was built using a Macintosh OSX operating system. The software package contains all the source codes to permit an experienced user to build the suite on a Linux or other *nix type system. The package further includes manuals and sample images and scripts to demonstrate use of the software for new users. The results of the code are in close alignment with theoretical expectations and published results of experimental data. The methodology and the software package offered in AAPM TG162 can be used as a baseline for characterization of inherent image quality attributes of planar imaging systems. © 2017 American Association of Physicists in Medicine.
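
    For orientation, the standard relationship among the measured quantities can be sketched as DQE(f) = MTF(f)^2 / (q NNPS(f)), with q the incident photon fluence. The sketch below is not the TG162 code, and all numbers are placeholders.

        # Illustrative DQE(f) from a measured MTF and normalized noise
        # power spectrum (NNPS); q is photon fluence per mm^2. Values invented.
        import numpy as np

        def dqe(mtf, nnps, fluence_per_mm2):
            mtf = np.asarray(mtf, dtype=float)
            nnps = np.asarray(nnps, dtype=float)
            return mtf ** 2 / (fluence_per_mm2 * nnps)

        # Placeholder measurements at 0.5, 1.0, 1.5 cycles/mm
        print(dqe([0.90, 0.65, 0.40], [5.4e-6, 4.5e-6, 4.0e-6], 2.5e5))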

  15. Space station operating system study

    NASA Technical Reports Server (NTRS)

    Horn, Albert E.; Harwell, Morris C.

    1988-01-01

    The current phase of the Space Station Operating System study is based on the analysis, evaluation, and comparison of the operating systems implemented on the computer systems and workstations in the software development laboratory. Primary emphasis has been placed on the DEC MicroVMS operating system as implemented on the MicroVax II computer, with comparative analysis of the SUN UNIX system on the SUN 3/260 workstation computer, and to a limited extent, the IBM PC/AT microcomputer running PC-DOS. Some benchmark development and testing was also done for the Motorola MC68010 (VM03 system) before the system was taken from the laboratory. These systems were studied with the objective of determining their capability to support Space Station software development requirements, specifically for multi-tasking and real-time applications. The methodology utilized consisted of development, execution, and analysis of benchmark programs and test software, and the experimentation and analysis of specific features of the system or compilers in the study.
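
    As a flavor of that benchmark-driven methodology (this is not the study's actual test software), a minimal timing harness might look like:

        # Time repeated executions of a workload and report mean and
        # worst-case latency, the kind of data used to judge real-time fit.
        import time

        def run_benchmark(workload, iterations=1000):
            samples = []
            for _ in range(iterations):
                start = time.perf_counter()
                workload()
                samples.append(time.perf_counter() - start)
            return sum(samples) / len(samples), max(samples)

        mean_s, worst_s = run_benchmark(lambda: sum(range(10_000)))
        print(f"mean {mean_s * 1e6:.1f} us, worst {worst_s * 1e6:.1f} us")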

  16. Unique Approach to Threat Analysis Mapping: A Malware Centric Methodology for Better Understanding the Adversary Landscape

    DTIC Science & Technology

    2016-04-05

    CMU/SEI-2016-TR-004, Software Engineering Institute, Carnegie Mellon University. Approved for public release; distribution unlimited. Copyright 2016 Carnegie Mellon University. This material is based upon work funded and supported by the Department of Homeland Security under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute.

  17. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool (PMCT) is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and Macproject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI). In the first prototype of the PMCT, however, the AI components are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  18. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflights systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  19. Analysis of methods of processing of expert information by optimization of administrative decisions

    NASA Astrophysics Data System (ADS)

    Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.

    2018-03-01

    This paper proposes a methodology for defining measures used in the expert estimation of the quality and reliability of application software products. Methods for aggregating expert estimates are described using the example of a collective choice among candidate tool sets for the development of special-purpose software for institutional needs. Results from operating an interactive decision-support system are presented, along with an algorithm for solving the selection task based on the analytic hierarchy process. The developed algorithm can be applied in building expert systems for a wide class of problems that involve multicriteria choice.

  20. Software Development and Test Methodology for a Distributed Ground System

    NASA Technical Reports Server (NTRS)

    Ritter, George; Guillebeau, Pat; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The Marshall Space Flight Center's (MSFC) Payload Operations Center (POC) ground system has evolved over a period of about 10 years. During this time the software processes have migrated from more traditional to more contemporary development processes in an effort to minimize unnecessary overhead while maximizing process benefits. The software processes that have evolved still emphasize requirements capture, software configuration management, design documentation, and making sure the products that have been developed are accountable to the initial requirements. This paper will give an overview of how the software processes have evolved, highlighting the positives as well as the negatives. In addition, we will mention the COTS tools that have been integrated into the processes and how the COTS products have provided value to the project.

  1. Software requirements flow-down and preliminary software design for the G-CLEF spectrograph

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Budynkiewicz, Jamie A.; DePonte Evans, Janet; Miller, Joseph B.; Onyuksel, Cem; Paxson, Charles; Plummer, David A.

    2016-08-01

    The Giant Magellan Telescope (GMT)-Consortium Large Earth Finder (G-CLEF) is a fiber-fed, precision radial velocity (PRV) optical echelle spectrograph that will be the first light instrument on the GMT. The G-CLEF instrument device control subsystem (IDCS) provides software control of the instrument hardware, including the active feedback loops that are required to meet the G-CLEF PRV stability requirements. The IDCS is also tasked with providing operational support packages that include data reduction pipelines and proposal preparation tools. A formal but ultimately pragmatic approach is being used to establish a complete and correct set of requirements for both the G-CLEF device control and operational support packages. The device control packages must integrate tightly with the state-machine driven software and controls reference architecture designed by the GMT Organization. A model-based systems engineering methodology is being used to develop a preliminary design that meets these requirements. Through this process we have identified some lessons that have general applicability to the development of software for ground-based instrumentation. For example, tasking an individual with overall responsibility for science/software/hardware integration is a key step to ensuring effective integration between these elements. An operational concept document that includes detailed routine and non-routine operational sequences should be prepared in parallel with the hardware design process to tie together these elements and identify any gaps. Appropriate time-phasing of the hardware and software design phases is important, but revisions to driving requirements that impact software requirements and preliminary design are inevitable. Such revisions must be carefully managed to ensure efficient use of resources.

  2. Implementing Kanban for agile process management within the ALMA Software Operations Group

    NASA Astrophysics Data System (ADS)

    Reveco, Johnny; Mora, Matias; Shen, Tzu-Chiang; Soto, Ruben; Sepulveda, Jorge; Ibsen, Jorge

    2014-07-01

    After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives to: (1) providing software support to tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Due to their different stakeholders, each of these tasks presents a wide diversity of importance, lifespan, and complexity. Aiming to provide the proper priority and traceability for every task without stressing our engineers, we introduced the Kanban methodology in our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share experiences gained during the implementation of Kanban in our processes, describing the difficulties we have found and the solutions and adaptations that led us to our current, still evolving implementation, which has greatly improved our throughput, prioritization, and problem traceability.
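
    The core Kanban mechanism, limiting work in progress so that work is pulled at the team's throughput rather than pushed by demand, can be sketched as follows; the column names and limits are invented, not ALMA's.

        # A WIP-limited board: a task enters a column only if the column
        # is under its work-in-progress limit.
        class KanbanBoard:
            def __init__(self, wip_limits):
                self.wip_limits = wip_limits                      # column -> limit
                self.columns = {name: [] for name in wip_limits}

            def pull(self, task, column):
                if len(self.columns[column]) >= self.wip_limits[column]:
                    raise RuntimeError(f"WIP limit reached in '{column}'")
                for tasks in self.columns.values():               # leave old column
                    if task in tasks:
                        tasks.remove(task)
                self.columns[column].append(task)

        board = KanbanBoard({"backlog": 50, "in_progress": 3, "review": 2})
        board.pull("automate antenna handover checks", "backlog")
        board.pull("automate antenna handover checks", "in_progress")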

  3. Intelligent Command and Control Systems for Satellite Ground Operations

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1999-01-01

    This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, use of automation to drastically reduce (e.g., 70%) operations costs, are similar to requirements in both the government and private sectors. Initially the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (The Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite operations team. GT-VITA, using the same underlying methodology, the operator function model (OFM), and its computational implementation, OFMspert, successfully taught satellite control knowledge required by flight operations team members. The tutor structured knowledge in three ways: declarative knowledge (e.g., What is this? What does it do?), procedural knowledge, and operational skill. Operational skill is essential in real-time operations. It combines the two former knowledge types, assisting a student to use them effectively in a dynamic, multi-tasking, real-time operations environment. A high-fidelity simulator of the operator interface to the ground control system, including an almost full replication of both the human-computer interface and human interaction with the dynamic system, was used in the GT-MOCA and GT-VITA evaluations. The GT-VITA empirical evaluation, conducted with a range of 'novices' that included GSFC operations management, GSFC operations software developers, and new flight operations team members, demonstrated that GT-VITA effectively taught a wide range of knowledge in a succinct and engaging manner.

  4. Demonstration of a Safety Analysis on a Complex System

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey; hide

    1997-01-01

    For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards are done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: There exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.

  5. Operator function modeling: An approach to cognitive task analysis in supervisory control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1987-01-01

    In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting construction of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operation and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).
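
    A minimal data-structure sketch of such a heterarchic network of nodes and arcs (the node names are invented for illustration):

        # Operator function model as a network: nodes are operator
        # activities at several levels; children capture decomposition and
        # successors capture permissible transitions between activities.
        class OFMNode:
            def __init__(self, name, level):
                self.name, self.level = name, level
                self.children = []     # decomposition (heterarchy, not a strict tree)
                self.successors = []   # arcs: valid next activities

        monitor = OFMNode("monitor telemetry", "function")
        detect = OFMNode("detect anomaly", "subfunction")
        command = OFMNode("issue corrective command", "task")
        monitor.children.append(detect)
        detect.successors.append(command)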

  6. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  7. Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh

    This paper presents techniques to create baseline distribution models using a utility feeder from Hawai'ian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology to add secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models to perform studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.
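
    The value of modeling the secondary circuit can be seen in a back-of-the-envelope meter-voltage calculation (a standard approximate voltage-drop formula; all numbers are invented, not from the paper):

        # Meter voltage = service voltage minus the I*(R*cos(phi) + X*sin(phi))
        # drop across the service transformer and secondary conductors.
        import math

        def meter_voltage(v_service, load_amps, r_ohms, x_ohms, pf=0.95):
            sin_phi = math.sqrt(1.0 - pf * pf)
            return v_service - load_amps * (r_ohms * pf + x_ohms * sin_phi)

        print(meter_voltage(v_service=240.0, load_amps=40.0,
                            r_ohms=0.05, x_ohms=0.03))   # ~237.7 V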

  8. Learning in networks

    NASA Technical Reports Server (NTRS)

    Buntine, Wray L.

    1995-01-01

    Intelligent systems require software incorporating probabilistic reasoning, and often times learning. Networks provide a framework and methodology for creating this kind of software. This paper introduces network models based on chain graphs with deterministic nodes. Chain graphs are defined as a hierarchical combination of Bayesian and Markov networks. To model learning, plates on chain graphs are introduced to model independent samples. The paper concludes by discussing various operations that can be performed on chain graphs with plates as a simplification process or to generate learning algorithms.
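
    A toy encoding of the ingredients named above, assuming a deliberately minimal representation (this is not the paper's formalism):

        # A chain graph mixes directed (Bayesian) and undirected (Markov)
        # edges, marks deterministic nodes, and uses plates to denote node
        # sets replicated once per independent sample.
        class ChainGraph:
            def __init__(self):
                self.directed = []        # (parent, child) edges
                self.undirected = []      # {a, b} symmetric edges
                self.deterministic = set()
                self.plates = []          # (node_names, sample_count)

        g = ChainGraph()
        g.directed.append(("theta", "x"))   # parameter -> observation
        g.undirected.append({"x", "y"})     # Markov association
        g.deterministic.add("y")            # y computed from its neighbors
        g.plates.append(({"x", "y"}, 100))  # replicated over 100 samples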

  9. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960's there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  10. Software Risk Identification for Interplanetary Probes

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert J.; Papadopoulos, Periklis E.

    2005-01-01

    The need for a systematic and effective software risk identification methodology is critical for interplanetary probes that are using increasingly complex and critical software. Several probe failures are examined that suggest more attention and resources need to be dedicated to identifying software risks. The direct causes of these failures can often be traced to systemic problems in all phases of the software engineering process. These failures have led to the development of a practical methodology to identify risks for interplanetary probes. The proposed methodology is based upon the tailoring of the Software Engineering Institute's (SEI) method of taxonomy-based risk identification. The use of this methodology will ensure a more consistent and complete identification of software risks in these probes.
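
    Taxonomy-based identification amounts to walking a class/element taxonomy and recording interview answers as candidate risks. The sketch below uses the SEI software risk taxonomy's top-level classes; the element lists are abbreviated and the example answer is invented.

        # Walk a risk taxonomy and collect answered concerns as risks.
        taxonomy = {
            "Product Engineering": ["Requirements", "Design", "Integration and Test"],
            "Development Environment": ["Development Process", "Management Methods"],
            "Program Constraints": ["Resources", "Contract", "Program Interfaces"],
        }

        def identify_risks(answers):
            risks = []
            for class_name, elements in taxonomy.items():
                for element in elements:
                    concern = answers.get((class_name, element))
                    if concern:
                        risks.append(f"{class_name}/{element}: {concern}")
            return risks

        print(identify_risks({("Product Engineering", "Requirements"):
                              "fault-protection requirements still unstable"}))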

  11. Analysis of Software Development Methodologies to Build Safety Software Applications for the SATEX-II: A Mexican Experimental Satellite

    NASA Astrophysics Data System (ADS)

    Aguilar Cisneros, Jorge; Vargas Martinez, Hector; Pedroza Melendez, Alejandro; Alonso Arevalo, Miguel

    2013-09-01

    Mexico is a country where the experience to build software for satellite applications is beginning. This is a delicate situation because in the near future we will need to develop software for the SATEX-II (Mexican Experimental Satellite). SATEX-II is a SOMECyTA project (the Mexican Society of Aerospace Science and Technology). We have experience applying software development methodologies, like TSP (Team Software Process) and SCRUM, in other areas. We analyzed these methodologies and concluded that they can be applied to develop software for the SATEX-II, supported by the ESA PSS-05-0 standard, in particular ESA PSS-05-11. Our analysis focused on the main characteristics of each methodology and how these methodologies could be used together with the ESA PSS-05-0 standards. Our outcomes, in general, may be used by teams who need to build small satellites; in particular, they will be used when we build the onboard software applications for the SATEX-II.

  12. Methodology and Software for Gross Defect Detection of Spent Nuclear Fuel at the Atucha-I Reactor [Novel Methodology and Software for Spent Fuel Gross Defect Detection at the Atucha-I Reactor

    DOE PAGES

    Sitaraman, Shivakumar; Ham, Young S.; Gharibyan, Narek; ...

    2017-03-27

    Here, fuel assemblies in the spent fuel pool are stored by suspending them in two vertically stacked layers at the Atucha Unit 1 nuclear power plant (Atucha-I). This introduces the unique problem of verifying the presence of fuel in either layer without physically moving the fuel assemblies. Given that the facility uses both natural uranium and slightly enriched uranium at 0.85 wt% 235U and has been in operation since 1974, a wide range of burnups and cooling times can exist in any given pool. A gross defect detection tool, the spent fuel neutron counter (SFNC), has been used at the site to verify the presence of fuel up to burnups of 8000 MWd/t. At higher discharge burnups, the existing signal processing software of the tool was found to fail due to nonlinearity of the source term with burnup.

  13. Methodology and Software for Gross Defect Detection of Spent Nuclear Fuel at the Atucha-I Reactor [Novel Methodology and Software for Spent Fuel Gross Defect Detection at the Atucha-I Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Shivakumar; Ham, Young S.; Gharibyan, Narek

    Here, fuel assemblies in the spent fuel pool are stored by suspending them in two vertically stacked layers at the Atucha Unit 1 nuclear power plant (Atucha-I). This introduces the unique problem of verifying the presence of fuel in either layer without physically moving the fuel assemblies. Given that the facility uses both natural uranium and slightly enriched uranium at 0.85 wt% 235U and has been in operation since 1974, a wide range of burnups and cooling times can exist in any given pool. A gross defect detection tool, the spent fuel neutron counter (SFNC), has been used at the site to verify the presence of fuel up to burnups of 8000 MWd/t. At higher discharge burnups, the existing signal processing software of the tool was found to fail due to nonlinearity of the source term with burnup.

  14. Ada and the rapid development lifecycle

    NASA Technical Reports Server (NTRS)

    Deforrest, Lloyd; Gref, Lynn

    1991-01-01

    JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System utilizing an incremental delivery methodology called the Rapid Development Methodology with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Approach, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economies improved over, projects developed under more traditional software-intensive system implementation methodologies.

  15. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  16. Integrating automated support for a software management cycle into the TAME system

    NASA Technical Reports Server (NTRS)

    Sunazuka, Toshihiko; Basili, Victor R.

    1989-01-01

    Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
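
    The GQM derivation at the heart of the approach is simple to state: each goal spawns questions, and each question spawns the metrics that answer it. A schematic instance (the goal and metrics are invented placeholders):

        # Goal -> questions -> metrics, the GQM refinement chain.
        gqm = {
            "goal": "Improve reliability of delivered software",
            "questions": [
                {"question": "What is the current defect density?",
                 "metrics": ["defects per KLOC at delivery"]},
                {"question": "Where are defects introduced?",
                 "metrics": ["defects by life-cycle phase", "defects by module"]},
            ],
        }

        for q in gqm["questions"]:
            print(q["question"], "->", ", ".join(q["metrics"]))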

  17. 75 FR 81667 - Biweekly Notice; Applications and Amendments to Facility Operating Licenses Involving No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ...-Filing system does not support unlisted software, and the NRC Meta System Help Desk will not be able to... Setpoint Methodology for LSSS [Limiting Safety System Setting] Functions,'' which included the instrument... System Instrumentation,'' Function 3, Condensate Storage Tank Level--Low. The supporting TS Bases will...

  18. A Teamwork-Oriented Air Traffic Control Simulator

    DTIC Science & Technology

    2006-06-01

    the software development methodology of this work, this chapter is viewed as the acquisition phase of this model. The end of the... [life-cycle diagram residue: Development, Verification, Maintenance, Retirement phases] ...because the different controllers working in these phases usually... traditional operation such as scaling the airport and personalizing the working environment. 4. Pilot Specification...

  19. Imperfection and Thickness Measurement of Panels Using a Coordinate Measurement Machine

    NASA Technical Reports Server (NTRS)

    Thornburgh, Robert P.

    2006-01-01

    This paper summarizes the methodology used to measure imperfection and thickness variation for flat and curved panels using a Coordinate Measurement Machine (CMM) and the software program MeasPanel. The objective is to provide a reference document so that someone with a basic understanding of CMM operation can measure a panel with minimal training. Detailed information about both the measurement system setup and computer software is provided. Information is also provided about the format of the raw data, as well as how it is post-processed for use in finite-element analysis.

  20. Methodology for Designing Fault-Protection Software

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin; Levison, Jeffrey; Kan, Edwin

    2006-01-01

    A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, Monitor generates a RawOpinion, which graduates into Opinion, categorized into no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment and then mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a fault-tree analysis "top-down" approach, and a functional fault-mode-effects-and-analysis via "bottoms-up" approach. Via this process, the mitigation and recovery strategy(s) per Fault Containment Region scope (width versus depth) the FP architecture.
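
    A skeletal rendering of that monitor-symptom-alarm-response flow, with the threshold, persistence handling, and example responses invented rather than taken from the document:

        # Monitor -> Opinion -> Symptom -> Alarm -> tiered Response.
        from enum import Enum

        class Opinion(Enum):
            NO_OPINION = 0
            ACCEPTABLE = 1
            UNACCEPTABLE = 2

        def monitor_temperature(raw_kelvin):
            if raw_kelvin is None:
                return Opinion.NO_OPINION
            return Opinion.ACCEPTABLE if raw_kelvin < 320.0 else Opinion.UNACCEPTABLE

        def fault_protection_step(raw_kelvin, persistence, count):
            opinion = monitor_temperature(raw_kelvin)
            count = count + 1 if opinion is Opinion.UNACCEPTABLE else 0
            if count >= persistence:              # symptom raised -> alarm
                responses = ["tier 1: power off suspect heater",
                             "tier 2: swap to redundant unit"]
                return "OVERTEMP", responses, count
            return None, [], count

        print(fault_protection_step(331.0, persistence=2, count=1))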

  1. Rational Design Methodology.

    DTIC Science & Technology

    1978-09-01

    This report describes an effort to specify a software design methodology applicable to the Air Force software environment. Available methodologies...of techniques for proof of correctness, design specification, and performance assessment of static designs. The rational methodology selected is a

  2. A Matrix Approach to Software Process Definition

    NASA Technical Reports Server (NTRS)

    Schultz, David; Bachman, Judith; Landis, Linda; Stark, Mike; Godfrey, Sally; Morisio, Maurizio; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The Software Engineering Laboratory (SEL) is currently engaged in a Methodology and Metrics program for the Information Systems Center (ISC) at Goddard Space Flight Center (GSFC). This paper addresses the Methodology portion of the program. The purpose of the Methodology effort is to assist a software team lead in selecting and tailoring a software development or maintenance process for a specific GSFC project. It is intended that this process will also be compliant with both ISO 9001 and the Software Engineering Institute's Capability Maturity Model (CMM). Under the Methodology program, we have defined four standard ISO-compliant software processes for the ISC, and three tailoring criteria that team leads can use to categorize their projects. The team lead would select a process and appropriate tailoring factors, from which a software process tailored to the specific project could be generated. Our objective in the Methodology program is to present software process information in a structured fashion, to make it easy for a team lead to characterize the type of software engineering to be performed, and to apply tailoring parameters to search for an appropriate software process description. This will enable the team lead to follow a proven, effective software process and also satisfy NASA's requirement for compliance with ISO 9001 and the anticipated requirement for CMM assessment. This work is also intended to support the deployment of sound software processes across the ISC.
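
    In code, the selection step reduces to a lookup from tailoring criteria to a process. The criteria and process names below are invented stand-ins, since the paper does not publish the table itself.

        # Map tailoring criteria to one of several standard processes.
        def select_process(project_size, criticality, maintenance_only):
            if maintenance_only:
                return "maintenance process"
            if criticality == "safety-critical":
                return "full process with independent verification"
            return "lightweight process" if project_size == "small" else "standard process"

        print(select_process("small", "routine", maintenance_only=False))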

  3. Toward a Formal Model of the Design and Evolution of Software

    DTIC Science & Technology

    1988-12-20

    ...the future. It should have the flexibility to support a variety of design methodologies, be comprehensive enough to encompass the gamut of software lifecycle activities, and be precise enough to provide the

  4. Abstraction, ethics and software: Why don't the rules work?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warwick, S.

    1994-12-31

    A theory is presented that one of the reasons why the use of unlicensed software is so widespread and unstigmatized is that legislatures, courts, and other bodies which create policy operate at a higher level of abstraction than do individuals, and that abstraction is a key factor in the divergence of societal behavior from that condoned by legal statute. This theory is explored through a pilot study consisting of medium-depth interviews with two volunteers who had used unlicensed software. Their attitudes, understanding of the law, and characterization of their use of unlicensed software as based on "need" are reported. In addition, the concept of face is examined, and how it is maintained while violating the law. It is suggested that further studies, using multiple methodologies (in-depth interviews, focus groups, and surveys), be conducted prior to developing further policy or legislation regarding intellectual property protection for software.

  5. An Agile Constructionist Mentoring Methodology for Software Projects in the High School

    ERIC Educational Resources Information Center

    Meerbaum-Salant, Orni; Hazzan, Orit

    2010-01-01

    This article describes the construction process and evaluation of the Agile Constructionist Mentoring Methodology (ACMM), a mentoring method for guiding software development projects in the high school. The need for such a methodology has arisen due to the complexity of mentoring software project development in the high school. We introduce the…

  6. 78 FR 49298 - Applications and Amendments to Facility Operating Licenses and Combined Licenses Involving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-13

    ... the NRC's E-Filing system does not support unlisted software, and the NRC Meta System Help Desk will... methodology and performance criteria for licensees to identify fire protection systems and features that are... System (ADAMS): You may access publicly-available documents online in the NRC Library at http://www.nrc...

  7. Closing the Certification Gaps in Adaptive Flight Control Software

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2008-01-01

    Over the last five decades, extensive research has been performed to design and develop adaptive control systems for aerospace systems and other applications where the capability to change controller behavior at different operating conditions is highly desirable. Although adaptive flight control has been partially implemented through the use of gain-scheduled control, truly adaptive control systems using learning algorithms and on-line system identification methods have not seen commercial deployment. The reason is that the certification process for adaptive flight control software for use in national air space has not yet been decided. The purpose of this paper is to examine the gaps between the state-of-the-art methodologies used to certify conventional (i.e., non-adaptive) flight control system software and what will likely be needed to satisfy FAA airworthiness requirements. These gaps include the lack of a certification plan or process guide, the need to develop verification and validation tools and methodologies to analyze adaptive controller stability and convergence, as well as the development of metrics to evaluate adaptive controller performance at off-nominal flight conditions. This paper presents the major certification gap areas, a description of the current state of the verification methodologies, and what further research efforts will likely be needed to close the gaps remaining in current certification practices. It is envisioned that closing the gap will require certain advances in simulation methods, comprehensive methods to determine learning algorithm stability and convergence rates, the development of performance metrics for adaptive controllers, the application of formal software assurance methods, the application of on-line software monitoring tools for adaptive controller health assessment, and the development of a certification case for adaptive system safety of flight.

  8. Simulation Enabled Safeguards Assessment Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future supplies. New facilities, operating under a strengthened international nonproliferation regime will be needed. There is good reason to believe virtual engineering applied to the facility design, as well as to the safeguards system design will reduce total project cost and improve efficiency in the design cycle. Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag and drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  9. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
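
    The calibration step can be illustrated with a COCOMO-style model, effort = a * KLOC^b, fitted by exhaustive search over the parameter space against historical projects. The data points and search ranges below are invented, and this is not the 2cee code.

        # Exhaustive search for (a, b) minimizing squared error on history.
        history = [(10, 24), (32, 91), (55, 180), (120, 450)]  # (KLOC, person-months)

        def calibrate(history):
            best_params, best_err = None, float("inf")
            for a10 in range(10, 60):           # a in [1.0, 6.0)
                for b100 in range(90, 130):     # b in [0.90, 1.30)
                    a, b = a10 / 10, b100 / 100
                    err = sum((a * kloc ** b - pm) ** 2 for kloc, pm in history)
                    if err < best_err:
                        best_params, best_err = (a, b), err
            return best_params

        a, b = calibrate(history)
        print(f"effort ~= {a:.1f} * KLOC^{b:.2f}")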

  10. An Interoperability Framework and Capability Profiling for Manufacturing Software

    NASA Astrophysics Data System (ADS)

    Matsuda, M.; Arai, E.; Nakano, N.; Wakai, H.; Takeda, H.; Takata, M.; Sasaki, H.

    ISO/TC184/SC5/WG4 is working on ISO16100: Manufacturing software capability profiling for interoperability. This paper reports on a manufacturing software interoperability framework and a capability profiling methodology which were proposed and developed through this international standardization activity. Within the context of manufacturing application, a manufacturing software unit is considered to be capable of performing a specific set of functions defined by a manufacturing software system architecture. A manufacturing software interoperability framework consists of a set of elements and rules for describing the capability of software units to support the requirements of a manufacturing application. The capability profiling methodology makes use of the domain-specific attributes and methods associated with each specific software unit to describe capability profiles in terms of unit name, manufacturing functions, and other needed class properties. In this methodology, manufacturing software requirements are expressed in terms of software unit capability profiles.
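
    A capability profile of this kind reduces naturally to a small record type; the sketch below is a loose illustration in that spirit, with all field values invented (it does not reproduce the ISO 16100 schema).

        # A software unit's capability profile and a matching check.
        from dataclasses import dataclass, field

        @dataclass
        class CapabilityProfile:
            unit_name: str
            manufacturing_functions: list
            properties: dict = field(default_factory=dict)

            def supports(self, required):
                return set(required) <= set(self.manufacturing_functions)

        scheduler = CapabilityProfile(
            unit_name="CellScheduler",
            manufacturing_functions=["scheduling", "dispatching"],
            properties={"vendor": "ExampleCo", "version": "2.1"})
        print(scheduler.supports(["scheduling"]))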

  11. A methodology for producing reliable software, volume 1

    NASA Technical Reports Server (NTRS)

    Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.

    1976-01-01

    An investigation into the areas having an impact on producing reliable software including automated verification tools, software modeling, testing techniques, structured programming, and management techniques is presented. This final report contains the results of this investigation, analysis of each technique, and the definition of a methodology for producing reliable software.

  12. Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 1

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of Task 3 is to provide additional analysis and insight necessary to support key design/programmatic decisions for options quantification and selection for system definition. This includes: (1) the identification of key trade study topics; (2) the definition of a trade study procedure for each topic (issues to be resolved, key inputs, criteria/weighting, methodology); (3) the conduct of tradeoff and sensitivity analyses; and (4) the review/verification of results within the context of evolving system design and definition. The trade study topics addressed in this volume include space autonomy and function automation, software transportability, system network topology, communications standardization, onboard local area networking, distributed operating system, software configuration management, and the software development environment facility.

  13. Software Requirements Engineering Methodology (Development)

    DTIC Science & Technology

    1979-06-01

    Higher Order Software [20]; and the Michael Jackson Design Methodology [21]. Although structured programming constructs have proven to be more useful...reviewed here. Similarly, the manual techniques for software design (e.g., HIPO Diagrams, Nassi-Schneidermann charts, Top-Down Design, the Michael ... Jackson Design Methodology, Yourdon’s Structured Design) are not addressed. 6.1.3 Research Programs There are a number of research programs underway

  14. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets

    PubMed Central

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-01-01

    Purpose: With emergence of clinical outcomes databases as tools utilized routinely within institutions, comes need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code was evaluated using benchmark data sets. Results: The approach provides data needed to evaluate combinations of statistical measurements for ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
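
    The filtering chain the authors describe can be approximated in a few lines. The sketch below is a Python stand-in (the published tool combines C#.Net and R) on synthetic data, using an ROC sweep for the threshold and the named tests for confirmation.

        # ROC-derived threshold, then Fisher exact / Welch t / KS tests.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        dose_no_event = rng.normal(18, 4, 80)   # doses, patients without toxicity
        dose_event = rng.normal(26, 4, 20)      # doses, patients with toxicity

        # Threshold maximizing Youden's index (sensitivity + specificity - 1)
        grid = np.linspace(5, 40, 200)
        best_t = max(grid, key=lambda t: (dose_event > t).mean()
                                         + (dose_no_event <= t).mean() - 1)

        table = [[(dose_event > best_t).sum(), (dose_no_event > best_t).sum()],
                 [(dose_event <= best_t).sum(), (dose_no_event <= best_t).sum()]]
        _, p_fisher = stats.fisher_exact(table)
        _, p_welch = stats.ttest_ind(dose_event, dose_no_event, equal_var=False)
        _, p_ks = stats.ks_2samp(dose_event, dose_no_event)
        print(best_t, p_fisher, p_welch, p_ks)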

  15. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    PubMed

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With emergence of clinical outcomes databases as tools utilized routinely within institutions, comes need for software tools to support automated statistical analysis of these large data sets and intrainstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code was evaluated using benchmark data sets. The approach provides data needed to evaluate combinations of statistical measurements for ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operator characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.

  16. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting for not only formal verification and program testing, but also the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: (A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; (B) quantify the impact of these methods on software reliability; (C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and (D) quantify and justify the reliability estimate for systems developed using various methods.
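
    The testing burden at ultra-high levels can be made concrete with the textbook failure-free demonstration bound (a standard result, not taken from this abstract): demonstrating failure probability below p with confidence C requires n >= ln(1-C)/ln(1-p) failure-free runs.

        # Smallest n with (1 - p)^n <= 1 - C.
        import math

        def tests_needed(p, confidence):
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

        print(tests_needed(1e-4, 0.99))   # ~46,050 failure-free runs

    Credit from qualitative V&V, as proposed here, reduces that count by strengthening the prior belief in the software before testing begins.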

  17. Multidisciplinary Concurrent Design Optimization via the Internet

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Kelkar, Atul G.; Koganti, Gopichand

    2001-01-01

    A methodology is presented which uses commercial design and analysis software and the Internet to perform concurrent multidisciplinary optimization. The methodology provides a means to develop multidisciplinary designs without requiring that all software be accessible from the same local network. The procedures are amenable to design and development teams whose members, expertise and respective software are not geographically located together. This methodology facilitates multidisciplinary teams working concurrently on a design problem of common interest. Partition of design software to different machines allows each constituent software to be used on the machine that provides the most economy and efficiency. The methodology is demonstrated on the concurrent design of a spacecraft structure and attitude control system. Results are compared to those derived from performing the design with an autonomous FORTRAN program.

  18. SIMSAT: An object oriented architecture for real-time satellite simulation

    NASA Technical Reports Server (NTRS)

    Williams, Adam P.

    1993-01-01

    Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminates in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.

  19. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  20. An NAFP Project: Use of Object Oriented Methodologies and Design Patterns to Refactor Software Design

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Baggs, Rhoda

    2007-01-01

    In the early problem-solution era of software programming, functional decompositions were mainly used to design and implement software solutions. In functional decompositions, functions and data are introduced as two separate entities during the design phase, and are followed as such in the implementation phase. Functional decompositions make use of refactoring through optimizing the algorithms, grouping similar functionalities into common reusable functions, and using abstract representations of data where possible; all these are done during the implementation phase. This paper advocates the usage of object-oriented methodologies and design patterns as the centerpieces of refactoring software solutions. Refactoring software is a method of changing software design while explicitly preserving its external functionalities. The combined usage of object-oriented methodologies and design patterns to refactor should also benefit the overall software life cycle cost with improved software.
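
    A toy illustration of the kind of transformation advocated here (a hypothetical example, not one from the paper): a functionally decomposed dispatch on a type code is refactored into a Strategy-style class hierarchy, changing the design while explicitly preserving external behavior.

        # Before: functional decomposition -- data and behavior kept separate,
        # with a central dispatch on a type code.
        def compute_area_functional(shape: dict) -> float:
            if shape["kind"] == "circle":
                return 3.14159 * shape["r"] ** 2
            if shape["kind"] == "rect":
                return shape["w"] * shape["h"]
            raise ValueError(shape["kind"])

        # After: Strategy pattern -- each shape owns its area algorithm, so new
        # shapes are added without editing a central dispatch function.
        from abc import ABC, abstractmethod

        class Shape(ABC):
            @abstractmethod
            def area(self) -> float: ...

        class Circle(Shape):
            def __init__(self, r: float): self.r = r
            def area(self) -> float: return 3.14159 * self.r ** 2

        class Rect(Shape):
            def __init__(self, w: float, h: float): self.w, self.h = w, h
            def area(self) -> float: return self.w * self.h

        # External behavior is preserved by the refactoring:
        assert compute_area_functional({"kind": "rect", "w": 2, "h": 3}) == Rect(2, 3).area()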

  1. Methodology to Assess No Touch Audit Software Using Simulated Building Utility Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Howard; Braun, James E.; Langner, M. Rois

    This report describes a methodology developed for assessing the performance of no touch building audit tools and presents results for an available tool. Building audits are conducted in many commercial buildings to reduce building energy costs and improve building operation. Because the audits typically require significant input obtained by building engineers, they are usually only affordable for larger commercial building owners. In an effort to help small building and business owners gain the benefits of an audit at a lower cost, no touch building audit tools have been developed to remotely analyze a building's energy consumption.

  2. A framework for assessing the adequacy and effectiveness of software development methodologies

    NASA Technical Reports Server (NTRS)

    Arthur, James D.; Nance, Richard E.

    1990-01-01

    Tools, techniques, environments, and methodologies dominate the software engineering literature, but relatively little research in the evaluation of methodologies is evident. This work reports an initial attempt to develop a procedural approach to evaluating software development methodologies. Prominent in this approach are: (1) an explication of the role of a methodology in the software development process; (2) the development of a procedure based on linkages among objectives, principles, and attributes; and (3) the establishment of a basis for reduction of the subjective nature of the evaluation through the introduction of properties. An application of the evaluation procedure to two Navy methodologies has provided consistent results that demonstrate the utility and versatility of the evaluation procedure. Current research efforts focus on the continued refinement of the evaluation procedure through the identification and integration of product quality indicators reflective of attribute presence, and the validation of metrics supporting the measure of those indicators. The consequent refinement of the evaluation procedure offers promise of a flexible approach that admits to change as the field of knowledge matures. In conclusion, the procedural approach presented in this paper represents a promising path toward the end goal of objectively evaluating software engineering methodologies.

  3. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure the accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  4. Using Modern Methodologies with Maintenance Software

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Francis, Laurie K.; Smith, Benjamin D.

    2014-01-01

    Jet Propulsion Laboratory uses multi-mission software produced by the Mission Planning and Sequencing (MPS) team to process, simulate, translate, and package the commands that are sent to a spacecraft. MPS works under the auspices of the Multi-Mission Ground Systems and Services (MGSS). This software consists of nineteen applications that are in maintenance. The MPS software is classified as either Class B (mission critical) or Class C (mission important). Scheduling tasks is difficult because mission needs must be addressed before any other work and often arise unexpectedly; tracking what everyone is working on is also difficult because each person works on a different software component. Recently the group adopted the Scrum methodology for planning and scheduling tasks. Scrum is one of the newer methodologies typically used in agile development. In the Scrum development environment, teams pick the tasks to be completed within a sprint based on priority; the team specifies the sprint length, usually a month or less. Scrum is typically used for new development of a single application. The Scrum methodology defines a scrum master, a facilitator who keeps work moving smoothly; a product owner, who represents the user(s) of the software; and the team. MPS is not the traditional environment for Scrum: it has many software applications in maintenance, team members working on disparate applications, many users, and work that is interruptible by mission needs, issues, and requirements. Scrum therefore needed adaptation to MPS, and it was chosen precisely because it is adaptable. This paper describes the development of the process for using Scrum with a team that works on disparate, interruptible tasks across multiple software applications.

  5. Software production methodology testbed project

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    The history and results of a 3 1/2-year study in software development methodology are reported. The findings of this study have become the basis for DSN software development guidelines and standard practices. The article discusses accomplishments, discoveries, problems, recommendations and future directions.

  6. Software engineering methodologies and tools

    NASA Technical Reports Server (NTRS)

    Wilcox, Lawrence M.

    1993-01-01

    Over the years many engineering disciplines have developed, including chemical, electronic, etc. Common to all engineering disciplines is the use of rigor, models, metrics, and predefined methodologies. Recently, a new engineering discipline has appeared on the scene, called software engineering. For over thirty years computer software has been developed, and the track record has not been good. Software development projects often miss schedules, run over budget, do not give the user what is wanted, and produce defects. One estimate is that there are one to three defects per 1000 lines of deployed code. More and more systems require larger and more complex software for support, and as this requirement grows, the software development problems grow exponentially. It is believed that software quality can be improved by applying engineering principles. Another compelling reason to bring the engineering disciplines to software development is productivity. It has been estimated that the productivity of producing software has increased only one to two percent a year over the last thirty years. Ironically, the computer and its software have contributed significantly to industry-wide productivity, but computer professionals have done a poor job of using the computer to do their own job. Engineering disciplines and methodologies are now emerging, supported by software tools that address the problems of software development. This paper addresses some of the current software engineering methodologies as a backdrop for a general evaluation of computer-assisted software engineering (CASE) tools, based on actual installation of and experimentation with some specific tools.

  7. Methodology evaluation: Effects of independent verification and integration on one class of application

    NASA Technical Reports Server (NTRS)

    Page, J.

    1981-01-01

    The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development process and the product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretations; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.

  8. Programming support environment issues in the Byron programming environment

    NASA Technical Reports Server (NTRS)

    Larsen, Matthew J.

    1986-01-01

    Issues are discussed which programming support environments need to address in order to successfully support software engineering. These concerns are divided into two categories. The first category, issues of how software development is supported by an environment, includes support of the full life cycle, methodology flexibility, and support of software reusability. The second category contains issues of how environments should operate, such as tool reusability and integration, user friendliness, networking, and use of a central data base. This discussion is followed by an examination of Byron, an Ada based programming support environment developed at Intermetrics, focusing on the solutions Byron offers to these problems, including the support provided for software reusability and the test and maintenance phases of the life cycle. The use of Byron in project development is described briefly, and some suggestions for future Byron tools and user written tools are presented.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hugo, Jacques

    The software application is called "HFE-Trace". This is an integrated method and tool for the management of Human Factors Engineering analyses and related data. Its primary purpose is to support the coherent and consistent application of the nuclear industry's best practices for human factors engineering work. The software is a custom Microsoft® Access® application. The application is used (in conjunction with other tools such as spreadsheets, checklists, and normal documents where necessary) to collect data on the design of a new nuclear power plant from subject matter experts and other sources. This information is then used to identify potential system and functional breakdowns of the intended power plant design. This information is expanded by developing extensive descriptions of all functions, as well as system performance parameters, operating limits and constraints, and operational conditions. Once these have been verified, the human factors elements are added to each function, including intended operator role, function allocation considerations, prohibited actions, primary task categories, and primary work station. In addition, the application includes a computational method to assess a number of factors (such as system and process complexity, workload, environmental conditions, procedures, regulations, etc.) that may shape operator performance. This is a unique methodology based upon principles described in NUREG/CR-3331 ("A methodology for allocating nuclear power plant control functions to human or automatic control"), and it results in a semi-quantified allocation of functions to three or more levels of automation for a conceptual automation system. The aggregate of all this information is then linked to the Task Analysis section of the application, where the existing information on all operator functions is transformed into task information and ultimately into design requirements for Human-System Interfaces and Control Rooms. This final step includes assessment of methods to prevent potential operator errors.

  10. A software-based sensor for combined sewer overflows.

    PubMed

    Leonhardt, G; Fach, S; Engelhard, C; Kinzel, H; Rauch, W

    2012-01-01

    A new methodology for online estimation of excess flow from combined sewer overflow (CSO) structures based on simulation models is presented. If sufficient flow and water level data from the sewer system are available, no rainfall data are needed to run the model. An inverse rainfall-runoff model was developed to simulate net rainfall from flow and water level data; excess flow at all CSO structures in a catchment can then be simulated with a rainfall-runoff model. The method is applied to a case study, and the results show that the inverse rainfall-runoff model can be used in place of missing rain gauges. Online operation is ensured by software providing an interface to the operator's SCADA system and controlling the model. A water quality model could be included to also simulate pollutant concentrations in the excess flow.
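
    The inversion idea can be sketched under a strong simplification (illustrative only; the paper's model is more elaborate): if the catchment is idealized as a linear reservoir with storage S = kQ and balance dS/dt = rA - Q, then measured flow can be inverted to net rainfall as r = (Q + k dQ/dt)/A.

        # Sketch: invert a linear-reservoir rainfall-runoff model to recover net
        # rainfall from measured flow. k, the area, and the flow series are
        # placeholder values, not data from the study.

        def net_rainfall(q: list[float], dt: float, k: float, area: float) -> list[float]:
            """r_t = (Q_t + k * dQ/dt) / A, using a forward-difference derivative."""
            r = []
            for i in range(len(q) - 1):
                dq_dt = (q[i + 1] - q[i]) / dt
                r.append(max(0.0, (q[i] + k * dq_dt) / area))   # clip negative estimates
            return r

        flow = [0.2, 0.8, 1.9, 2.4, 2.1, 1.5, 1.0]    # m^3/s, measured at the outlet
        print(net_rainfall(flow, dt=300.0, k=1800.0, area=5.0e5))   # net rainfall, m/s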

  11. Software Engineering Laboratory Ada performance study: Results and implications

    NASA Technical Reports Server (NTRS)

    Booth, Eric W.; Stark, Michael E.

    1992-01-01

    The SEL is an organization sponsored by NASA/GSFC to investigate the effectiveness of software engineering technologies applied to the development of applications software. The SEL was created in 1977 and has three organizational members: NASA/GSFC, Systems Development Branch; The University of Maryland, Computer Sciences Department; and Computer Sciences Corporation, Systems Development Operation. The goals of the SEL are as follows: (1) to understand the software development process in the GSFC environments; (2) to measure the effect of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that include the Ada Performance Study Report. This paper describes the background of Ada in the Flight Dynamics Division (FDD), the objectives and scope of the Ada Performance Study, the measurement approach used, the performance tests performed, the major test results, and the implications for future FDD Ada development efforts.

  12. A design and implementation methodology for diagnostic systems

    NASA Technical Reports Server (NTRS)

    Williams, Linda J. F.

    1988-01-01

    A methodology for design and implementation of diagnostic systems is presented. Also discussed are the advantages of embedding a diagnostic system in a host system environment. The methodology utilizes an architecture for diagnostic system development that is hierarchical and makes use of object-oriented representation techniques. Additionally, qualitative models are used to describe the host system components and their behavior. The methodology architecture includes a diagnostic engine that utilizes a combination of heuristic knowledge to control the sequence of diagnostic reasoning. The methodology provides an integrated approach to development of diagnostic system requirements that is more rigorous than standard systems engineering techniques. The advantages of using this methodology during various life cycle phases of the host systems (e.g., National Aerospace Plane (NASP)) include: the capability to analyze diagnostic instrumentation requirements during the host system design phase, a ready software architecture for implementation of diagnostics in the host system, and the opportunity to analyze instrumentation for failure coverage in safety critical host system operations.

  13. From Goal-Oriented Requirements to Event-B Specifications

    NASA Technical Reports Server (NTRS)

    Aziz, Benjamin; Arenas, Alvaro E.; Bicarregui, Juan; Ponsard, Christophe; Massonet, Philippe

    2009-01-01

    In goal-oriented requirements engineering methodologies, goals are structured into refinement trees from high-level system-wide goals down to fine-grained requirements assigned to specific software/hardware/human agents that can realise them. Functional goals assigned to software agents need to be operationalised into specification of services that the agent should provide to realise those requirements. In this paper, we propose an approach for operationalising requirements into specifications expressed in the Event-B formalism. Our approach has the benefit of aiding software designers by bridging the gap between declarative requirements and operational system specifications in a rigorous manner, enabling powerful correctness proofs and allowing further refinements down to the implementation level. Our solution is based on verifying that a consistent Event-B machine exhibits properties corresponding to requirements.
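
    As a loose illustration of what operationalisation produces (a Python analogy with a hypothetical pump/tank example, not Event-B syntax): a functional requirement becomes a machine whose events carry guards and actions, with the requirement checked as an invariant after every event.

        # Sketch: a guarded-event machine with an invariant, in the spirit of an
        # Event-B operationalisation. Requirement: the pump never runs on an
        # empty tank.

        class PumpMachine:
            def __init__(self) -> None:
                self.level = 0           # tank level
                self.pump_on = False

            def invariant(self) -> bool:
                return not (self.pump_on and self.level == 0)

            def fill(self, amount: int) -> None:        # event with a trivial guard
                self.level += amount
                assert self.invariant()

            def start_pump(self) -> None:
                if self.level > 0:                      # guard
                    self.pump_on = True                 # action
                assert self.invariant()

            def drain(self) -> None:
                if self.pump_on and self.level > 0:     # guard
                    self.level -= 1
                    if self.level == 0:
                        self.pump_on = False            # action preserving the invariant
                assert self.invariant()

        m = PumpMachine(); m.fill(2); m.start_pump(); m.drain(); m.drain()
        print(m.level, m.pump_on)   # 0 False -- invariant held after every event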

  14. Secure Embedded System Design Methodologies for Military Cryptographic Systems

    DTIC Science & Technology

    2016-03-31

    Fault-Tree Analysis (FTA); Built-In Self-Test (BIST). Secure access-control systems restrict operations to authorized users via methods...failures in the individual software/processor elements; the question of exactly how unlikely is difficult to answer. Fault-Tree Analysis (FTA) has a...Collins of Sandia National Laboratories for years of sharing his extensive knowledge of Fail-Safe Design Assurance and Fault-Tree Analysis

  15. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.
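
    The enforcement idea can be sketched as follows (hypothetical event names; not the paper's implementation): a path expression such as init ; (arm ; fire)* ; shutdown compiles to a small state machine that a runtime monitor consults before allowing each critical event.

        # Sketch: enforce the event sequence "init ; (arm ; fire)* ; shutdown"
        # with a state machine derived from the path expression.

        TRANSITIONS = {
            ("start", "init"): "ready",
            ("ready", "arm"): "armed",
            ("armed", "fire"): "ready",
            ("ready", "shutdown"): "done",
        }

        class SequenceMonitor:
            def __init__(self) -> None:
                self.state = "start"

            def fire_event(self, event: str) -> None:
                key = (self.state, event)
                if key not in TRANSITIONS:
                    raise RuntimeError(f"event {event!r} illegal in state {self.state!r}")
                self.state = TRANSITIONS[key]

        mon = SequenceMonitor()
        for e in ["init", "arm", "fire", "arm", "fire", "shutdown"]:
            mon.fire_event(e)                 # accepted: matches the path expression
        # mon.fire_event("fire")  -> RuntimeError: sequence violated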

  16. Contribution of PCR Denaturing Gradient Gel Electrophoresis Combined with Mixed Chromatogram Software Separation for Complex Urinary Sample Analysis.

    PubMed

    Kotásková, Iva; Mališová, Barbora; Obručová, Hana; Holá, Veronika; Peroutková, Tereza; Růžička, Filip; Freiberger, Tomáš

    2017-01-01

    Complex samples are a challenge for sequencing-based broad-range diagnostics. We analysed 19 urinary catheter, ureteral Double-J catheter, and urine samples using 3 methodological approaches. Out of the total 84 operational taxonomic units, 37, 61, and 88% were identified by culture, PCR-DGGE-SS (PCR denaturing gradient gel electrophoresis followed by Sanger sequencing), and PCR-DGGE-RM (PCR-DGGE combined with software chromatogram separation by the RipSeq Mixed tool), respectively. The latter approach was shown to be an efficient tool to complement culture in complex sample assessment. © 2017 S. Karger AG, Basel.

  17. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    A methodology for testing fault-tolerant software is presented. Testing fault-tolerant software is problematic because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software: move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
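
    A toy illustration of the masking problem (hypothetical example): a majority voter hides a single faulty channel, so a test applied downstream of the voter passes even though one channel is wrong; testing each channel before the redundancy is in place exposes the error.

        # Sketch: a 2-of-3 majority voter masks a single faulty channel, so
        # black-box tests of the voted output pass even though channel_b is wrong.

        def channel_a(x: float) -> float: return 2 * x
        def channel_b(x: float) -> float: return 2 * x + (1 if x > 10 else 0)  # seeded bug
        def channel_c(x: float) -> float: return 2 * x

        def vote(a: float, b: float, c: float) -> float:
            """Return the majority value (2-of-3 exact-match voter)."""
            if a == b or a == c:
                return a
            if b == c:
                return b
            raise RuntimeError("no majority")

        x = 42.0
        assert vote(channel_a(x), channel_b(x), channel_c(x)) == 84.0  # test passes: bug masked
        assert channel_b(x) != 84.0  # testing the channel directly exposes the error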

  18. Data Centric Development Methodology

    ERIC Educational Resources Information Center

    Khoury, Fadi E.

    2012-01-01

    Data centric applications, an important effort of software development in large organizations, have mostly adopted a software methodology, such as waterfall or the Rational Unified Process, as the framework for their development. These methodologies could work for structural, procedural, or object-oriented applications, but fail to capture…

  19. Support for life-cycle product reuse in NASA's SSE

    NASA Technical Reports Server (NTRS)

    Shotton, Charles

    1989-01-01

    The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.

  20. Recent Improvement Of The Institutional Radioactive Waste Management System In Slovenia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sueiae, S.; Fabjan, M.; Hrastar, U.

    2008-07-01

    The task of managing institutional radioactive waste was assigned to the Slovenian National Agency for Radwaste Management (ARAO) by a Governmental Decree of May 1999. This task ranges from the collection of waste at users' premises to storage in the Central Storage Facility (CSF) and, afterwards, to the planned Low and Intermediate Level Waste (LILW) repository. By this Decree ARAO also became the operator of the CSF, which has been in operation since 1986. Recent improvements of the institutional radioactive waste management system in Slovenia are presented in this paper. ARAO has been working on the reestablishment of institutional radioactive waste management since 1999. The Agency has managed to prepare the most important documents and carry out the basic activities required by the legislation to assure safe and environmentally acceptable management of institutional radioactive waste. With the aim of achieving a better organized operational system, ARAO took advantage of the European Union Transition Facility (EU TF) financing support and applied for the project named 'Improvement of the management of institutional radioactive waste in Slovenia via the design and implementation of an Information Business System'. Through a public invitation for tenders, one of the largest Slovenian software companies won the contract; two international radwaste experts from Belgium were part of their project team. The optimization of the operational system was carried out in 2007. The project was executed in ten months and was divided into two phases. The first phase addressed the detection of weaknesses and the implementation of the necessary improvements in the current ARAO operational system; by evaluating the existing system, possible improvements were identified. In the second phase, the Information Business System (IBS) software was developed and implemented by a group of IT experts. The Waterfall methodology was used as the software development life-cycle methodology; the reason for choosing it lay in its simple approach: analyze the problem, design the solution, implement the code, test the code, integrate and deploy. ARAO's institutional radioactive waste management process was improved so that it is more efficient and better organized, allowing traceability and availability of all documents and operational procedures within the field of institutional radioactive waste. The tailor-made IBS links all activities of the institutional radioactive waste management process (collection, transportation, takeover, acceptance, storing, treatment, radiation protection, etc.) into one management system. All existing and newly designed records, operational procedures, and other documents can be searched and viewed via secured Internet access from different locations. (authors)

  1. Improvement of Steam Turbine Operational Performance and Reliability Using Modern Information Technologies

    NASA Astrophysics Data System (ADS)

    Brezgin, V. I.; Brodov, Yu M.; Kultishev, A. Yu

    2017-11-01

    The report reviews methods for improving steam turbine unit design and operation based on the application of modern information technologies. In accordance with the life-cycle support methodology, a conceptual model of an information support system covering the main stages of the steam turbine unit life cycle (LC) is suggested. A classification system, which ensures the creation of sustainable information links between the engineering team (manufacturer's plant) and customer organizations (power plants), is proposed. Within the report, the principle of extending parameterization beyond geometric construction in the design and improvement of steam turbine unit equipment is proposed, studied, and justified. The report presents a steam turbine unit equipment design methodology based on a new oil-cooler design system that has been developed and implemented by the authors. This design system combines a construction subsystem, characterized by extensive use of family tables and templates, and a computation subsystem, which includes a methodology for thermal-hydraulic zone-by-zone oil-cooler design calculations. The report also presents data on the developed software for operational monitoring and assessment of equipment parameters, as well as its implementation at five power plants.

  2. Software Requirements Specification for an Ammunition Management System

    DTIC Science & Technology

    1986-09-01

    thesis takes the form of a software requirements specification. Such a specification, according to Pressman [Ref. 7], establishes a complete...defined by Pressman, is depicted in Figure 1.1 (Generalized Software Life Cycle). The common thread which binds the various phases together...application of software engineering principles requires an established methodology. This methodology, according to Pressman [Ref. 8: p. 15], is an

  3. Adapting a standardised international 24 h dietary recall methodology (GloboDiet software) for research and dietary surveillance in Korea.

    PubMed

    Park, Min Kyung; Park, Jin Young; Nicolas, Geneviève; Paik, Hee Young; Kim, Jeongseon; Slimani, Nadia

    2015-06-14

    During the past decades, a rapid nutritional transition has been observed along with economic growth in the Republic of Korea. Since this dramatic change in diet has been frequently associated with cancer and other non-communicable diseases, dietary monitoring is essential to understand the association. Benefiting from pre-existing standardised dietary methodologies, the present study aimed to evaluate the feasibility and describe the development of a Korean version of the international computerised 24 h dietary recall method (GloboDiet software) and its complementary tools, developed at the International Agency for Research on Cancer (IARC), WHO. Following established international Standard Operating Procedures and guidelines, about seventy common and country-specific databases on foods, recipes, dietary supplements, quantification methods and coefficients were customised and translated. The main results of the present study highlight the specific adaptations made to adapt the GloboDiet software for research and dietary surveillance in Korea. New (sub-) subgroups were added into the existing common food classification, and new descriptors were added to the facets to classify and describe specific Korean foods. Quantification methods were critically evaluated and adapted considering the foods and food packages available in the Korean market. Furthermore, a picture book of foods/dishes was prepared including new pictures and food portion sizes relevant to Korean diet. The development of the Korean version of GloboDiet demonstrated that it was possible to adapt the IARC-WHO international dietary tool to an Asian context without compromising its concept of standardisation and software structure. It, thus, confirms that this international dietary methodology, used so far only in Europe, is flexible and robust enough to be customised for other regions worldwide.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detilleux, Michel; Centner, Baudouin

    The paper describes different methodologies and tools developed in-house by Tractebel Engineering to facilitate the engineering works to be carried out especially in the frame of decommissioning projects. Three examples of tools with their corresponding results are presented: - The LLWAA-DECOM code, a software developed for the radiological characterization of contaminated systems and equipment. The code constitutes a specific module of more general software that was originally developed to characterize radioactive waste streams in order to be able to declare the radiological inventory of critical nuclides, in particular difficult-to-measure radionuclides, to the Authorities. In the case of LLWAA-DECOM, deposited activitiesmore » inside contaminated equipment (piping, tanks, heat exchangers...) and scaling factors between nuclides, at any given time of the decommissioning time schedule, are calculated on the basis of physical characteristics of the systems and of operational parameters of the nuclear power plant. This methodology was applied to assess decommissioning costs of Belgian NPPs, to characterize the primary system of Trino NPP in Italy, to characterize the equipment of miscellaneous circuits of Ignalina NPP and of Kozloduy unit 1 and, to calculate remaining dose rates around equipment in the frame of the preparation of decommissioning activities; - The VISIMODELLER tool, a user friendly CAD interface developed to ease the introduction of lay-out areas in a software named VISIPLAN. VISIPLAN is a 3D dose rate assessment tool for ALARA work planning, developed by the Belgian Nuclear Research Centre SCK.CEN. Both softwares were used for projects such as the steam generators replacements in Belgian NPPs or the preparation of the decommissioning of units 1 and 2 of Kozloduy NPP; - The DBS software, a software developed to manage the different kinds of activities that are part of the general time schedule of a decommissioning project. For each activity, when relevant, algorithms allow to estimate, on the basis of local inputs, radiological exposures of the operators (collective and individual doses), production of primary, secondary and tertiary waste and their characterization, production of conditioned waste, release of effluents,... and enable the calculation and the presentation (histograms) of the global results for all activities together. An example of application in the frame of the Ignalina decommissioning project is given. (authors)« less

  5. Evaluating software development by analysis of changes: The data from the software engineering laboratory

    NASA Technical Reports Server (NTRS)

    1982-01-01

    An effective data collection methodology for evaluating software development methodologies was applied to four different software development projects. Goals of the data collection included characterizing changes and errors, characterizing projects and programmers, identifying effective error detection and correction techniques, and investigating ripple effects. The data collected consisted of changes (including error corrections) made to the software after code was written and baselined, but before testing began. Data collection and validation were concurrent with software development. Changes reported were verified by interviews with programmers.

  6. Laboratory test methodology for evaluating the effects of electromagnetic disturbances on fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1989-01-01

    Control systems for advanced aircraft, especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met for adverse as well as nominal operating conditions. Adverse conditions can result from electromagnetic disturbances caused by lightning, high-energy radio frequency transmitters, and nuclear electromagnetic pulses. Tools and techniques must be developed to verify the integrity of the control system under adverse operating conditions. The most difficult and elusive perturbations to computer-based control systems caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. A methodology is presented for performing upset tests on a multichannel control system, and considerations are discussed for the design of upset tests to be conducted in the lab on fault-tolerant control systems operating in a closed loop with a simulated plant.

  7. A Proven Methodology for Developing Secure Software and Applying It to Ground Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2016-01-01

    Part Two expands upon Part One in an attempt to translate the methodology for ground system personnel. The goal is to build upon the methodology presented in Part One by showing examples and details on how to implement the methodology. Section 1: Ground Systems Overview; Section 2: Secure Software Development; Section 3: Defense in Depth for Ground Systems; Section 4: What Now?

  8. Software Requirements Engineering Methodology

    DTIC Science & Technology

    1976-09-01

    common speech, so that the specification can be read by managers, systems engineers, and others who are not specially trained in the language. To...of the system and its DPS. They are usually implicit in the wording of the originating specifications, although the new SREM user must train ...to the name of the ENTITY_CLASS, the operation is applicable only to a single instance. This concentration of the requirements for creation and

  9. Application Development Methodology Appropriateness: An Exploratory Case Study Bridging the Gap between Framework Characteristics and Selection

    ERIC Educational Resources Information Center

    Williams, Lawrence H., Jr.

    2013-01-01

    This qualitative study analyzed the experiences of twenty software developers. The research showed that all software development methodologies are distinct from each other. While some, such as waterfall, focus on traditional, plan-driven approaches that allow software requirements and design to evolve, others facilitate ambiguity and uncertainty by…

  10. Multiobjective optimization of hybrid regenerative life support technologies. Topic D: Technology Assessment

    NASA Technical Reports Server (NTRS)

    Manousiouthakis, Vasilios

    1995-01-01

    We developed simple mathematical models for many of the technologies constituting the water reclamation system in a space station. These models were employed for subsystem optimization and for evaluating the performance of individual water reclamation technologies, quantifying their operational 'cost' as a linear function of weight, volume, and power consumption. We then performed preliminary investigations of the performance improvements attainable with simple hybrid systems involving parallel combinations of technologies. We are developing a software tool for synthesizing a hybrid water recovery system (WRS) for long-term space missions, employing the state space approach as the conceptual framework. Given a number of available technologies and the mission specifications, the state space approach helps design flowsheets featuring optimal process configurations, including those with stream connections in parallel, in series, or with recycles. We envision this software tool functioning as follows: given the mission duration, the crew size, water quality specifications, and the cost coefficients, the software will synthesize a water recovery system for the space station, requiring minimal user intervention. The following tasks need to be solved to achieve this goal: (1) formulate a problem statement that will be used to evaluate the advantages of a hybrid WRS over a single-technology WRS; (2) model several WRS technologies that can be employed in the space station; (3) propose a recycling network design methodology (since the WRS synthesis task is a recycling network design problem, it is essential to employ a systematic method in synthesizing this network); (4) develop a software implementation of this design methodology, design a hybrid system using this software, and compare the resulting WRS with a base-case WRS; and (5) create a user-friendly interface for this software tool.
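
    The linear 'cost' quantification in the first step can be sketched as follows (technology names, coefficients, and data are invented placeholders, not mission values):

        # Sketch: score water-reclamation technologies by a linear "cost" in
        # weight, volume, and power. All numbers are placeholders.

        COST_W, COST_V, COST_P = 1.0, 50.0, 0.5   # cost per kg, per m^3, per W

        techs = {   # name: (weight kg, volume m^3, power W) at the required capacity
            "vapor_compression": (120.0, 0.40, 900.0),
            "multifiltration":   (200.0, 0.60, 150.0),
            "reverse_osmosis":   (150.0, 0.30, 400.0),
        }

        def cost(w: float, v: float, p: float) -> float:
            return COST_W * w + COST_V * v + COST_P * p

        for name, spec in sorted(techs.items(), key=lambda kv: cost(*kv[1])):
            print(f"{name:18s} {cost(*spec):8.1f}")
        # A hybrid-synthesis step would then search series/parallel/recycle
        # combinations of these units against the same cost function.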

  11. Using Mach threads to control DSN operational sequences

    NASA Technical Reports Server (NTRS)

    Urista, Juan

    1993-01-01

    The Link Monitor and Control Operator Assistant prototype (LMCOA) is a state-of-the-art, semiautomated monitor and control system based on an object-oriented design. The purpose of the LMCOA prototyping effort is both to investigate new technology (such as artificial intelligence) to support automation and to evaluate advances in information systems toward developing systems that take advantage of that technology. The emergence of object-oriented design methodology has enabled a major change in how software is designed and developed. This paper describes how the object-oriented approach was used to design and implement the LMCOA and presents the results of operational testing. The LMCOA is implemented on a NeXT workstation using the Mach operating system and the Objective-C programming language.

  12. FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0

    NASA Technical Reports Server (NTRS)

    Lancraft, R. E.

    1985-01-01

    Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System) Version 3.0 is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures, while at the same time providing reliable state estimates. In this version of the program the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmers guide to aid in the maintenance, modification, and revision of the FINDS software.

  13. Operations management system advanced automation: Fault detection isolation and recovery prototyping

    NASA Technical Reports Server (NTRS)

    Hanson, Matt

    1990-01-01

    The purpose of this project is to address the global fault detection, isolation, and recovery (FDIR) requirements for Operations Management System (OMS) automation within the Space Station Freedom program. This shall be accomplished by developing a selected FDIR prototype for the Space Station Freedom distributed processing systems. The prototype shall be based on advanced automation methodologies, in addition to traditional software methods, to meet the requirements for automation. A secondary objective is to expand the scope of the prototyping to encompass multiple aspects of station-wide fault management (SWFM) as discussed in OMS requirements documentation.

  14. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    An experimental design methodology was carried out using the MODDE 6.0 software to study acetamiprid photodegradation as a function of the operating parameters, such as the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration effect on the acetamiprid degradation rate. On the other hand, the amount and type of catalyst used have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid, an environmental pesticide pollutant, via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to the different factors studied. The maximum degradation under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.

  15. Performance assessments of Android-powered military applications operating on tactical handheld devices

    NASA Astrophysics Data System (ADS)

    Weiss, Brian A.; Fronczek, Lisa; Morse, Emile; Kootbally, Zeid; Schlenoff, Craig

    2013-05-01

    Transformative Apps (TransApps) is a Defense Advanced Research Projects Agency (DARPA) funded program whose goal is to develop a range of militarily-relevant software applications ("apps") to enhance the operational-effectiveness of military personnel on (and off) the battlefield. TransApps is also developing a military apps marketplace to facilitate rapid development and dissemination of applications to address user needs by connecting engaged communities of endusers with development groups. The National Institute of Standards and Technology's (NIST) role in the TransApps program is to design and implement evaluation procedures to assess the performance of: 1) the various software applications, 2) software-hardware interactions, and 3) the supporting online application marketplace. Specifically, NIST is responsible for evaluating 50+ tactically-relevant applications operating on numerous Android™-powered platforms. NIST efforts include functional regression testing and quantitative performance testing. This paper discusses the evaluation methodologies employed to assess the performance of three key program elements: 1) handheld-based applications and their integration with various hardware platforms, 2) client-based applications and 3) network technologies operating on both the handheld and client systems along with their integration into the application marketplace. Handheld-based applications are assessed using a combination of utility and usability-based checklists and quantitative performance tests. Client-based applications are assessed to replicate current overseas disconnected (i.e. no network connectivity between handhelds) operations and to assess connected operations envisioned for later use. Finally, networked applications are assessed on handhelds to establish baselines of performance for when connectivity will be common usage.

  16. Multiattribute selection of acute stroke imaging software platform for Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial.

    PubMed

    Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A

    2013-04-01

    The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
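
    The scoring core of a weighted additive multi-attribute value method can be sketched as follows (attributes, weights, and scores are invented placeholders, not the trial's data); a crude weight-perturbation loop then checks whether the winner is robust:

        # Sketch: weighted additive multi-attribute value scoring with a simple
        # sensitivity check on the weights.

        weights = {"speed": 0.4, "accuracy": 0.35, "usability": 0.25}
        platforms = {              # normalized 0-1 value on each attribute
            "tool_A": {"speed": 0.9, "accuracy": 0.7, "usability": 0.6},
            "tool_B": {"speed": 0.6, "accuracy": 0.9, "usability": 0.7},
            "tool_C": {"speed": 0.7, "accuracy": 0.6, "usability": 0.9},
        }

        def value(scores: dict[str, float], w: dict[str, float]) -> float:
            return sum(w[a] * scores[a] for a in w)

        ranked = sorted(platforms, key=lambda p: value(platforms[p], weights), reverse=True)
        print("ranking:", ranked)

        # Sensitivity: does the winner survive +/-0.1 swings in each weight?
        winner = ranked[0]
        for a in weights:
            for delta in (-0.1, 0.1):
                w = dict(weights); w[a] = max(0.0, w[a] + delta)
                total = sum(w.values())
                w = {k: v / total for k, v in w.items()}    # renormalize
                challenger = max(platforms, key=lambda p: value(platforms[p], w))
                if challenger != winner:
                    print(f"winner flips to {challenger} when {a} weight shifts {delta:+.2f}")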

  17. Accounting for Uncertainties in Strengths of SiC MEMS Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Evans, Laura; Beheim, Glen; Trapp, Mark; Jadaan, Osama; Sharpe, William N., Jr.

    2007-01-01

    A methodology has been devised for accounting for uncertainties in the strengths of silicon carbide structural components of microelectromechanical systems (MEMS). The methodology enables prediction of the probabilistic strengths of complexly shaped MEMS parts using data from tests of simple specimens. This methodology is intended to serve as a part of a rational basis for designing SiC MEMS, supplementing methodologies that have been borrowed from the art of designing macroscopic brittle material structures. The need for this or a similar methodology arises as a consequence of the fundamental nature of MEMS and the brittle silicon-based materials of which they are typically fabricated. When tested to fracture, MEMS and structural components thereof show wide part-to-part scatter in strength. The methodology involves the use of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) software in conjunction with the ANSYS Probabilistic Design System (PDS) software to simulate or predict the strength responses of brittle material components while simultaneously accounting for the effects of variability of geometrical features on the strength responses. As such, the methodology involves the use of an extended version of the ANSYS/CARES/PDS software system described in Probabilistic Prediction of Lifetimes of Ceramic Parts (LEW-17682-1/4-1), Software Tech Briefs supplement to NASA Tech Briefs, Vol. 30, No. 9 (September 2006), page 10. The ANSYS PDS software enables the ANSYS finite-element-analysis program to account for uncertainty in the design-and-analysis process. The ANSYS PDS software accounts for uncertainty in material properties, dimensions, and loading by assigning probabilistic distributions to user-specified model parameters and performing simulations using various sampling techniques.
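
    The sampling loop such a coupled analysis performs can be sketched schematically (a one-line stress formula stands in for the finite-element model, and every parameter value is an invented placeholder):

        # Sketch: Monte Carlo propagation of geometric uncertainty into Weibull
        # failure probability. The stress formula is a toy stand-in for an FE
        # model; all parameters are illustrative placeholders.
        import math, random

        M_WEIBULL = 8.0          # Weibull modulus (scatter in strength)
        SIGMA_0 = 450.0e6        # characteristic strength, Pa
        LOAD = 9.0e-3            # applied force, N

        def stress(width_m: float, thick_m: float) -> float:
            # Toy stand-in for the finite-element result: sigma = F / A
            return LOAD / (width_m * thick_m)

        def p_fail(sigma: float) -> float:
            # Two-parameter Weibull: P_f = 1 - exp(-(sigma / sigma_0)^m)
            return 1.0 - math.exp(-((sigma / SIGMA_0) ** M_WEIBULL))

        random.seed(1)
        samples = []
        for _ in range(10_000):
            w = random.gauss(20e-6, 0.5e-6)   # nominal 20 um width, 0.5 um std dev
            t = random.gauss(2e-6, 0.1e-6)    # nominal 2 um thickness, 0.1 um std dev
            samples.append(p_fail(stress(w, t)))

        print(f"mean P_f = {sum(samples) / len(samples):.3e}")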

  18. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  19. Some design constraints required for the assembly of software components: The incorporation of atomic abstract types into generically structured abstract types

    NASA Technical Reports Server (NTRS)

    Johnson, Charles S.

    1986-01-01

    It is nearly axiomatic that taking the greatest advantage of the useful features available in a development system, while avoiding the negative interactions of those features, requires the exercise of a design methodology which constrains their use. A major design support feature of the Ada language is abstraction: for data, functions, processes, resources, and system elements in general. Atomic abstract types can be created in packages defining those private types and all of the overloaded operators, functions, and hidden data required for their use in an application. Generically structured abstract types can be created in generic packages defining those structured private types, as buildups from the user-defined data types which are input as parameters. A study is made of the design constraints required for software incorporating either atomic or generically structured abstract types, if the integration of software components based on them is to be subsequently performed. The impact of these techniques on the reusability of software and the creation of project-specific software support environments is also discussed.

  20. Modifications to give HOPE/MDC 2.0 the capability to solve for or consider vent forces: Mission planning, mission analysis, and software formulation

    NASA Technical Reports Server (NTRS)

    Zyla, L. V.

    1979-01-01

    The modifications necessary to give the Houston Operations Predictor/Estimator (HOPE) program the capability to solve for or consider vent forces for orbit determination are described. The model implemented for solving for vent forces is described, along with the integrator problems encountered. A summary derivation of the mathematical principles applicable to the solve/consider methodology is provided.

  1. Implementation of Phased Array Antenna Technology Providing a Wireless Local Area Network to Enhance Port Security and Maritime Interdiction Operations

    DTIC Science & Technology

    2009-09-01

    boarding team, COTS, WLAN, smart antenna, OpenVPN application, wireless base station, OFDM, latency, point-to-point wireless link. ... a network frame at Layer 2 has already been secured by encryption at a higher level. OpenVPN is open source software that provides a VPN

  2. Development of a methodology for assessing the safety of embedded software systems

    NASA Technical Reports Server (NTRS)

    Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.

    1993-01-01

    A Dynamic Flowgraph Methodology (DFM), based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety, is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.

  3. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

    Today's software for aerospace systems is typically very complex, owing to the increasing number of features as well as the high demands for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. Handling the software complexity requires a structured development process, and compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, verification techniques are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, provide full test coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.

  4. Sensors and systems for space applications: a methodology for developing fault detection, diagnosis, and recovery

    NASA Astrophysics Data System (ADS)

    Edwards, John L.; Beekman, Randy M.; Buchanan, David B.; Farner, Scott; Gershzohn, Gary R.; Khuzadi, Mbuyi; Mikula, D. F.; Nissen, Gerry; Peck, James; Taylor, Shaun

    2007-04-01

    Human space travel is inherently dangerous, and hazardous conditions will exist. Real-time health monitoring of critical subsystems is essential for providing a safe abort timeline in the event of a catastrophic subsystem failure. In this paper, we discuss a practical and cost-effective process for developing critical subsystem failure detection, diagnosis and response (FDDR). We also present the results of a real-time health monitoring simulation of a propellant ullage pressurization subsystem failure. The health monitoring development process identifies hazards, isolates hazard causes, defines software partitioning requirements, and quantifies software algorithm development. The process provides a means to establish the number and placement of sensors necessary for real-time health monitoring. We discuss how health monitoring software tracks subsystem control commands, interprets off-nominal operational sensor data, predicts failure propagation timelines, corroborates failure predictions, and formats failure protocols.

  5. Methodology to model the energy and greenhouse gas emissions of electronic software distributions.

    PubMed

    Williams, Daniel R; Tang, Yinshan

    2012-01-17

    A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To counteract the use of high-level, top-down modeling efforts and to increase result accuracy, the focus was placed upon device details and data routes. To compare ESD against a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used over physical distribution. The results also highlighted the importance of server efficiency and utilization methods.
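
    The bottom-up accounting the model performs can be sketched for a single download. All figures below are illustrative assumptions, not the paper's calibrated values:

    ```python
    # Hedged sketch of a per-download ESD energy/GHG estimate: apportioned
    # server energy + per-hop network transfer + end-user device during
    # browsing/downloading, converted to CO2e. All numbers are assumed.

    GB = 1.0                     # download size (GB)
    SERVER_KWH_PER_GB = 0.01     # assumed apportioned server energy (kWh/GB)
    NET_KWH_PER_GB_HOP = 0.002   # assumed energy per GB per network device
    HOPS = 12                    # assumed number of data hops
    USER_W = 60.0                # assumed PC power while browsing/downloading (W)
    DURATION_H = 0.25            # assumed browsing + download time (h)
    GRID_KG_CO2E_PER_KWH = 0.5   # assumed grid carbon intensity

    energy_kwh = (GB * SERVER_KWH_PER_GB
                  + GB * NET_KWH_PER_GB_HOP * HOPS
                  + USER_W / 1000.0 * DURATION_H)
    print(f"energy: {energy_kwh:.3f} kWh, "
          f"emissions: {energy_kwh * GRID_KG_CO2E_PER_KWH * 1000:.0f} g CO2e")
    ```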

  6. A stochastic optimal feedforward and feedback control methodology for superagility

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.

    1992-01-01

    A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law, such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined: the feedforward control law is designed to optimize one cost function, while the feedback control law optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots thus provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package, ACET.
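
    The central idea, decoupling the two designs, can be sketched with standard tools. The following is not the SOFFT algorithm; it pairs a conventional discrete LQR feedback gain with an independently chosen feedforward gain, on an invented plant, to show the separation of the two objectives:

    ```python
    # Hedged sketch of separate feedforward/feedback design: K minimizes a
    # regulation/noise cost (discrete LQR), while N is chosen independently
    # for steady-state command tracking. Toy plant; not the SOFFT method.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 0.9]])   # toy discrete-time plant
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])

    Q = np.diag([10.0, 1.0])                  # feedback cost: state vs effort
    R = np.array([[1.0]])
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x + N r

    # Feedforward designed separately: unit DC gain from command to output.
    Acl = A - B @ K
    N = 1.0 / (C @ np.linalg.solve(np.eye(2) - Acl, B))[0, 0]

    x, r = np.zeros((2, 1)), 1.0
    for _ in range(50):
        x = Acl @ x + B * (N * r)
    print(f"output after 50 steps: {(C @ x)[0, 0]:.3f} (command = {r})")
    ```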

  7. Statistics and Informatics in Space Astrophysics

    NASA Astrophysics Data System (ADS)

    Feigelson, E.

    2017-12-01

    The interest in statistical and computational methodology has grown rapidly in space-based astrophysics, paralleling the growth seen in Earth remote sensing. There is widespread agreement that the scientific interpretation of the cosmic microwave background, the discovery of exoplanets, and the classification of multiwavelength surveys are too complex to be accomplished with traditional techniques. NASA operates several well-functioning Science Archive Research Centers providing 0.5 PBy datasets to the research community. These databases are integrated with full-text journal articles in the NASA Astrophysics Data System (200K pageviews/day). Data products use interoperable formats and protocols established by the International Virtual Observatory Alliance. NASA supercomputers also support complex astrophysical models of systems such as accretion disks and planet formation. Academic researcher interest in methodology has grown significantly in areas such as Bayesian inference and machine learning, and statistical research is underway to treat problems such as irregularly spaced time series and astrophysical model uncertainties. Several scholarly societies have created interest groups in astrostatistics and astroinformatics. Improvements are needed on several fronts: community education in advanced methodology is not keeping pace with research needs; statistical procedures within NASA science analysis software are sometimes not optimal, and pipeline development may not use modern software engineering techniques; and NASA offers few grant opportunities supporting research in astroinformatics and astrostatistics.

  8. Software for Probabilistic Risk Reduction

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    A computer program implements a methodology, denoted probabilistic risk reduction, that is intended to aid in planning the development of complex software and/or hardware systems. This methodology integrates two complementary prior methodologies: (1) that of probabilistic risk assessment and (2) a risk-based planning methodology, implemented in a prior computer program known as Defect Detection and Prevention (DDP), in which multiple requirements and the beneficial effects of risk-mitigation actions are taken into account. The present methodology and the software are able to accommodate both process knowledge (notably of the efficacy of development practices) and product knowledge (notably of the logical structure of a system, the development of which one seeks to plan). Estimates of the costs and benefits of a planned development can be derived. Functional and non-functional aspects of software can be taken into account, and trades made among them. It becomes possible to optimize the planning process in the sense that it becomes possible to select the best suite of process steps and design choices to maximize the expectation of success while remaining within budget.
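
    The optimization at the end can be sketched as a search over mitigation-action subsets under a budget. The failure modes, numbers, and exhaustive search below are invented for illustration and are far simpler than the DDP/PRA models the tool integrates:

    ```python
    # Hedged sketch: pick the affordable set of risk-mitigation actions that
    # maximizes the probability no failure mode defeats the project. All
    # probabilities, costs, and reduction factors are illustrative.
    from itertools import combinations

    MODES = {"sw_defect": 0.30, "hw_fault": 0.10, "integration": 0.20}
    ACTIONS = {   # action -> (cost, {failure mode: p_fail multiplier})
        "code_inspections":  (3.0, {"sw_defect": 0.5}),
        "hw_burn_in":        (4.0, {"hw_fault": 0.3}),
        "early_integration": (5.0, {"sw_defect": 0.8, "integration": 0.4}),
    }
    BUDGET = 8.0

    def p_success(chosen):
        p = 1.0
        for mode, p_fail in MODES.items():
            for a in chosen:                 # each action scales the mode's risk
                p_fail *= ACTIONS[a][1].get(mode, 1.0)
            p *= (1.0 - p_fail)
        return p

    best = max((set(c) for r in range(len(ACTIONS) + 1)
                for c in combinations(ACTIONS, r)
                if sum(ACTIONS[a][0] for a in c) <= BUDGET),
               key=p_success)
    print(best, f"P(success) = {p_success(best):.3f}")
    ```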

  9. Optimal reproducibility of gated sestamibi and thallium myocardial perfusion study left ventricular ejection fractions obtained on a solid-state CZT cardiac camera requires operator input.

    PubMed

    Cherk, Martin H; Ky, Jason; Yap, Kenneth S K; Campbell, Patrina; McGrath, Catherine; Bailey, Michael; Kalff, Victor

    2012-08-01

    To evaluate the reproducibility of serial re-acquisitions of gated Tl-201 and Tc-99m sestamibi left ventricular ejection fraction (LVEF) measurements obtained on a new-generation solid-state cardiac camera system during myocardial perfusion imaging, and to assess the importance of manual operator optimization of left ventricular wall tracking. Resting blinded automated (auto) and manual operator-optimized (opt) LVEF measurements were measured using ECT toolbox (ECT) and Cedars-Sinai QGS software in two separate cohorts of 55 Tc-99m sestamibi (MIBI) and 50 thallium (Tl-201) myocardial perfusion studies (MPS) acquired in both supine and prone positions on a cadmium zinc telluride (CZT) solid-state camera system. Resting supine and prone automated LVEF measurements were similarly obtained in a further separate cohort of 52 gated cardiac blood pool scans (GCBPS) for validation of methodology and comparison. Bland-Altman, chi-squared, and Levene's equality-of-variance tests were used, as appropriate, to analyse the resultant data comparisons. For all radiotracer and software combinations, manual checking and optimization of valve planes (+/- centre radius with ECT software) resulted in significant improvement in MPS LVEF reproducibility that approached that of planar GCBPS. No difference was demonstrated between optimized MIBI/Tl-201 QGS and planar GCBPS LVEF reproducibility (P = .17 and P = .48, respectively). ECT required significantly more manual optimization than QGS software in both supine and prone positions, independent of the radiotracer used (P < .02). Reproducibility of gated sestamibi and Tl-201 LVEF measurements obtained during myocardial perfusion imaging with the ECT toolbox or QGS software packages using a new-generation solid-state cardiac camera with improved image quality approaches that of planar GCBPS; however, it requires visual quality control and operator optimization of left ventricular wall tracking for best results. Using this superior cardiac technology, Tl-201 reproducibility also appears at least equivalent to sestamibi for measuring LVEF.
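
    For readers unfamiliar with the reproducibility statistics named above, a Bland-Altman comparison reduces to a bias and 95% limits of agreement between paired measurements. The sketch below uses made-up LVEF pairs, not the study's data:

    ```python
    # Hedged sketch of a Bland-Altman agreement analysis between two serial
    # LVEF acquisitions. The six paired values are invented for illustration.
    import numpy as np

    lvef_a = np.array([55.0, 62.0, 48.0, 70.0, 58.0, 65.0])  # acquisition 1 (%)
    lvef_b = np.array([57.0, 60.0, 50.0, 69.0, 59.0, 66.0])  # re-acquisition (%)

    diff = lvef_a - lvef_b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
    print(f"bias = {bias:.2f}%, limits of agreement = "
          f"{bias - loa:.2f}% to {bias + loa:.2f}%")
    ```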

  10. Investigating the application of AOP methodology in development of Financial Accounting Software using Eclipse-AJDT Environment

    NASA Astrophysics Data System (ADS)

    Sharma, Amita; Sarangdevot, S. S.

    2010-11-01

    Aspect-Oriented Programming (AOP) methodology has been investigated in the development of real-world business application software—Financial Accounting Software. The Eclipse-AJDT environment has been used as open-source enhanced IDE support for programming in the AOP language AspectJ. Crosscutting concerns have been identified and modularized as aspects. This reduces the complexity of the design considerably, owing to the elimination of code scattering and tangling. Improvements in modularity, quality and performance are achieved. The study concludes that AOP methodology in the Eclipse-AJDT environment offers powerful support for the modular design and implementation of real-world, quality business software.
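
    The study's examples are in AspectJ; as a language-neutral illustration of the same idea, a Python decorator below plays the role of an aspect, gathering a logging concern into one module instead of scattering it through the accounting functions. The function names are invented:

    ```python
    # Hedged analogue of an AOP aspect: the crosscutting "audit logging"
    # concern lives in one decorator and is woven into many join points,
    # instead of being scattered and tangled through the business code.
    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def audited(fn):                # the "aspect": defined once, applied widely
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.info("entering %s args=%s", fn.__name__, args)
            result = fn(*args, **kwargs)
            logging.info("exiting %s -> %s", fn.__name__, result)
            return result
        return wrapper

    @audited
    def post_journal_entry(account, amount):
        return {"account": account, "amount": amount, "status": "posted"}

    @audited
    def close_period(period):
        return f"{period} closed"

    post_journal_entry("1200-AR", 250.0)
    close_period("2010-Q3")
    ```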

  11. Application of an integrated multi-criteria decision making AHP-TOPSIS methodology for ETL software selection.

    PubMed

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    A wide range of ETL (Extract, Transform and Load) software is available today, constituting a major investment market. Each ETL tool uses its own techniques for extracting, transforming and loading data into a data warehouse, which makes evaluating ETL software very difficult. Yet choosing the right ETL software is critical to the success or failure of any Business Intelligence project. Because many factors affect the selection of ETL software, the selection process can be treated as a complex multi-criteria decision-making (MCDM) problem. In this study, a decision-making methodology is designed that employs two well-known MCDM techniques, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). AHP is used to analyze the structure of the ETL software selection problem and to obtain weights for the selected criteria; the TOPSIS technique is then used to calculate the alternatives' ratings. An example is given to illustrate the proposed methodology. Finally, a software prototype demonstrating both methods is implemented.
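
    The two-stage pipeline is mechanical enough to sketch directly. The criteria, pairwise judgments, and tool scores below are invented; the weight derivation uses the common geometric-mean approximation to AHP's eigenvector method:

    ```python
    # Hedged sketch of AHP -> TOPSIS: derive criteria weights from a pairwise
    # comparison matrix, then rank alternatives by closeness to the ideal.
    import numpy as np

    # AHP: pairwise comparisons of 3 criteria (cost, performance, usability)
    pairwise = np.array([[1.0, 1/3, 2.0],
                         [3.0, 1.0, 4.0],
                         [0.5, 1/4, 1.0]])
    w = np.prod(pairwise, axis=1) ** (1 / 3)   # geometric mean of each row
    w /= w.sum()                               # normalized criteria weights

    # TOPSIS: rows = candidate ETL tools, cols = criterion scores
    X = np.array([[7.0, 8.0, 6.0],
                  [5.0, 9.0, 8.0],
                  [8.0, 6.0, 7.0]])
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each column
    V = R * w
    benefit = np.array([False, True, True])    # lower cost is better
    ideal = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - worst, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    print("ranking (best first):", np.argsort(-closeness))
    ```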

  12. Software engineering and Ada in design

    NASA Technical Reports Server (NTRS)

    Oneill, Don

    1986-01-01

    Modern software engineering promises significant reductions in software costs and improvements in software quality. The Ada language is the focus for these software methodology and tool improvements. The IBM FSD approach is examined, including the software engineering practices that guide the systematic design and development of software products and the management of the software process. The revised Ada design language adaptation is presented. This four-level design methodology is detailed, including the purpose of each level, the management strategy that integrates the software design activity with the program milestones, and the technical strategy that maps the Ada constructs to each level of design. A complete description of each design level is provided, along with specific design language recording guidelines for each level. Finally, some testimony is offered on education, tools, architecture, and metrics resulting from project use of the four-level Ada design language adaptation.

  13. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
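
    The core statistical move, propagating parameter uncertainty through a failure model to obtain a failure-probability estimate, can be sketched by Monte Carlo. The toy crack-growth-style model and every distribution below are illustrative stand-ins, not the documented PFA software:

    ```python
    # Hedged sketch of the PFA idea for one failure mode: sample uncertain
    # inputs, push each sample through an engineering life model, and count
    # the fraction failing within the mission. All numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    a0 = rng.lognormal(mean=np.log(0.002), sigma=0.3, size=N)   # flaw size (m)
    C = rng.lognormal(mean=np.log(1e-14), sigma=0.4, size=N)    # material const.
    stress = rng.normal(300e6, 20e6, size=N)                    # load (Pa)

    a_crit = 0.01                     # critical flaw size (m), assumed
    life = (a_crit - a0) / (C * (stress / 1e6) ** 3)   # toy cycles-to-failure

    MISSION_CYCLES = 2.0e4
    print(f"estimated failure probability: {np.mean(life < MISSION_CYCLES):.4f}")
    ```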

  14. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  15. Intelligent systems/software engineering methodology - A process to manage cost and risk

    NASA Technical Reports Server (NTRS)

    Friedlander, Carl; Lehrer, Nancy

    1991-01-01

    A systems development methodology is discussed that has been successfully applied to the construction of a number of intelligent systems. This methodology is a refinement of both evolutionary and spiral development methodologies. It is appropriate for development of intelligent systems. The application of advanced engineering methodology to the development of software products and intelligent systems is an important step toward supporting the transition of AI technology into aerospace applications. A description of the methodology and the process model from which it derives is given. Associated documents and tools are described which are used to manage the development process and record and report the emerging design.

  16. Expert system development methodology and the transition from prototyping to operations: FIESTA, a case study

    NASA Technical Reports Server (NTRS)

    Happell, Nadine; Miksell, Steve; Carlisle, Candace

    1989-01-01

    A major barrier in taking expert systems from prototype to operational status involves instilling end-user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of the development of an expert system. The end-user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.

  17. Expert system development methodology and the transition from prototyping to operations - Fiesta, a case study

    NASA Technical Reports Server (NTRS)

    Happell, Nadine; Miksell, Steve; Carlisle, Candace

    1989-01-01

    A major barrier in taking expert systems from prototype to operational status involves instilling end-user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of the development of an expert system. The end-user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.

  18. A Case Study of Measuring Process Risk for Early Insights into Software Safety

    NASA Technical Reports Server (NTRS)

    Layman, Lucas; Basili, Victor; Zelkowitz, Marvin V.; Fisher, Karen L.

    2011-01-01

    In this case study, we examine software safety risk in three flight hardware systems in NASA's Constellation spaceflight program. We applied our Technical and Process Risk Measurement (TPRM) methodology to the Constellation hazard analysis process to quantify the technical and process risks involving software safety in the early design phase of these projects. We analyzed 154 hazard reports and collected metrics to measure the prevalence of software in hazards and the specificity of descriptions of software causes of hazardous conditions. We found that for 49-70% of the 154 hazardous conditions, software either could be a cause or was involved in the prevention of the hazardous condition. We also found that 12-17% of the 2,013 hazard causes involved software, and that 23-29% of all causes had a software control. The application of the TPRM methodology identified process risks in the application of the hazard analysis process itself that may lead to software safety risk.

  19. Enterprise resource planning (ERP) implementation using the value engineering methodology and Six Sigma tools

    NASA Astrophysics Data System (ADS)

    Leu, Jun-Der; Lee, Larry Jung-Hsing

    2017-09-01

    Enterprise resource planning (ERP) is a software solution that integrates the operational processes of the business functions of an enterprise. However, implementing ERP systems is a complex process. In addition to the technical issues, companies must address problems associated with business process re-engineering, time and budget control, and organisational change. Numerous industrial studies have shown that the failure rate of ERP implementation is high, even for well-designed systems. Thus, ERP projects typically require a clear methodology to support the project execution and effectiveness. In this study, we propose a theoretical model for ERP implementation. The value engineering (VE) method forms the basis of the proposed framework, which integrates Six Sigma tools. The proposed framework encompasses five phases: knowledge generation, analysis, creation, development and execution. In the VE method, potential ERP problems related to software, hardware, consultation and organisation are analysed in a group-decision manner and in relation to value, and Six Sigma tools are applied to avoid any project defects. We validate the feasibility of the proposed model by applying it to an international manufacturing enterprise in Taiwan. The results show improvements in customer response time and operational efficiency in terms of work-in-process and turnover of materials. Based on the evidence from the case study, the theoretical framework is discussed together with the study's limitations and suggestions for future research.

  20. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The suggested methodology incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment, developed using this methodology, are presented.
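
    Most of the resource models reviewed share a simple parametric core: effort grows as a power of size, scaled by environment factors. The sketch below shows that general form with illustrative coefficients, not SEL-calibrated ones:

    ```python
    # Hedged sketch of a parametric effort model of the general form used by
    # the cost models reviewed here. Coefficients are illustrative only.

    def effort_staff_months(ksloc, a=1.5, b=1.05, env_factor=1.0):
        """Estimated effort (staff-months) for ksloc thousand source lines."""
        return a * ksloc ** b * env_factor

    for size in (10, 50, 100):
        print(f"{size:>4} KSLOC -> {effort_staff_months(size):6.1f} staff-months")
    ```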

  1. Integrated Test Approach

    NASA Technical Reports Server (NTRS)

    Cotton, Will; Liechty, John

    2015-01-01

    This paper describes a testing methodology undertaken on the Facilities Development and Operations Contract (FDOC) by Lockheed Martin. The methodology was defined with the intent of reducing project schedule time so that NASA's Johnson Space Center (JSC) could deliver the Mission Control Center 21 (MCC21) project as quickly as possible. The "21" represents the 21st century: NASA JSC is updating its control center with new technology and operational concepts in order to support NASA customers wanting to use control center assets for space vehicle operations. In collaboration with the NASA customer, a new test concept was conceived early during MCC21 project planning with the goal of reducing project delivery time. One enabler that could help reduce delivery time was testing. Within the project, testing was performed by two entities: software development, responsible for subsystem testing, and system test, responsible for system integration testing. The MCC21 project undertook a deliberate review of testing to determine how it could be performed differently to realize an overall reduction in test time, supporting the goal of a more rapid project delivery.

  2. The need for a comprehensive expert system development methodology

    NASA Technical Reports Server (NTRS)

    Baumert, John; Critchfield, Anna; Leavitt, Karen

    1988-01-01

    In a traditional software development environment, the introduction of standardized approaches has led to higher-quality, maintainable products on the technical side and greater visibility into the status of the effort on the management side. This study examined expert system development to determine whether it differed enough from traditional systems to warrant a reevaluation of current software development methodologies. Its purpose was to identify areas of similarity with traditional software development and areas requiring tailoring to the unique needs of expert systems. A second purpose was to determine whether existing expert system development methodologies meet the needs of expert system development, management, and maintenance personnel. The study consisted of a literature search and personal interviews. It determined that existing methodologies and approaches to developing expert systems are neither comprehensive nor easily applied, especially to cradle-to-grave system development. As a result, requirements were derived for an expert system development methodology, and an initial annotated outline was produced for such a methodology.

  3. A Real-Time Telemetry Simulator of the IUS Spacecraft

    NASA Technical Reports Server (NTRS)

    Drews, Michael E.; Forman, Douglas A.; Baker, Damon M.; Khazoyan, Louis B.; Viazzo, Danilo

    1998-01-01

    A real-time telemetry simulator of the IUS spacecraft has recently entered operation to train Flight Control Teams for the launch of the AXAF telescope from the Shuttle. The simulator has proven to be a successful higher fidelity implementation of its predecessor, while affirming the rapid development methodology used in its design. Although composed of COTS hardware and software, the system simulates the full breadth of the mission: Launch, Pre-Deployment-Checkout, Burn Sequence, and AXAF/IUS separation. Realism is increased through patching the system into the operations facility to simulate IUS telemetry, Shuttle telemetry, and the Tracking Station link (commands and status message).

  4. Towards an operational fault isolation expert system for French telecommunication satellite Telecom 2

    NASA Astrophysics Data System (ADS)

    Haziza, M.

    1990-10-01

    The DIAMS satellite fault isolation expert system shell concept is described. The project, initiated in 1985, has led to the development of a prototype Expert System (ES) dedicated to the Telecom 1 attitude and orbit control system. The prototype ES has been installed in the Telecom 1 satellite control center and evaluated by Telecom 1 operations. The development of a fault isolation ES covering a whole spacecraft (the French telecommunication satellite Telecom 2) is currently being undertaken. Full scale industrial applications raise stringent requirements in terms of knowledge management and software development methodology. The approach used by MATRA ESPACE to face this challenge is outlined.

  5. Thermophysics modeling of an infrared detector cryochamber for transient operational scenario

    NASA Astrophysics Data System (ADS)

    Singhal, Mayank; Singhal, Gaurav; Verma, Avinash C.; Kumar, Sushil; Singh, Manmohan

    2016-05-01

    An infrared (IR) detector is essentially a transducer capable of converting radiant energy in the infrared regime into a measurable form. The benefit of infrared radiation is that it facilitates viewing objects in the dark or through obscured conditions by detecting the infrared energy they emit. One of the most significant applications of IR detector systems is target acquisition and tracking for projectile systems; IR detectors also find widespread application in industry and the commercial market. The performance of an infrared detector is sensitive to temperature, and detectors perform best when cooled to cryogenic temperatures in the range of roughly 120 K. However, the necessity to operate in such cryogenic regimes increases the complexity of IR detector applications. This entails a need for detailed thermophysics analysis to determine the actual cooling load specific to the application, including its interaction with the environment, enabling the design of cooling methodologies appropriate to specific scenarios. The focus of the present work is to develop a robust thermophysical numerical methodology for predicting IR cryochamber behavior under transient conditions, the most critical scenario, taking into account all relevant heat loads including radiation in its original form. The advantage of the developed code over existing commercial software (COMSOL, ANSYS, etc.) is that it handles gas conduction together with radiation terms effectively, employing ubiquitous software such as MATLAB; it also requires much smaller computational resources and is significantly less time intensive. It provides physically correct results, enabling thermal characterization of the cryochamber geometry in conjunction with an appropriate cooling methodology. The code has subsequently been validated experimentally: the observed cooling characteristics are in close agreement with the results predicted by the developed model, proving its efficacy.
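
    The energy balance the code integrates can be shown in lumped, single-node form: a cold stage pulled down by a cryocooler while picking up radiative and residual-gas-conduction loads. Geometry, properties, and loads below are assumed for illustration; the paper's model is spatially resolved:

    ```python
    # Hedged single-node sketch of a transient cooldown with radiation and
    # gas-conduction heat loads against a constant cryocooler lift.
    SIGMA = 5.670e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
    m_cp = 50.0               # cold-stage thermal mass (J/K), assumed
    A, eps = 0.01, 0.05       # radiating area (m^2), effective emissivity
    G_gas = 2.0e-4            # residual-gas conductance (W/K), assumed
    Q_cooler = 1.0            # cryocooler lift (W), assumed constant
    T_wall, T = 300.0, 300.0  # wall and initial detector temperature (K)

    t, dt = 0.0, 1.0          # explicit time march, 1 s steps
    while T > 120.0 and t < 24 * 3600:
        q_rad = eps * SIGMA * A * (T_wall**4 - T**4)  # load from warm wall
        q_gas = G_gas * (T_wall - T)                  # gas-conduction load
        T += dt * (q_rad + q_gas - Q_cooler) / m_cp
        t += dt
    print(f"reached {T:.1f} K after {t / 3600:.2f} h")
    ```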

  6. Human factor engineering based design and modernization of control rooms with new I and C systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larraz, J.; Rejas, L.; Ortega, F.

    2012-07-01

    Instrumentation and Control (I and C) systems of the latest nuclear power plants are based on the use of digital technology, distributed control systems, and the integration of information in data networks (Distributed Control and Instrumentation Systems). This has repercussions for Control Rooms (CRs), where the operations and monitoring interfaces correspond to these systems. These technologies are also used in modernizing the I and C systems of currently operating nuclear power plants. The new interfaces provide additional capabilities for operation and supervision, as well as a high degree of flexibility, versatility and reliability. Examples include the implementation of solutions such as compact stations, high-level supervision screens, overview displays, computerized procedures, new operational support systems, and intelligent alarm-processing systems in the modernized Man-Machine Interface (MMI). These changes in the MMI are accompanied by newly added software (SW) controls and new solutions in automation. Tecnatom has been leading various projects in this area for several years, both in Asian countries and in the United States, using in all cases international standards from which Tecnatom's own methodologies have been developed and optimized. The experience acquired in applying this methodology to the design of new control rooms is to a large extent also applicable to the modernization of current control rooms. An adequate design of the interface between the operator and the systems will facilitate safe operation, contribute to the prompt identification of problems, and help in the distribution of tasks and communications between the different members of the operating shift. Based on Tecnatom's experience in the field, this article presents the methodological approach used as well as the most relevant aspects of this kind of project. (authors)

  7. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  8. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, a robotic assistant, crew in a local habitat, and the mission support team. Software processes ('agents') implemented in the Brahms language run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions, making operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with the Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  9. Implementing Software Safety in the NASA Environment

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha S.; Radley, Charles F.

    1994-01-01

    Until recently, NASA did not consider allowing computers total control of flight systems. Human operators, via hardware, have constituted the ultimate safety control. In an attempt to reduce costs, NASA has come to rely more and more heavily on computers and software to control space missions. (For example, software is now planned to control most of the operational functions of the International Space Station.) Thus the need for systematic software safety programs has become crucial for mission success. Concurrent engineering principles dictate that safety should be designed into software up front, not tested into the software after the fact. 'Cost of Quality' studies have statistics and metrics to prove the value of building quality and safety into the development cycle. Unfortunately, most software engineers are not familiar with designing for safety, and most safety engineers are not software experts. Software written to specifications which have not been safety analyzed is a major source of computer-related accidents. Safer software is achieved step by step throughout the system and software life cycle. It is a process that includes requirements definition, hazard analyses, formal software inspections, safety analyses, testing, and maintenance. The greatest emphasis is placed on clearly and completely defining system and software requirements, including safety and reliability requirements. Unfortunately, development and review of requirements are the weakest link in the process. While some of the more academic methods, e.g., mathematical models, may help bring about safer software, this paper proposes the use of currently approved software methodologies and sound software and assurance practices to show how, to a large degree, safety can be designed into software from the start. NASA's approach today is to first conduct a preliminary system hazard analysis (PHA) during the concept and planning phase of a project. This determines the overall hazard potential of the system to be built. Shortly thereafter, as the system requirements are being defined, the second iteration of hazard analyses takes place: the systems hazard analysis (SHA). During the systems requirements phase, decisions are made as to which functions of the system will be the responsibility of software. This is the most critical time to affect the safety of the software. From this point, software safety analyses as well as software engineering practices are the main focus for assuring safe software. While many of the steps proposed in this paper seem like simply sound engineering practices, they are the best technical and most cost-effective means to assure safe software within a safe system.

  10. A Framework of the Use of Information in Software Testing

    ERIC Educational Resources Information Center

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  11. Formal specification and mechanical verification of SIFT - A fault-tolerant flight control system

    NASA Technical Reports Server (NTRS)

    Melliar-Smith, P. M.; Schwartz, R. L.

    1982-01-01

    The paper describes the methodology being employed to demonstrate rigorously that the SIFT (software-implemented fault-tolerant) computer meets its requirements. The methodology uses a hierarchy of design specifications, expressed in the mathematical domain of multisorted first-order predicate calculus. The most abstract of these, from which almost all details of mechanization have been removed, represents the requirements on the system for reliability and intended functionality. Successive specifications in the hierarchy add design and implementation detail until the PASCAL programs implementing the SIFT executive are reached. A formal proof that a SIFT system in a 'safe' state operates correctly despite the presence of arbitrary faults has been completed all the way from the most abstract specifications to the PASCAL program.
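
    The proof structure can be summarized by the standard hierarchical-refinement obligation. The notation below is generic, not SRI's exact formulation: with specifications S_0 (the abstract requirements) down to S_n (the PASCAL code), each level i+1 is related to level i by an abstraction mapping, and one discharges, per level,

    ```latex
    % Generic refinement obligation (illustrative notation, not SRI's):
    % \alpha_i maps concrete (level i+1) states to abstract (level i) states.
    \[
    \forall s, s' :\;
      \mathit{Inv}_{i+1}(s) \,\wedge\, \mathit{Step}_{i+1}(s, s')
      \;\Longrightarrow\;
      \mathit{Step}_i\bigl(\alpha_i(s), \alpha_i(s')\bigr)
      \,\wedge\, \mathit{Inv}_{i+1}(s')
    \]
    ```

    Chaining these obligations from the code level up to the most abstract specification is what yields the end-to-end claim that a SIFT system in a safe state operates correctly despite faults.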

  12. Master Pump Shutdown MPS Software Quality Assurance Plan (SQAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BEVINS, R.R.

    2000-09-20

    The MPSS Software Quality Assurance Plan (SQAP) describes the tools and strategy used in the development of the MPSS software. The document also describes the methodology for controlling and managing changes to the software.

  13. Has computational creativity successfully made it "Beyond the Fence" in musical theatre?

    NASA Astrophysics Data System (ADS)

    Jordanous, Anna

    2017-10-01

    A significant test for software is to task it with replicating human performance, as was done recently with creative software in the commercial project Beyond the Fence (undertaken for the television documentary Computer Says Show). The remit of this project was to use computer software as much as possible to produce "the world's first computer-generated musical". Several creative systems were used to generate the musical, which was performed in London's West End in 2016. This paper considers the challenge of evaluating the project. Current computational creativity evaluation methods are ill-suited to evaluating projects that involve creative input from multiple systems and people. Following recent inspiration within computational creativity research from interaction design, the DECIDE evaluation framework is applied here to evaluate the Beyond the Fence project. The evaluation finds that the project was reasonably successful at using computational generation to produce a credible musical. Lessons have been learned for future computational creativity projects, though, particularly regarding affording creative software more agency and enabling software to interact with other creative partners. Upon reflection, the DECIDE framework emerges as a useful evaluation "checklist" (if not a tangible operational methodology) for evaluating multiple creative systems participating in a creative task.

  14. A Software Planning and Development Methodology with Resource Allocation Capability

    DTIC Science & Technology

    1986-01-01

    ACKNOWLEDGEMENTS (scanned fragment): "There are many people who must be acknowledged for the support they provided during my graduate program at Texas A&M ..." The methodology addresses resource allocation across acquisition, research/development, and operations/maintenance sources. The remainder of the scanned text is illegible apart from reference fragments, including an unpublished ICAM Industry Days address (New Orleans, Louisiana, May 1982) and Ledbetter, William N., et al., "Education ..."

  15. Embedded control system for computerized franking machine

    NASA Astrophysics Data System (ADS)

    Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.

    2007-12-01

    This paper presents a novel control system for a franking machine. A methodology for operating the franking machine through functional controls, consisting of connection, configuration and the franking electromechanical drive, is studied. A set of enabling technologies for synthesizing postage management software architectures on microprocessor-based embedded systems is proposed. The cryptographic algorithm applied to mail items is analyzed to enhance postal indicia accountability and security. The study indicated that the franking machine offers reliability, performance and flexibility in printing mail items.

  16. Constraint-Driven Software Design: An Escape from the Waterfall Model.

    ERIC Educational Resources Information Center

    de Hoog, Robert; And Others

    1994-01-01

    Presents the principles of a development methodology for software design based on a nonlinear, product-driven approach that integrates quality aspects. Two examples are given to show that the flexibility needed for building high quality systems leads to integrated development environments in which methodology, product, and tools are closely…

  17. Reusable Rack Interface Controller Common Software for Various Science Research Racks on the International Space Station

    NASA Technical Reports Server (NTRS)

    Lu, George C.

    2003-01-01

    The purpose of the EXPRESS (Expedite the PRocessing of Experiments to Space Station) rack project is to provide a set of predefined interfaces for scientific payloads which allow rapid integration into a payload rack on the International Space Station (ISS). VxWorks was selected as the operating system for the rack and payload resource controller, primarily based on the proliferation of VME (Versa Module Eurocard) products. These products provide needed flexibility for future hardware upgrades to meet ever-changing science research rack configuration requirements. On the International Space Station, there are multiple science research rack configurations, including: 1) Human Research Facility (HRF); 2) EXPRESS ARIS (Active Rack Isolation System); 3) WORF (Window Observational Research Facility); and 4) HHR (Habitat Holding Rack). The RIC (Rack Interface Controller) connects payloads to the ISS bus architecture for data transfer between the payload and ground control. The RIC is a general-purpose embedded computer which supports multiple communication protocols, including fiber optic communication buses, Ethernet buses, EIA-422, Mil-Std-1553 buses, SMPTE (Society of Motion Picture and Television Engineers) 170M video, and audio interfaces to payloads and the ISS. As a cost-saving and software-reliability strategy, the Boeing Payload Software Organization developed reusable common software where appropriate. These reusable modules included a set of low-level driver software interfaces to 1553B, RS232, RS422, Ethernet buses, HRDL (High Rate Data Link), and video switch functionality, along with telemetry processing and executive software hosted on the RIC computer. These drivers formed the basis for software development of the HRF, EXPRESS, EXPRESS ARIS, WORF, and HHR RIC executable modules. The reusable RIC common software has provided extensive benefits, including: 1) significant reduction in development flow time; 2) minimal rework and maintenance; 3) improved reliability; and 4) overall reduction in software life cycle cost. Given the limited number of crew hours available on ISS for science research, operational efficiency is a critical customer concern. The current method of upgrading RIC software is a time-consuming process; thus, an improved methodology for uploading RIC software is currently under evaluation.

  18. Reservoir management strategy for East Randolph Field, Randolph Township, Portage County, Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safley, L.E.; Salamy, S.P.; Young, M.A.

    1998-07-01

    The primary objective of the Reservoir Management Field Demonstration Program is to demonstrate that multidisciplinary reservoir management teams, using appropriate software and methodologies with efforts scaled to the size of the resource, are a cost-effective method for: increasing the current profitability of field operations; forestalling abandonment of the reservoir; and improving long-term economic recovery for the company. The primary objective of the Reservoir Management Demonstration Project with Belden and Blake Corporation is to develop a comprehensive reservoir management strategy to improve the operational economics and optimize oil production from East Randolph field, Randolph Township, Portage County, Ohio. This strategy identifies the viable improved-recovery process options and defines related operational and facility requirements. In addition, strategies are addressed for field operation problems, such as paraffin buildup, hydraulic fracture stimulation, pumping system optimization, and production treatment requirements, with the goal of reducing operating costs and improving oil recovery.

  19. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven to be very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate the high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production-ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork rather than software development, which may not be their primary focus. 1. Robinson et al. (2010) Space Sci. Rev. 150, 81-124. 2. DeGrandis (2011) Cutter IT Journal, Vol 24, No. 8, 34-39. 3. Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.

  20. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  1. A Mathematical Model for the Exhaust Gas Temperature Profile of a Diesel Engine

    NASA Astrophysics Data System (ADS)

    Brito, C. H. G.; Maia, C. B.; Sodré, J. R.

    2015-09-01

    This work presents a heat transfer model for the exhaust gas of a diesel power generator, used to determine the gas temperature profile in the exhaust pipe. The numerical methodology for solving the mathematical model was developed using a finite difference approach to resolve the energy equation and determine the temperature profiles, considering turbulent fluid flow and variable fluid properties. The simulation was carried out for engine operation under loads from 0 kW to 40 kW. The model was compared with results obtained using the multidimensional Ansys CFX software, which was applied to solve the governing equations of turbulent fluid flow. The results for the temperature profiles in the exhaust pipe show good agreement between the mathematical model developed and the multidimensional software.
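
    In its simplest one-dimensional form, the finite-difference idea reduces to marching the bulk gas temperature down the pipe with convective loss to the wall. Properties and the operating point below are assumed for illustration, not the engine data:

    ```python
    # Hedged 1-D sketch: march the exhaust-gas temperature along the pipe,
    # slice by slice, removing the heat convected to the (fixed) wall.
    import math

    m_dot, cp = 0.05, 1100.0   # gas mass flow (kg/s), cp (J/kg.K), assumed
    D, L = 0.06, 2.0           # pipe diameter and length (m)
    h = 35.0                   # gas-to-wall heat transfer coeff. (W/m^2.K)
    T_wall = 400.0             # wall temperature (K), assumed constant
    T = 750.0                  # inlet exhaust-gas temperature (K)

    N = 200
    dx = L / N
    for _ in range(N):         # finite-difference march along the pipe
        q = h * math.pi * D * dx * (T - T_wall)   # heat lost in this slice (W)
        T -= q / (m_dot * cp)
    print(f"outlet gas temperature: {T:.1f} K")
    ```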

  2. Artificial intelligence and the space station software support environment

    NASA Technical Reports Server (NTRS)

    Marlowe, Gilbert

    1986-01-01

    In a software system the size of the Space Station Software Support Environment (SSE), no single software development or implementation methodology is presently powerful enough to provide safe, reliable, maintainable, cost-effective real-time or near-real-time software. In an environment that must survive one of the harshest and longest lifetimes, software must be produced that will perform as predicted, from the first time it is executed to the last. Many of the software challenges that will be faced will require strategies borrowed from Artificial Intelligence (AI). AI is the only development area mentioned as an example of a legitimate reason for a waiver from the overall requirement to use the Ada programming language for software development. The limits of the applicability of the Ada language, the Ada Programming Support Environment (of which the SSE is a special case), and software engineering to AI solutions are defined by describing a scenario that involves many facets of AI methodologies.

  3. Onboard Sensor Data Qualification in Human-Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Melcher, Kevin J.; Maul, William A.; Chicatelli, Amy K.; Sowers, Thomas S.; Fulton, Christopher; Bickford, Randall

    2012-01-01

    The avionics system software for human-rated launch vehicles requires an implementation approach that is robust to failures, especially the failure of sensors used to monitor vehicle conditions that might result in an abort determination. Sensor measurements provide the basis for operational decisions on human-rated launch vehicles. This data is often used to assess the health of system or subsystem components, to identify failures, and to take corrective action. An incorrect conclusion and/or response may result if the sensor itself provides faulty data, or if the data provided by the sensor has been corrupted. Operational decisions based on faulty sensor data have the potential to be catastrophic, resulting in loss of mission or loss of crew. To prevent these later situations from occurring, a Modular Architecture and Generalized Methodology for Sensor Data Qualification in Human-rated Launch Vehicles has been developed. Sensor Data Qualification (SDQ) is a set of algorithms that can be implemented in onboard flight software, and can be used to qualify data obtained from flight-critical sensors prior to the data being used by other flight software algorithms. Qualified data has been analyzed by SDQ and is determined to be a true representation of the sensed system state; that is, the sensor data is determined not to be corrupted by sensor faults or signal transmission faults. Sensor data can become corrupted by faults at any point in the signal path between the sensor and the flight computer. Qualifying the sensor data has the benefit of ensuring that erroneous data is identified and flagged before otherwise being used for operational decisions, thus increasing confidence in the response of the other flight software processes using the qualified data, and decreasing the probability of false alarms or missed detections.
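
    The flavor of the per-sample checks such a qualification layer applies can be sketched simply: range, rate-of-change, and cross-channel agreement among redundant sensors. The limits and voting rule below are illustrative, not the NASA SDQ algorithms:

    ```python
    # Hedged sketch of sensor data qualification checks applied before a
    # measurement reaches other flight software. All limits are invented.
    from statistics import median

    RANGE = (0.0, 3000.0)    # plausible sensor range, assumed
    MAX_RATE = 50.0          # max credible change per cycle, assumed
    DISAGREE = 25.0          # max deviation from redundant-channel median

    def qualify(sample, previous, redundant):
        """Return (qualified, reasons); only qualified data is passed on."""
        reasons = []
        if not RANGE[0] <= sample <= RANGE[1]:
            reasons.append("out of range")
        if previous is not None and abs(sample - previous) > MAX_RATE:
            reasons.append("rate-of-change violation")
        if redundant and abs(sample - median(redundant)) > DISAGREE:
            reasons.append("disagrees with redundant channels")
        return (not reasons, reasons)

    print(qualify(812.0, 805.0, [810.0, 814.0, 809.0]))   # -> (True, [])
    print(qualify(990.0, 805.0, [810.0, 814.0, 809.0]))   # flagged twice
    ```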

  4. Analyzing structural changes in SNOMED CT's Bacterial infectious diseases using a visual semantic delta.

    PubMed

    Ochs, Christopher; Case, James T; Perl, Yehoshua

    2017-03-01

    Thousands of changes are applied to SNOMED CT's concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta ("semantic delta" for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT's Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. Copyright © 2017 Elsevier Inc. All rights reserved.
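
    The released delta-file schema and the tool's actual operation taxonomy are not reproduced here; the sketch below only illustrates the summarization idea, rolling low-level add/remove rows up into per-concept edit descriptions. The field names and operation labels are invented for illustration.

      from collections import defaultdict

      # Toy roll-up of raw delta rows (concept, component, change) into
      # per-concept edit summaries, in the spirit of a semantic delta.
      # Row fields and summary labels are invented assumptions.
      delta_rows = [
          ("C1", "relationship", "removed"),
          ("C1", "relationship", "added"),
          ("C2", "parent", "added"),
          ("C3", "concept", "removed"),
      ]

      by_concept = defaultdict(list)
      for concept, component, change in delta_rows:
          by_concept[concept].append((component, change))

      def summarize(changes):
          kinds = set(changes)
          if {("relationship", "added"), ("relationship", "removed")} <= kinds:
              return "remodeled defining relationships"
          if ("parent", "added") in kinds:
              return "repositioned in hierarchy"
          if ("concept", "removed") in kinds:
              return "retired concept"
          return "other edit"

      for concept, changes in sorted(by_concept.items()):
          print(concept, "->", summarize(changes))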

  5. Analyzing Structural Changes in SNOMED CT’s Bacterial Infectious Diseases Using a Visual Semantic Delta

    PubMed Central

    Ochs, Christopher; Case, James T.; Perl, Yehoshua

    2017-01-01

    Thousands of changes are applied to SNOMED CT’s concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta (“semantic delta” for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT’s Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. PMID:28215561

  6. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The PC software categories covered are Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  7. Development of a State Machine Sequencer for the Keck Interferometer: Evolution, Development and Lessons Learned using a CASE Tool Approach

    NASA Technical Reports Server (NTRS)

    Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee

    2004-01-01

    This paper presents a discussion of the evolution of a sequencer from a simple EPICS (Experimental Physics and Industrial Control System) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a CASE (Computer Aided Software Engineering) tool approach. The main purpose of the sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.

  8. Development of a state machine sequencer for the Keck Interferometer: evolution, development, and lessons learned using a CASE tool approach

    NASA Astrophysics Data System (ADS)

    Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.

    2004-09-01

    This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.
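
    Neither record includes the sequencer's actual statecharts. The sketch below is a minimal stand-in for the pattern both abstracts describe, a state machine thread stepping subsystems through a strict sequential order; the states, commands, and subsystem names are invented, and a print stands in for the CORBA/EPICS calls.

      # Minimal sequencer sketch: a finite state machine whose states command
      # subsystem stubs in a fixed order. States and commands are invented.
      class Sequencer:
          TRANSITIONS = [   # (state, subsystem, command, next_state)
              ("IDLE",     "delay_line",     "home",   "HOMING"),
              ("HOMING",   "fringe_tracker", "lock",   "TRACKING"),
              ("TRACKING", "camera",         "expose", "DONE"),
          ]

          def __init__(self):
              self.state = "IDLE"

          def command(self, subsystem, cmd):
              print(f"-> {subsystem}: {cmd}")   # stand-in for a CORBA/EPICS call

          def run(self):
              table = {s: (sub, cmd, nxt) for s, sub, cmd, nxt in self.TRANSITIONS}
              while self.state in table:
                  subsystem, cmd, nxt = table[self.state]
                  self.command(subsystem, cmd)
                  self.state = nxt
              print("sequence complete:", self.state)

      Sequencer().run()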

  9. Discovering objects in a blood recipient information system.

    PubMed

    Qiu, D; Junghans, G; Marquardt, K; Kroll, H; Mueller-Eckhardt, C; Dudeck, J

    1995-01-01

    Application of object-oriented (OO) methodologies has been generally considered a solution to the problem of improving the software development process and managing the so-called software crisis. Among them, object-oriented analysis (OOA) is the most essential and is a vital prerequisite for the successful use of other OO methodologies. Though a good number of OOA methods have already been published, the most important aspect common to all these methods, discovering object classes truly relevant to the given problem domain, has remained a subject of intensive research. In this paper, using the successful development of a blood recipient information system as an example, we present our approach, which is based on the conceptual framework of responsibility-driven OOA. In the discussion, we also suggest that it may be inadequate to simply attribute the software crisis to the waterfall model of the software development life-cycle. We are convinced that the real causes of the failure of some software and information systems should be sought in the methodologies used in some crucial phases of the software development process. Furthermore, a software system can also fail if object classes essential to the problem domain are not discovered, implemented and visualized, so that the real-world situation cannot be faithfully traced by it.

  10. Multirate Flutter Suppression System Design for the Benchmark Active Controls Technology Wing. Part 2; Methodology Application Software Toolbox

    NASA Technical Reports Server (NTRS)

    Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek

    2002-01-01

    To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes the user's manual and software toolbox developed at the University of Washington to design a multirate flutter suppression control law for the BACT wing.

  11. Effective Software Engineering Leadership for Development Programs

    ERIC Educational Resources Information Center

    Cagle West, Marsha

    2010-01-01

    Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…

  12. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1987-01-01

    The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.

  13. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
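
    The paper's analytical accuracy evaluation and DSP-aware optimisation are not reproduced here. The sketch below illustrates only the core conversion step: quantizing data to a fixed-point Q-format and selecting the fewest fractional bits that meet an accuracy constraint, with a simulated SQNR target standing in for the paper's analytical accuracy measure. The word length and target are assumptions.

      import numpy as np

      # Toy float-to-fixed conversion: quantize to Qm.n (16-bit word assumed)
      # and pick the fewest fractional bits meeting an SQNR target. The target
      # stands in for the paper's analytical accuracy constraint.
      def to_fixed(x, frac_bits, word_bits=16):
          scale = 2.0 ** frac_bits
          lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
          return np.clip(np.round(x * scale), lo, hi) / scale   # quantize + saturate

      def sqnr_db(x, xq):
          return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 0.3, 10_000)    # stand-in signal
      target_db = 60.0                    # assumed accuracy constraint

      for frac_bits in range(4, 16):
          if sqnr_db(x, to_fixed(x, frac_bits)) >= target_db:
              print(f"Q{15 - frac_bits}.{frac_bits} meets the {target_db} dB target")
              break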

  14. Success Rates by Software Development Methodology in Information Technology Project Management: A Quantitative Analysis

    ERIC Educational Resources Information Center

    Wright, Gerald P.

    2013-01-01

    Despite over half a century of Project Management research, project success rates are still too low. Organizations spend a tremendous amount of valuable resources on Information Technology projects and seek to maximize the utility gained from their efforts. The author investigated the impact of software development methodology choice on ten…

  15. Application of Real Options Theory to DoD Software Acquisitions

    DTIC Science & Technology

    2009-02-20

    Future Combat Systems Program. Washington, DC: U.S. Government Printing Office. Damodaran, A. (2007). Investment Valuation: The Options To Expand... valuation methodology, when enhanced and properly formulated around a proposed or existing software investment employing the spiral development approach... ABSTRACT: The traditional real options valuation methodology, when enhanced and properly formulated

  16. Design and implementation of the tree-based fuzzy logic controller.

    PubMed

    Liu, B D; Huang, C Y

    1997-01-01

    In this paper, a tree-based approach is proposed to design the fuzzy logic controller. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rule can be extracted automatically from the input-output data of the system and the extraction process can be done in one-pass; owing to the fuzzy tree inference structure, the search spaces of the fuzzy inference process are largely reduced; the operation of the inference process can be simplified as a one-dimensional matrix operation because of the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to be implemented by hardware. Furthermore, the proposed fuzzy tree approach has been applied to design the color reproduction system for verifying the proposed methodology. The color reproduction system is mainly used to obtain a color image through the printer that is identical to the original one. In addition to the software simulation, an FPGA is used to implement the prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system can operate correctly under the condition of 30 MHz clock rate.
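
    The paper's one-pass rule extraction and hardware mapping are beyond a short sketch, but the tree-structured inference it describes can be illustrated with a toy two-input controller: each input is partitioned by triangular membership functions, rules form a two-level tree, and inference visits only branches with nonzero membership. The partitions and rule outputs below are invented, not the authors' design.

      # Toy tree-structured fuzzy controller (illustrative, not the authors'
      # design): level 1 of the rule tree branches on the error label, level 2
      # on the error-rate label; zero-membership branches are pruned.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

      LABELS = {"N": (-2, -1, 0), "Z": (-1, 0, 1), "P": (0, 1, 2)}
      RULES = {  # rule tree: error label -> error-rate label -> crisp output
          "N": {"N": -1.0, "Z": -0.5, "P": 0.0},
          "Z": {"N": -0.5, "Z": 0.0, "P": 0.5},
          "P": {"N": 0.0, "Z": 0.5, "P": 1.0},
      }

      def infer(e, de):
          num = den = 0.0
          for le, abc in LABELS.items():
              me = tri(e, *abc)
              if me == 0.0:
                  continue                      # prune this subtree
              for lde, abc2 in LABELS.items():
                  mde = tri(de, *abc2)
                  if mde == 0.0:
                      continue
                  w = me * mde                  # rule firing strength
                  num += w * RULES[le][lde]
                  den += w
          return num / den

      print(infer(0.3, -0.2))   # -> 0.05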

  17. Proceedings of the Ninth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Experiences in measurement, utilization, and evaluation of software methodologies, models, and tools are discussed. NASA's involvement in ever larger and more complex systems, like the space station project, provides a motive for the support of software engineering research and the exchange of ideas in such forums. The topics of current SEL research are software error studies, experiments with software development, and software tools.

  18. Guidelines for software inspections

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Quality control inspections are software problem-finding procedures that provide defect removal as well as improvements in software functionality, maintenance, quality, and development and testing methodology. The many side benefits include education, documentation, training, and scheduling.

  19. Water Quality Projects Summary for the Mid-Columbia and Cumberland River Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kevin M.; Witt, Adam M.; Hadjerioua, Boualem

    Scheduling and operational control of hydropower systems is accompanied by a keen awareness of the management of water use, environmental effects, and policy, especially within the context of strict water rights policy and generation maximization. This is a multi-objective problem for many hydropower systems, including the Cumberland and Mid-Columbia river systems. Though each of these two systems has distinct operational philosophies, hydrologic characteristics, and system dynamics, they both share a responsibility to effectively manage hydropower and the environment, which requires state-of-the-art improvements in the approaches and applications for water quality modeling. The Department of Energy and Oak Ridge National Laboratory have developed tools for total dissolved gas (TDG) prediction on the Mid-Columbia River and a decision-support system used for hydropower generation and environmental optimization on the Cumberland River. In conjunction with IIHR - Hydroscience & Engineering at The University of Iowa and the University of Colorado's Center for Advanced Decision Support for Water and Environmental Systems (CADSWES), ORNL has managed the development of a TDG predictive methodology at seven dams along the Mid-Columbia River and has enabled the ability to utilize this methodology for optimization of operations at these projects with the commercially available software package RiverWare. ORNL has also managed the collaboration with Vanderbilt University and Lipscomb University to develop a state-of-the-art method for reducing high-fidelity water quality modeling results into surrogate models which can be used effectively within the context of optimization efforts to maximize generation for a reservoir system based on environmental and policy constraints. The novel contribution of these efforts is the ability to predict water quality conditions with simplified methodologies at the same level of accuracy as more complex and resource-intensive computing methods. These efforts were designed to incorporate well into existing hydropower and reservoir system scheduling models, with runtimes that are comparable to existing software tools. In addition, the transferability of these tools to assess other systems is enhanced by the use of simple and easily attainable values for inputs, straightforward calibration of predictive equation coefficients, and standardized comparison of traditionally familiar outputs.

  20. Selected Tether Applications Cost Model

    NASA Technical Reports Server (NTRS)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  1. Proceedings of the IDA Workshop on Formal Specification and Verification of Ada (Trade Name) (3rd) Held in Research Triangle Park, North Carolina on 14-16 May 1986

    DTIC Science & Technology

    1986-08-01

    sensitivity to software or hardware failures (bit transformation, register perversion, interface failures, etc.) which could cause the system to operate in a... of systems. She pointed to the need for safety concerns in a continually growing number of computer applications (e.g., monitor and/or control of... informal, definition. Finally, the definition is based on the SMoLCS (Structured Monitored Linear Concurrent Systems) methodology, an approach to the

  2. Agile Software Development in the Department of Defense Environment

    DTIC Science & Technology

    2017-03-31

    Research Methodology... Research Hypothesis... acquisition framework to enable greater adoption of Agile methodologies. Overview of the Research Methodology: The strategy for this study was to... guidance. Chapter 3 – Research Methodology: This chapter defines the research methodology and processes used in the study, in an effort to

  3. Advanced telemetry systems for payloads. Technology needs, objectives and issues

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The current trends in advanced payload telemetry are the new developments in advanced modulation/coding, the applications of intelligent techniques, data distribution processing, and advanced signal processing methodologies. Concerted efforts will be required to design ultra-reliable man-rated software to cope with these applications. The intelligence embedded and distributed throughout various segments of the telemetry system will need to be overridden by an operator in case of life-threatening situations, making it a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. Currently, these technology issues are being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operation culture. The current role of NASA as a leader in developing all the new innovative hardware should be altered to save both time and money. We should use all the available hardware/software developed by the industry and use the existing standards rather than inventing our own.

  4. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  5. IMSF: Infinite Methodology Set Framework

    NASA Astrophysics Data System (ADS)

    Ota, Martin; Jelínek, Ivan

    Software development is usually an integration task in an enterprise environment; few software applications work autonomously now. It is usually a collaboration of heterogeneous and unstable teams. One serious problem is a lack of resources, a popular result being outsourcing, ‘body shopping’, and indirectly team and team member fluctuation. Outsourced sub-deliveries easily become black boxes with no clear development method used, which has a negative impact on supportability. Such environments then often face the problems of quality assurance and enterprise know-how management. The methodology used is one of the key factors. Each methodology was created as a generalization of a number of solved projects, and each methodology is thus more or less connected with a set of task types. When the task type is not suitable, it causes problems that usually result in an undocumented ad-hoc solution. This was the motivation behind formalizing a simple process for collaborative software engineering. The Infinite Methodology Set Framework (IMSF) defines the ICT business process of adaptive use of methods for classified types of tasks. The article introduces IMSF and briefly comments on its meta-model.

  6. Data-Driven Simulation-Enhanced Optimization of People-Based Print Production Service

    NASA Astrophysics Data System (ADS)

    Rai, Sudhendu

    This paper describes a systematic six-step data-driven simulation-based methodology for optimizing people-based service systems on a large distributed scale that exhibit high variety and variability. The methodology is exemplified through its application within the printing services industry, where it has been successfully deployed by Xerox Corporation across small, mid-sized and large print shops, generating over 250 million in profits across the customer value chain. Each step of the methodology is described in detail: co-development and testing of innovative concepts in partnership with customers; development of software and hardware tools to implement the innovative concepts; establishment of work processes and practices for customer engagement and service implementation; creation of training and infrastructure for large-scale deployment; integration of the innovative offering within the framework of existing corporate offerings; and, lastly, monitoring and deployment of the financial and operational metrics for estimating the return on investment and continually renewing the offering.

  7. A Prompt Methodology to Georeference Complex Hypogea Environments

    NASA Astrophysics Data System (ADS)

    Troisi, S.; Baiocchi, V.; Del Pizzo, S.; Giannone, F.

    2017-02-01

    Today, complex underground structures and facilities occupy a wide space in our cities, and most of them are often unsurveyed; cable ducts and drainage systems are no exception. Furthermore, several inspection operations are performed in critical air conditions that do not allow, or make more difficult, a conventional survey. In this scenario, a prompt methodology to survey and georeference such facilities is often indispensable. A visual-based approach is proposed in this paper; the methodology provides a 3D model of the environment and the path followed by the camera using conventional photogrammetric/structure-from-motion software tools. The key role is played by the camera lens; indeed, a fisheye system was employed to obtain a very wide field of view (FOV) and therefore high overlap among the frames. The camera geometry corresponds to a forward motion along the camera axis. Consequently, to avoid instability of the bundle adjustment algorithm, a preliminary calibration of the camera was carried out. A specific case study is reported, together with the accuracy achieved.

  8. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    NASA Astrophysics Data System (ADS)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  9. Empirical cost models for estimating power and energy consumption in database servers

    NASA Astrophysics Data System (ADS)

    Valdivia Garcia, Harold Dwight

    The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns, and relational cardinality. Moreover, our method does not need measurements of individual hardware components, but rather total power and energy consumption measured at the server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
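
    The paper's fitted coefficients and training data are not given in the abstract; the sketch below only illustrates the regression idea, predicting total server power from readily available per-query statistics with an ordinary least-squares fit. All sample statistics and wall-power readings are fabricated.

      import numpy as np

      # Illustrative multiple-linear-regression power model. Feature columns:
      # selectivity, tuple size [bytes], number of columns, cardinality.
      # All data are fabricated for the sketch.
      X_stats = np.array([
          [0.10,  64,  4, 1e5],
          [0.50, 128,  8, 5e5],
          [0.90, 256, 16, 1e6],
          [0.25,  64,  8, 2e5],
          [0.75, 512, 32, 8e5],
      ])
      power_w = np.array([112.0, 141.0, 178.0, 120.0, 169.0])  # measured at the wall

      X = np.column_stack([X_stats, np.ones(len(X_stats))])    # add intercept
      beta, *_ = np.linalg.lstsq(X, power_w, rcond=None)

      new_query = np.array([0.40, 128, 8, 3e5, 1.0])           # stats + intercept
      print(f"predicted power: {new_query @ beta:.1f} W")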

  10. Autonomous Aerobraking: Thermal Analysis and Response Surface Development

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Thornblom, Mark N.

    2011-01-01

    A high-fidelity thermal model of the Mars Reconnaissance Orbiter was developed for use in an autonomous aerobraking simulation study. Response surface equations were derived from the high-fidelity thermal model and integrated into the autonomous aerobraking simulation software. The high-fidelity thermal model was developed using the Thermal Desktop software and used in all phases of the analysis. The exclusive use of Thermal Desktop represented a change from previously developed aerobraking thermal analysis methodologies. Comparisons were made between the Thermal Desktop solutions and those developed for the previous aerobraking thermal analyses performed on the Mars Reconnaissance Orbiter during aerobraking operations. A variable sensitivity screening study was performed to reduce the number of variables carried in the response surface equations. Thermal analysis and response surface equation development were performed for autonomous aerobraking missions at Mars and Venus.
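
    The actual response surface equations are not given in the abstract. As a generic illustration of deriving one, the sketch below fits a full quadratic in two flight variables to a handful of precomputed peak temperatures; the choice of variables and every number are invented, not MRO analysis values.

      import numpy as np

      # Illustrative response-surface fit: full quadratic in two variables.
      # Variables and data are invented stand-ins for high-fidelity runs.
      rho = np.array([40., 40., 60., 60., 80., 80., 50., 70.])   # density-like input
      vel = np.array([3.4, 3.6, 3.4, 3.6, 3.4, 3.6, 3.5, 3.5])   # velocity-like input
      t_peak = np.array([310., 330., 345., 370., 385., 415., 352., 390.])

      # design matrix for T = b0 + b1*r + b2*v + b3*r^2 + b4*v^2 + b5*r*v
      A = np.column_stack([np.ones_like(rho), rho, vel, rho**2, vel**2, rho * vel])
      b, *_ = np.linalg.lstsq(A, t_peak, rcond=None)

      def response_surface(r, v):
          return b @ np.array([1.0, r, v, r * r, v * v, r * v])

      print(f"predicted peak temperature: {response_surface(55.0, 3.55):.1f}")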

  11. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

    Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and the various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts. We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective, with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrate that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, which is a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time".
To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.

  12. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
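
    The authors' actual compiler stack is not reproduced here; the toy sketch below only illustrates the layered idea the abstract describes, lowering a high-level circuit through successive passes (gate decomposition, then qubit mapping) into hardware-level instructions. The gate set and layout are invented.

      # Toy layered compilation flow (illustrative): a decomposition pass, then
      # a trivial logical-to-physical qubit mapping pass.
      def decompose(circuit):
          """Lower non-native gates; e.g. SWAP becomes three CNOTs."""
          out = []
          for gate, *qubits in circuit:
              if gate == "SWAP":
                  a, b = qubits
                  out += [("CNOT", a, b), ("CNOT", b, a), ("CNOT", a, b)]
              else:
                  out.append((gate, *qubits))
          return out

      def map_qubits(circuit, layout):
          """Relabel logical qubits with physical ones (fixed layout here)."""
          return [(g, *(layout[q] for q in qs)) for g, *qs in circuit]

      high_level = [("H", 0), ("SWAP", 0, 1), ("CNOT", 1, 2)]
      native = map_qubits(decompose(high_level), layout={0: 2, 1: 0, 2: 1})
      for instruction in native:
          print(instruction)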

  13. Teaching Agile Software Development: A Case Study

    ERIC Educational Resources Information Center

    Devedzic, V.; Milenkovic, S. R.

    2011-01-01

    This paper describes the authors' experience of teaching agile software development to students of computer science, software engineering, and other related disciplines, and comments on the implications of this and the lessons learned. It is based on the authors' eight years of experience in teaching agile software methodologies to various groups…

  14. Towards a Methodology for Identifying Program Constraints During Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo

    1997-01-01

    Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide, and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Some of the problems that exist in defining requirements in large-scale software projects include synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provides for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.

  15. Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.

    PubMed

    Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J

    2016-08-01

    The CellaVision Advanced Red Blood Cell (RBC) Software Application is a new software application for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is an easy-to-use software application that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing workload, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
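
    The study's scores and counts are not in the abstract; the sketch below only illustrates the cut-off optimization procedure described, sweeping a threshold on a per-cell category score and reporting sensitivity and specificity against a manual gold standard. The score distributions are fabricated.

      import numpy as np

      # Illustrative cut-off sweep: sensitivity/specificity versus a manual
      # gold standard. Scores and labels are fabricated for the sketch.
      rng = np.random.default_rng(1)
      gold = rng.random(500) < 0.2                     # True = abnormal (manual)
      score = np.where(gold, rng.normal(0.7, 0.15, 500),
                             rng.normal(0.4, 0.15, 500))

      for cutoff in np.arange(0.30, 0.80, 0.05):
          pred = score >= cutoff
          sens = (pred & gold).sum() / gold.sum()
          spec = (~pred & ~gold).sum() / (~gold).sum()
          print(f"cut-off {cutoff:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")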

  16. Object oriented development of engineering software using CLIPS

    NASA Technical Reports Server (NTRS)

    Yoon, C. John

    1991-01-01

    Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software has become larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language (COOL) offers object-oriented features that are more versatile than those of C++. A software design methodology based on object-oriented and procedural approaches, appropriate for engineering software and implemented in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.

  17. Airborne Advanced Reconfigurable Computer System (ARCS)

    NASA Technical Reports Server (NTRS)

    Bjurman, B. E.; Jenkins, G. M.; Masreliez, C. J.; Mcclellan, K. L.; Templeman, J. E.

    1976-01-01

    A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.

  18. Human factors technology for America's space program

    NASA Technical Reports Server (NTRS)

    Montemerlo, M. D.

    1982-01-01

    NASA is initiating a space human factors research and technology development program in October 1982. The impetus for this program stems from: the frequent and economical access to space provided by the Shuttle; the advances in control and display hardware/software made possible through the recent explosion in microelectronics technology; heightened interest in a space station; heightened interest by the military in space operations; and the fact that the technology for long-duration stay times for man in space has received relatively little attention since the Apollo and Skylab missions. The rationale for and issues in the five thrusts of the new program are described. The main thrusts are: basic methodology, crew station design, ground control/operations, teleoperations, and extravehicular activity.

  19. Weighted Ensemble Simulation: Review of Methodology, Applications, and Software

    PubMed Central

    Zuckerman, Daniel M.; Chong, Lillian T.

    2018-01-01

    The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling—the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes—protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation. PMID:28301772

  20. Weighted Ensemble Simulation: Review of Methodology, Applications, and Software.

    PubMed

    Zuckerman, Daniel M; Chong, Lillian T

    2017-05-22

    The weighted ensemble (WE) methodology orchestrates quasi-independent parallel simulations run with intermittent communication that can enhance sampling of rare events such as protein conformational changes, folding, and binding. The WE strategy can achieve superlinear scaling-the unbiased estimation of key observables such as rate constants and equilibrium state populations to greater precision than would be possible with ordinary parallel simulation. WE software can be used to control any dynamics engine, such as standard molecular dynamics and cell-modeling packages. This article reviews the theoretical basis of WE and goes on to describe successful applications to a number of complex biological processes-protein conformational transitions, (un)binding, and assembly processes, as well as cell-scale processes in systems biology. We furthermore discuss the challenges that need to be overcome in the next phase of WE methodological development. Overall, the combined advances in WE methodology and software have enabled the simulation of long-timescale processes that would otherwise not be practical on typical computing resources using standard simulation.
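
    As a minimal illustration of the WE resampling step both records describe: walkers carry statistical weights, and after each dynamics segment every occupied bin of a progress coordinate is resampled to a target count; splitting a walker halves its weight, merging walkers sums their weights, so total probability is conserved. The random-walk dynamics, bins, and parameters below are toy stand-ins, not any package's defaults.

      import random

      # Toy weighted-ensemble resampling over a 1-D progress coordinate.
      # A random walk stands in for the dynamics engine; all parameters are
      # illustrative assumptions.
      TARGET_PER_BIN, N_BINS, STEPS = 4, 10, 50
      walkers = [{"x": 0.0, "w": 1.0 / 8} for _ in range(8)]

      def bin_of(x):
          return min(N_BINS - 1, max(0, int(x)))

      for _ in range(STEPS):
          for wk in walkers:                        # toy dynamics segment
              wk["x"] += random.gauss(0.05, 0.3)
          bins = {}
          for wk in walkers:
              bins.setdefault(bin_of(wk["x"]), []).append(wk)
          walkers = []
          for group in bins.values():
              while len(group) < TARGET_PER_BIN:    # split heaviest walker
                  heavy = max(group, key=lambda w: w["w"])
                  heavy["w"] /= 2
                  group.append(dict(heavy))
              while len(group) > TARGET_PER_BIN:    # merge two lightest walkers
                  group.sort(key=lambda w: w["w"])
                  a, b = group.pop(0), group.pop(0)
                  keep = a if random.random() < a["w"] / (a["w"] + b["w"]) else b
                  keep["w"] = a["w"] + b["w"]
                  group.append(keep)
              walkers += group

      total = sum(w["w"] for w in walkers)
      print(len(walkers), "walkers; total weight =", round(total, 12))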

  1. Non-contact measurement of an aircraft wing panel and numerical analysis for dimensional inspection

    NASA Astrophysics Data System (ADS)

    Sok, Michel Christian

    During the manufacturing of a wing skin, inspection steps are essential to ensure its conformity and thus allow the wing to deliver the required aerodynamic performance. Nowadays, because the panel's low stiffness prevents traditional inspection methods, this inspection is done manually with a template gauge and a jig. Iteratively, as long as form compliance is not reached, the panel goes through an additional dimensional refinement before being inspected a second time. Because the jig is accurate, it is very expensive; furthermore, the inspection of panels is time-consuming and monopolizes the jig, which cannot be used in the meantime. Using this consideration as a starting point, this project assesses the practicability of a methodology based on the automation of that kind of operation, by integrating into the process non-contact measuring machines capable of numerically acquiring the geometrical shape of the panel. Moreover, the opportunity of performing this operation without the use of a jig is also considered, which would leave the jig free for other tasks. The suggested methodology uses numerical simulations to check form compliance. Finally, this would provide a tool to assist the operator by allowing a semi-automated inspection without a jig. The suggested methodology can be described in three steps; however, an additional step is necessary to validate the results achieved with it. The first step consists of manually acquiring reference values that will be compared with the values obtained during the application of the methodology. The second step deals with the numerical acquisition, with a laser scanner, of the object to be inspected, placed on a supporting plate. The third step is the numerical reconstruction of this object with computer-aided design software. Finally, the last step consists of a numerical inspection of the object to predict form compliance. Considering the large dimensions of the wing skins and of the jigs used in industry, the suggested methodology takes account of the means available in the laboratory; the objects used therefore have smaller dimensions than those used in industry. That is the reason why a simplifying assumption is made that the shot peening operation has a negligible effect on the evolution of the thickness of the wing skin. Furthermore, the non-contact measurement device is also tested to determine its accuracy under real conditions. These two preliminary studies show that the thickness variation of a plate after being shot peened, with extreme parameters in terms of effects, remains negligible for the practicability study carried out in this thesis. The study of the performance of the REVscan 3D also brings to light that this variation would probably be drowned in the uncertainty introduced by the device during the numerical acquisition. In this project, only steps two and three are dealt with in depth. This study essentially involves testing the measuring device and the software for their capacity to numerically acquire an object and then bring it to another state of stress with the help of a simulation. Indeed, the validation of the free-state step is problematic because it is precisely a state that cannot be obtained experimentally. As an analogy, it is suggested to pass from one particular state of stress to another because, in a simplified way, the free-state step is equivalent to a change of state of stress.
The study of the results brings to light a particular phenomenon linked to thin plates: a sudden change of form when the plate is in a particular state of stress. The software is then no longer able to predict that kind of behavior. Several tests were carried out to confirm the existence of this phenomenon; they show that the stress modulus, the point of application of the stresses, and the position of the support points are the most influential parameters. However, even when care is taken to avoid this phenomenon during the tests, the degree of accuracy reached by the software is far from sufficient. Indeed, the uncertainty of the results is still too high, and future studies will have to focus on improving them. Currently, the tests carried out in this thesis are not enough to validate steps 2 and 3 of the suggested methodology. Nevertheless, the phenomenon highlighted, which can suddenly modify the behavior of thin plates, and the information gathered in these tests establish a base for further research. (Abstract shortened by UMI.)
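
    The thesis's CAD models and tolerances are not reproduced here; the sketch below only illustrates the numerical-inspection step, comparing scanned points against a densely sampled nominal surface and flagging out-of-tolerance deviations. The surface, noise levels, and tolerance are invented.

      import numpy as np
      from scipy.spatial import cKDTree

      # Illustrative scan-to-nominal comparison: nearest-neighbour deviations
      # of scanned points from a sampled nominal surface, flagged against an
      # assumed form tolerance. Geometry and numbers are invented.
      gx, gy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 0.5, 100))
      nominal = np.column_stack([gx.ravel(), gy.ravel(),
                                 0.02 * np.sin(3 * gx.ravel())])   # nominal z(x, y)

      rng = np.random.default_rng(2)
      scan = nominal[rng.choice(len(nominal), 5000)] + rng.normal(0, 1e-4, (5000, 3))
      scan[:100, 2] += 0.003                     # a simulated local form defect

      tol = 0.5e-3                               # assumed form tolerance [m]
      dist, _ = cKDTree(nominal).query(scan)     # point-to-surface proxy distance
      print(f"max deviation {dist.max() * 1e3:.2f} mm; "
            f"{(dist > tol).sum()} of {len(scan)} points out of tolerance")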

  2. Development of Methodology for Programming Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Erol, Kutluhan; Levy, Renato; Lang, Lun

    2004-01-01

    A brief report discusses the rationale for, and the development of, a methodology for generating computer code for autonomous-agent-based systems. The methodology is characterized as enabling an increase in the reusability of the generated code among and within such systems, thereby making it possible to reduce the time and cost of development of the systems. The methodology is also characterized as enabling reduction of the incidence of those software errors that are attributable to the human failure to anticipate distributed behaviors caused by the software. A major conceptual problem said to be addressed in the development of the methodology was that of how to efficiently describe the interfaces between several layers of agent composition by use of a language that is both familiar to engineers and descriptive enough to describe such interfaces unambiguously.

  3. Self port scanning tool: providing a more secure computing environment through the use of proactive port scanning

    NASA Technical Reports Server (NTRS)

    Kocher, Joshua E; Gilliam, David P.

    2005-01-01

    Secure computing is a necessity in the hostile environment that the internet has become. Protection from nefarious individuals and organizations requires a solution that is more a methodology than a one-time fix. One aspect of this methodology is having knowledge of which network ports a computer has open to the world. These network ports are essentially the doorways from the internet into the computer. An assessment method which uses the nmap software to scan ports has been developed to aid System Administrators (SAs) with analysis of open ports on their system(s). Additionally, baselines for several operating systems have been developed so that SAs can compare their open ports to a baseline for a given operating system. Further, the tool is deployed on a website where SAs and Users can request a port scan of their computer. The results are then emailed to the requestor. This tool aids Users, SAs, and security professionals by providing an overall picture of what services are running, what ports are open, potential Trojan programs or backdoors, and what ports can be closed.
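
    The tool itself wraps nmap, whose exact invocation is not given in the record; to keep the example self-contained, the sketch below uses a plain TCP connect scan and compares the result against a per-OS baseline of expected open ports. The baseline contents are an assumption.

      import socket

      # Self-contained stand-in for the scan-and-compare idea: TCP connect
      # scan of selected ports, diffed against an assumed per-OS baseline.
      BASELINE = {22, 80}                        # assumed expected-open ports

      def connect_scan(host, ports, timeout=0.5):
          open_ports = set()
          for port in ports:
              with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                  s.settimeout(timeout)
                  if s.connect_ex((host, port)) == 0:   # 0 means connected
                      open_ports.add(port)
          return open_ports

      found = connect_scan("127.0.0.1", range(1, 1025))
      print("open ports:", sorted(found))
      print("unexpected (investigate):", sorted(found - BASELINE))
      print("expected but closed:", sorted(BASELINE - found))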

  4. Delivering Software Process-Specific Project Courses in Tertiary Education Environment: Challenges and Solution

    ERIC Educational Resources Information Center

    Rong, Guoping; Shao, Dong

    2012-01-01

    The importance of delivering software process courses to software engineering students has been more and more recognized in China in recent years. However, students usually cannot fully appreciate the value of software process courses by only learning methodology and principle in the classroom. Therefore, a process-specific project course was…

  5. Application of State Analysis and Goal-based Operations to a MER Mission Scenario

    NASA Technical Reports Server (NTRS)

    Morris, John Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.

    2006-01-01

    State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the system behavior in terms of state variables and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper first describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.

  6. Application of State Analysis and Goal-Based Operations to a MER Mission Scenario

    NASA Technical Reports Server (NTRS)

    Morris, J. Richard; Ingham, Michel D.; Mishkin, Andrew H.; Rasmussen, Robert D.; Starbird, Thomas W.

    2006-01-01

    State Analysis is a model-based systems engineering methodology employing a rigorous discovery process which articulates operations concepts and operability needs as an integrated part of system design. The process produces requirements on system and software design in the form of explicit models which describe the behavior of states and the relationships among them. By applying State Analysis to an actual MER flight mission scenario, this study addresses the specific real world challenges of complex space operations and explores technologies that can be brought to bear on future missions. The paper describes the tools currently used on a daily basis for MER operations planning and provides an in-depth description of the planning process, in the context of a Martian day's worth of rover engineering activities, resource modeling, flight rules, science observations, and more. It then describes how State Analysis allows for the specification of a corresponding goal-based sequence that accomplishes the same objectives, with several important additional benefits.

  7. NASA software specification and evaluation system design, part 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The research to develop methods for reducing the effort expended in software development and verification is reported. The development of a formal software requirements methodology, a formal specifications language, a programming language, a language preprocessor, and code analysis tools is discussed.

  8. AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Yam, Y.

    1994-01-01

    The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sample-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured with a curve fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/ JPL-LSCL which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing. 
    In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts, including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
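
    The core non-parametric step, estimating the transfer function in the frequency domain and flagging where the estimate is uncertain, can be sketched briefly; the plant, sample rate, and excitation below are hypothetical stand-ins, and AU-FREDI itself adds optimal input design, band composition, and parametric curve fitting on top of this:

      import numpy as np
      from scipy import signal

      fs = 100.0                                  # sample rate in Hz (assumed)
      t = np.arange(0, 60, 1 / fs)
      u = np.random.randn(t.size)                 # wideband stochastic excitation
      # Stand-in plant: one lightly damped structural mode near 2 Hz
      plant = signal.TransferFunction([40.0], [1.0, 0.25, (2 * np.pi * 2.0) ** 2])
      _, y, _ = signal.lsim(plant, u, t)

      # Non-parametric estimate H(f) = Puy(f) / Puu(f) from cross/auto spectra
      f, Puu = signal.welch(u, fs, nperseg=1024)
      _, Puy = signal.csd(u, y, fs, nperseg=1024)
      H = Puy / Puu
      # Low coherence marks frequencies where the output error, and hence the
      # additive uncertainty on H, is large
      _, coh = signal.coherence(u, y, fs, nperseg=1024)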

  9. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    NASA Astrophysics Data System (ADS)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, an increase in international investment in hardware and software technology to support programs that adopt algorithms for photomodeling or data management from laser scanners has significantly reduced the costs of operations in support of Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies, examining how current VR tools fit into a specific workflow and the issues related to the intensive use of such devices. It outlines a quick overview of the possible "virtual migration" phenomenon, assuming a possible integration with new high-speed internet systems capable of triggering a massive cyberspace colonization process that paradoxically would also affect everyday life and, more generally, human spatial perception. The contribution aims at analyzing the application systems used for low cost 3d photogrammetry by means of a precise pipeline, clarifying how a 3d model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion on virtual exploration platforms. The workflow analysis follows some case studies related to photomodeling, digital retopology and "virtual 3d transfer" of some small archaeological artifacts and an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations are conducted on cheap or freely licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data processing speed and management.

  10. Quantitative evaluation of a thrust vector controlled transport at the conceptual design phase

    NASA Astrophysics Data System (ADS)

    Ricketts, Vincent Patrick

    The impetus to innovate, to push the bounds and break the molds of evolutionary design trends, often comes from competition but sometimes requires catalytic political legislation. For this research endeavor, the 'catalyzing legislation' comes in response to the rise in cost of fossil fuels and the request put forth by NASA that aircraft manufacturers demonstrate a reduction in aircraft fuel consumption of 60% or more within 30 years. This necessitates that novel technologies be considered to achieve these values of improved performance. One such technology is thrust vector control (TVC). The beneficial characteristic of thrust vector control technology applied to the traditional tail-aft configuration (TAC) commercial transport is its ability to retain the operational advantages of this highly evolved aircraft type, such as cabin evacuation, ground operation, safety, and certification. This study explores whether the TVC transport concept offers improved flight performance by synergistically reducing the traditional empennage size, resulting overall in reduced weight and drag, and therefore reduced aircraft fuel consumption. In particular, this study explores whether the TVC technology in combination with the reduced empennage methodology enables the TAC aircraft to evolve synergistically while complying with current safety and certification regulation. This research utilizes the multi-disciplinary parametric sizing software, AVD Sizing, developed by the Aerospace Vehicle Design (AVD) Laboratory. The sizing software is responsible for visualizing the total system solution space via parametric trades and is capable of determining whether the TVC technology can enable the TAC aircraft to evolve synergistically. This study indicates that the TVC plus reduced empennage methodology shows marked improvements in performance and cost.

  11. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  12. Automated x-ray/light field congruence using the LINAC EPID panel.

    PubMed

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINACs). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, subsequently reducing overall cost, processing, and analysis time, removing operator dependency, and removing the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software, allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted with semiautomatic processing by four independent operators, due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm, compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary by as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model) after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. Additionally, congruence testing can be easily performed for all four cardinal gantry angles, which can be difficult when using radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method, allowing decommissioning of the film processor.
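
    The field-size half of such a measurement reduces to locating the 50%-of-maximum edges in a beam profile taken through the image; a toy version, with an invented profile and an assumed pixel pitch, might look like:

      import numpy as np

      def field_width_mm(profile, pixel_mm):
          """Field size as the distance between the 50%-of-maximum profile edges."""
          half = 0.5 * profile.max()
          above = np.where(profile >= half)[0]
          return (above[-1] - above[0]) * pixel_mm

      # Hypothetical 1D profile through the centre of an EPID image
      x = np.arange(512)
      profile = np.where(np.abs(x - 256) < 100, 1000.0, 20.0)  # crude 200-pixel field
      print(field_width_mm(profile, pixel_mm=0.392))           # pitch value assumed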

  13. Selecting a software development methodology. [of digital flight control systems

    NASA Technical Reports Server (NTRS)

    Jones, R. E.

    1981-01-01

    The state-of-the-art analytical techniques for the development and verification of digital flight control software are studied, and a practical, designer-oriented development and verification methodology is produced. The effectiveness of the analytic techniques chosen for the development and verification methodology is assessed both technically and financially. Technical assessments analyze the error preventing and detecting capabilities of the chosen technique in all of the pertinent software development phases. Financial assessments describe the cost impact of using the techniques, specifically, the cost of implementing and applying the techniques as well as the realizable cost savings. Both the technical and financial assessments are quantitative where possible. In the case of techniques which cannot be quantitatively assessed, qualitative judgments are expressed about the effectiveness and cost of the techniques. The reasons why quantitative assessments are not possible are documented.

  14. 75 FR 11918 - Hewlett Packard Company, Business Critical Systems, Mission Critical Business Software Division...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... Packard Company, Business Critical Systems, Mission Critical Business Software Division, Openvms Operating... Business Software Division, Openvms Operating System Development Group, Including an Employee Operating Out... Company, Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating System...

  15. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board providing command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Climate Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those requirements. This allows projects leeway to meet these requirements in the many forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. This update also incorporates advances in the state of the practice of software safety from academia and private industry. It addresses some of the more common issues now facing software developers in the NASA environment, such as the use of Commercial Off-the-Shelf Software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update, which underwent NASA-wide internal reviews by the software engineering, quality, safety, and project management disciplines, as well as expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes. It will then give a brief overview of the NASA Software Safety Process, including an overview of the key personnel responsibilities and functions that must be performed for safety-critical software.

  16. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    (Garbled figure residue; recoverable application categories: manned and unmanned spacecraft, batch systems, airborne avionics, and real-time closed-loop operations.) ...software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were

  17. Measuring the software process and product: Lessons learned in the SEL

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1985-01-01

    The software development process and product can and should be measured. The software measurement process at the Software Engineering Laboratory (SEL) has taught a major lesson: develop a goal-driven paradigm (also characterized as a goal/question/metric paradigm) for data collection. Project analysis under this paradigm leads to a design for evaluating and improving the methodology of software development and maintenance.

  18. LANDSAT-D flight segment operations manual. Appendix B: OBC software operations

    NASA Technical Reports Server (NTRS)

    Talipsky, R.

    1981-01-01

    The LANDSAT 4 satellite contains two NASA standard spacecraft computers and 65,536 words of memory. Onboard computer software is divided into flight executive and applications processors. Both applications processors and the flight executive use one or more of 67 system tables to obtain variables, constants, and software flags. Output from the software for monitoring operation is via 49 OBC telemetry reports subcommutated in the spacecraft telemetry. Information is provided about the flight software as it is used to control the various spacecraft operations and interpret operational OBC telemetry. Processor function descriptions, processor operation, software constraints, processor system tables, processor telemetry, and processor flow charts are presented.

  19. 75 FR 5146 - Hewlett Packard Company Business Critical Systems, Mission Critical Business Software Division...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... Packard Company Business Critical Systems, Mission Critical Business Software Division, OpenVMS Operating... Software Division, OpenVMS Operating System Development Group, Including an Employee Operating Out of the..., Mission Critical Business Software Division, OpenVMS Operating System Development Group, including...

  20. Monte Carlo Methodology Serves Up a Software Success

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Widely used for the modeling of gas flows through the computation of the motion and collisions of representative molecules, the Direct Simulation Monte Carlo method has become the gold standard for producing research and engineering predictions in the field of rarefied gas dynamics. Direct Simulation Monte Carlo was first introduced in the early 1960s by Dr. Graeme Bird, a professor at the University of Sydney, Australia. It has since proved to be a valuable tool to the aerospace and defense industries in providing design and operational support data, as well as flight data analysis. In 2002, NASA brought to the forefront a software product that maintains the same basic physics formulation of Dr. Bird's method, but provides effective modeling of complex, three-dimensional, real vehicle simulations and parallel processing capabilities to handle additional computational requirements, especially in areas where computational fluid dynamics (CFD) is not applicable. NASA's Direct Simulation Monte Carlo Analysis Code (DAC) software package is now considered the Agency's premier high-fidelity simulation tool for predicting vehicle aerodynamics and aerothermodynamic environments in rarefied, or low-density, gas flows.

  1. A SOFTWARE TOOL TO COMPARE MEASURED AND SIMULATED BUILDING ENERGY PERFORMANCE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maile, Tobias; Bazjanac, Vladimir; O'Donnell, James

    2011-11-01

    Building energy performance is often inadequate when compared to design goals. To link design goals to actual operation one can compare measured with simulated energy performance data. Our previously developed comparison approach is the Energy Performance Comparison Methodology (EPCM), which enables the identification of performance problems based on a comparison of measured and simulated performance data. In the context of this method, we developed a software tool that provides graphing and data processing capabilities for the two performance data sets. The software tool, called SEE IT (Stanford Energy Efficiency Information Tool), eliminates the need for manual generation of data plots and data reformatting. SEE IT makes the generation of time series, scatter and carpet plots independent of the source of data (measured or simulated) and provides a valuable tool for comparing measurements with simulation results. SEE IT also allows assigning data points on a predefined building object hierarchy and supports different versions of simulated performance data. This paper briefly introduces the EPCM, describes the SEE IT tool and illustrates its use in the context of a building case study.
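
    The kind of plotting SEE IT automates can be approximated in a few lines of matplotlib; the load profiles below are invented placeholders rather than EPCM data:

      import numpy as np
      import matplotlib.pyplot as plt

      hours = np.arange(24)
      measured = 50 + 30 * np.exp(-((hours - 14) / 4.0) ** 2)   # placeholder kWh series
      simulated = 48 + 33 * np.exp(-((hours - 13) / 4.0) ** 2)

      fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
      ax1.plot(hours, measured, label="measured")
      ax1.plot(hours, simulated, label="simulated")
      ax1.set(xlabel="hour of day", ylabel="electric load [kWh]")
      ax1.legend()
      ax2.scatter(measured, simulated, s=12)   # scatter view of the same pairs
      ax2.plot([40, 85], [40, 85], "k--")      # perfect-agreement line
      ax2.set(xlabel="measured", ylabel="simulated")
      plt.tight_layout()
      plt.show()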

  2. MisTec - A software application for supporting space exploration scenario options and technology development analysis and planning

    NASA Technical Reports Server (NTRS)

    Horsham, Gary A. P.

    1992-01-01

    The structure and composition of a new, emerging software application, which models and analyzes space exploration scenario options for feasibility based on technology development projections, are presented. The software application consists of four main components: a scenario generator for designing and inputting scenario options and constraints; a processor which performs algorithmic coupling and options analyses of mission activity requirements and technology capabilities; a results display which graphically and textually shows coupling and options analysis results; and a data/knowledge base which contains information on a variety of mission activities and (power and propulsion) technology system capabilities. The general long-range study process used by NASA to support recent studies is briefly introduced to provide the primary basis for comparison when discussing the potential advantages to be gained from developing and applying this kind of application. A hypothetical example of a scenario option is presented to facilitate a conceptual understanding of what the application is, how it works, what the operating methodology is, and when it might be applied.

  3. MisTec: A software application for supporting space exploration scenario options and technology development analysis and planning

    NASA Technical Reports Server (NTRS)

    Horsham, Gary A. P.

    1991-01-01

    The structure and composition of a new, emerging software application, which models and analyzes space exploration scenario options for feasibility based on technology development projections, are presented. The software application consists of four main components: a scenario generator for designing and inputting scenario options and constraints; a processor which performs algorithmic coupling and options analyses of mission activity requirements and technology capabilities; a results display which graphically and textually shows coupling and options analysis results; and a data/knowledge base which contains information on a variety of mission activities and (power and propulsion) technology system capabilities. The general long-range study process used by NASA to support recent studies is briefly introduced to provide the primary basis for comparison when discussing the potential advantages to be gained from developing and applying this kind of application. A hypothetical example of a scenario option is presented to facilitate a conceptual understanding of what the application is, how it works, what the operating methodology is, and when it might be applied.

  4. Use of multilevel modeling for determining optimal parameters of heat supply systems

    NASA Astrophysics Data System (ADS)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to the flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of the equipment used and the methods of its construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; and the SOSNA software system for determining optimum parameters of intricate heat supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems of large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and Admiralty regions of St. Petersburg, the city of Bratsk, and the Magistral'nyi settlement.

  5. Integral Design Methodology of Photocatalytic Reactors for Air Pollution Remediation.

    PubMed

    Passalía, Claudio; Alfano, Orlando M; Brandi, Rodolfo J

    2017-06-07

    An integral reactor design methodology was developed to address the optimal design of photocatalytic wall reactors to be used in air pollution control. For a target pollutant to be eliminated from an air stream, the proposed methodology starts from a mechanistically derived reaction rate. The determination of intrinsic kinetic parameters is associated with the use of a simple-geometry laboratory scale reactor, operation under kinetic control, and a uniform incident radiation flux, which allows computing the local superficial rate of photon absorption. Thus, a simple model can describe the mass balance and a solution may be obtained. The kinetic parameters may be estimated by combining the mathematical model with the experimental results. The validated intrinsic kinetics obtained may be directly used in the scaling-up of any reactor configuration and size. The bench scale reactor may require the use of complex computational software to obtain the fields of velocity, radiation absorption and species concentration. The complete methodology was successfully applied to the elimination of airborne formaldehyde. The kinetic parameters were determined in a flat plate reactor, whilst a bench scale corrugated wall reactor was used to illustrate the scaling-up methodology. In addition, an optimal folding angle of the corrugated reactor was found using computational fluid dynamics tools.
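
    The kinetic-parameter step, combining the mass-balance model with laboratory data, is essentially a nonlinear fit. Below is a sketch under a commonly used Langmuir-Hinshelwood rate form with invented data points; the paper's actual rate expression also involves the local superficial rate of photon absorption:

      import numpy as np
      from scipy.optimize import curve_fit

      def lh_rate(C, k, K):
          """Langmuir-Hinshelwood form often assumed for photocatalytic kinetics."""
          return k * K * C / (1.0 + K * C)

      # Hypothetical (concentration, observed rate) pairs from a flat-plate reactor
      C_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # ppm
      r_obs = np.array([0.21, 0.35, 0.52, 0.68, 0.78])   # ppm/min
      (k_fit, K_fit), cov = curve_fit(lh_rate, C_obs, r_obs, p0=[1.0, 0.5])
      print(k_fit, K_fit)   # intrinsic parameters reusable at any reactor scale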

  6. Modular design of synthetic gene circuits with biological parts and pools.

    PubMed

    Marchisio, Mario Andrea

    2015-01-01

    Synthetic gene circuits can be designed in an electronic fashion by displaying their basic components (Standard Biological Parts and Pools of molecules) on the computer screen and connecting them with hypothetical wires. This procedure, achieved by our add-on for the software ProMoT, was successfully applied to bacterial circuits. Recently, we have extended this design methodology to eukaryotic cells. Here, highly complex components such as promoters and Pools of mRNA contain hundreds of species and reactions whose calculation demands a rule-based modeling approach. We showed how to build such complex modules via the joint employment of the software BioNetGen (rule-based modeling) and ProMoT (modularization). In this chapter, we illustrate how to utilize our computational tool for synthetic biology with the in silico implementation of a simple eukaryotic gene circuit that performs the logic AND operation.
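
    As a flavor of what such a circuit computes, here is a coarse ODE sketch of a transcriptional AND gate in which output synthesis needs both inputs; all names and parameters are hypothetical, and this is far simpler than the rule-based models the chapter builds:

      import numpy as np
      from scipy.integrate import odeint

      def hill(x, K=1.0, n=2):
          """Hill activation: near 0 below threshold K, near 1 above it."""
          return x**n / (K**n + x**n)

      def and_gate(y, t, in1, in2, beta=1.0, delta=0.1):
          """Output protein: synthesis requires both inputs, first-order decay."""
          return beta * hill(in1) * hill(in2) - delta * y

      t = np.linspace(0, 100, 200)
      for in1, in2 in [(0, 0), (5, 0), (0, 5), (5, 5)]:
          out = odeint(and_gate, 0.0, t, args=(in1, in2))
          print(in1 > 0, in2 > 0, "->", out[-1, 0] > 1.0)   # True only for (5, 5)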

  7. An ontology based information system for the management of institutional repository's collections

    NASA Astrophysics Data System (ADS)

    Tsolakidis, A.; Kakoulidis, P.; Skourlas, C.

    2015-02-01

    In this paper we discuss a simple methodological approach to create and customize institutional repositories for the domain of technological education. The use of the open source software platform DSpace is proposed to build up the repository application and provide access to digital resources including research papers, dissertations, administrative documents, educational material, etc. The use of OWL ontologies is also proposed for indexing and accessing the various heterogeneous items stored in the repository. Customization and operation of a platform for the selection and use of terms or parts of similar existing OWL ontologies is also described. This platform could be based on the open source software Protégé, which supports OWL, is widely used, and also supports visualization, SPARQL, etc. The combined use of the OWL platform and the DSpace repository forms a basis for creating customized ontologies, accommodating the semantic metadata of items and facilitating searching.

  8. Localization-based super-resolution imaging meets high-content screening.

    PubMed

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  9. IFC BIM-Based Methodology for Semi-Automated Building Energy Performance Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazjanac, Vladimir

    2008-07-01

    Building energy performance (BEP) simulation is still rarely used in building design, commissioning and operations. The process is too costly and too labor intensive, and it takes too long to deliver results. Its quantitative results are not reproducible due to arbitrary decisions and assumptions made in simulation model definition, and can be trusted only under special circumstances. A methodology to semi-automate BEP simulation preparation and execution makes this process much more effective. It incorporates principles of information science and aims to eliminate inappropriate human intervention that results in subjective and arbitrary decisions. This is achieved by automating every part of the BEP modeling and simulation process that can be automated, by relying on data from original sources, and by making any necessary data transformation rule-based and automated. This paper describes the new methodology and its relationship to IFC-based BIM and software interoperability. It identifies five steps that are critical to its implementation, and shows what part of the methodology can be applied today. The paper concludes with a discussion of application to simulation with EnergyPlus, and describes data transformation rules embedded in the new Geometry Simplification Tool (GST).

  10. ARC Software and Models

    Science.gov Websites

    (Snippet residue from the project website: produces software code and methodologies that are transferred to TARDEC and industry partners. Citation fragments: "...constraints", ASME Dynamic Systems and Control Conference, 2013, DOI:10.1115/DSCC2013-3935; "...Software Monitoring", IEEE Transactions on Control Systems Technology, DOI:10.1109/TCST.2012.2217143.)

  11. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
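
    The metrics reported (loading time, CPU usage, working set) can be scripted portably; this sketch assumes the third-party psutil package and a hypothetical ASCII point cloud file, and is not the SITEGI code:

      import os, time
      import numpy as np
      import psutil  # assumed available (pip install psutil)

      proc = psutil.Process(os.getpid())
      proc.cpu_percent(None)              # prime the per-process CPU counter

      t0 = time.perf_counter()
      pts = np.loadtxt("cloud.xyz")       # hypothetical "x y z" ASCII point cloud
      load_s = time.perf_counter() - t0

      cpu = proc.cpu_percent(None)        # CPU usage since the primed call
      rss_mb = proc.memory_info().rss / 2**20   # resident memory ("working set")
      print(f"{len(pts)} points, load {load_s:.2f} s, CPU {cpu:.0f}%, RSS {rss_mb:.0f} MB")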

  12. The GenABEL Project for statistical genomics.

    PubMed

    Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S

    2016-01-01

    Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of the software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination.

  13. The IceCube Neutrino Observatory: instrumentation and online systems

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Andeen, K.; Anderson, T.; Ansseau, I.; Anton, G.; Archinger, M.; Argüelles, C.; Auer, R.; Auffenberg, J.; Axani, S.; Baccus, J.; Bai, X.; Barnet, S.; Barwick, S. W.; Baum, V.; Bay, R.; Beattie, K.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Bendfelt, T.; BenZvi, S.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blot, S.; Boersma, D.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Bouchta, A.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Bron, S.; Burgman, A.; Burreson, C.; Carver, T.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cross, R.; Day, C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; del Pino Rosendo, E.; Dembinski, H.; De Ridder, S.; Descamps, F.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dujmovic, H.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Edwards, W. R.; Ehrhardt, T.; Eichmann, B.; Eller, P.; Euler, S.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Fösig, C.-C.; Franckowiak, A.; Frère, M.; Friedman, E.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Giang, W.; Gladstone, L.; Glauch, T.; Glowacki, D.; Glüsenkamp, T.; Goldschmidt, A.; Gonzalez, J. G.; Grant, D.; Griffith, Z.; Gustafsson, L.; Haack, C.; Hallgren, A.; Halzen, F.; Hansen, E.; Hansmann, T.; Hanson, K.; Haugen, J.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Heller, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Hoshina, K.; Huang, F.; Huber, M.; Hulth, P. O.; Hultqvist, K.; In, S.; Inaba, M.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, A.; Jones, B. J. P.; Joseph, J.; Kang, W.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kim, J.; Kim, M.; Kintscher, T.; Kiryluk, J.; Kitamura, N.; Kittler, T.; Klein, S. R.; Kleinfelder, S.; Kleist, M.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Konietz, R.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krasberg, M.; Krings, K.; Kroll, M.; Krückl, G.; Krüger, C.; Kunnen, J.; Kunwar, S.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Lanfranchi, J. L.; Larson, M. J.; Lauber, F.; Laundrie, A.; Lennarz, D.; Leich, H.; Lesiak-Bzdak, M.; Leuermann, M.; Lu, L.; Ludwig, J.; Lünemann, J.; Mackenzie, C.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mancina, S.; Mandelartz, M.; Maruyama, R.; Mase, K.; Matis, H.; Maunu, R.; McNally, F.; McParland, C. P.; Meade, P.; Meagher, K.; Medici, M.; Meier, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Minor, R. H.; Montaruli, T.; Moulai, M.; Murray, T.; Nahnhauer, R.; Naumann, U.; Neer, G.; Newcomb, M.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Patton, S.; Peiffer, P.; Penek, Ö.; Pepper, J. A.; Pérez de los Heros, C.; Pettersen, C.; Pieloth, D.; Pinat, E.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relethford, B.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Riedel, B.; Robertson, S.; Rongen, M.; Roucelle, C.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Rysewyk, D.; Sabbatini, L.; Sanchez Herrera, S. 
E.; Sandrock, A.; Sandroos, J.; Sandstrom, P.; Sarkar, S.; Satalecka, K.; Schlunder, P.; Schmidt, T.; Schoenen, S.; Schöneberg, S.; Schukraft, A.; Schumacher, L.; Seckel, D.; Seunarine, S.; Solarz, M.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stanev, T.; Stasik, A.; Stettner, J.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Ström, R.; Strotjohann, N. L.; Sulanke, K.-H.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Tatar, J.; Tenholt, F.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Thollander, L.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Turcati, A.; Unger, E.; Usner, M.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Rossem, M.; van Santen, J.; Vehring, M.; Voge, M.; Vogel, E.; Vraeghe, M.; Wahl, D.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Weiss, M. J.; Wendt, C.; Westerhoff, S.; Wharton, D.; Whelan, B. J.; Wickmann, S.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wisniewski, P.; Wolf, M.; Wood, T. R.; Woolsey, E.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zoll, M.

    2017-03-01

    The IceCube Neutrino Observatory is a cubic-kilometer-scale high-energy neutrino detector built into the ice at the South Pole. Construction of IceCube, the largest neutrino detector built to date, was completed in 2011 and enabled the discovery of high-energy astrophysical neutrinos. We describe here the design, production, and calibration of the IceCube digital optical module (DOM), the cable systems, computing hardware, and our methodology for drilling and deployment. We also describe the online triggering and data filtering systems that select candidate neutrino and cosmic ray events for analysis. Due to a rigorous pre-deployment protocol, 98.4% of the DOMs in the deep ice are operating and collecting data. IceCube routinely achieves a detector uptime of 99% by emphasizing software stability and monitoring. Detector operations have been stable since construction was completed, and the detector is expected to operate at least until the end of the next decade.

  14. Space Situational Awareness in the Joint Space Operations Center

    NASA Astrophysics Data System (ADS)

    Wasson, M.

    2011-09-01

    Flight safety of orbiting resident space objects is critical to our national interest and defense. United States Strategic Command has assigned the responsibility for Space Situational Awareness (SSA) to its Joint Functional Component Command - Space (JFCC SPACE) at Vandenberg Air Force Base. This paper will describe current SSA imperatives, new developments in SSA tools and developments in Defensive Operations. Current SSA processes are being examined to capture, and possibly improve, tasking of SSN sensors and "new" space-based sensors, "common" conjunction assessment methodology, and SSA sharing due to the growth seen over the last two years. The stand-up of a Defensive Ops Branch will highlight the need for advanced analysis and collaboration across space, weather, intelligence, and cyber specialties. New developments in SSA tools will be a description of computing hardware/software upgrades planned as well as the use of User-Defined Operating Pictures and visualization applications.

  15. An expert system for municipal solid waste management simulation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, M.C.; Chang, N.B.

    1996-12-31

    Optimization techniques have usually been used to model complicated metropolitan solid waste management systems to search for the best dynamic combination of waste recycling, facility siting, and system operation, where sophisticated and well-defined interrelationships are required in the modeling process. This paper instead applies Concurrent Object-Oriented Simulation (COOS), a new simulation software construction method, to bridge the gap between the physical system and its computer representation. A case study of the Kaohsiung solid waste management system in Taiwan illustrates the analytical methodology of COOS and its implementation in the creation of an expert system.

  16. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
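
    One simple instance of learning a model from streaming telemetry and updating it continuously is recursive least squares with a forgetting factor; this is an illustrative stand-in, not the data stream mining algorithms used for the ISS EPS models:

      import numpy as np

      class RLSModel:
          """Recursive least squares: refit y ~ w.x as each telemetry sample arrives."""
          def __init__(self, n, lam=0.99):
              self.w = np.zeros(n)
              self.P = np.eye(n) * 1e3
              self.lam = lam                 # forgetting factor favors recent data

          def update(self, x, y):
              Px = self.P @ x
              g = Px / (self.lam + x @ Px)   # gain vector
              self.w += g * (y - self.w @ x) # correct by the prediction error
              self.P = (self.P - np.outer(g, Px)) / self.lam

      rng = np.random.default_rng(0)
      true_w = np.array([2.0, -1.0, 0.5])    # pretend device characteristics
      model = RLSModel(n=3)
      for _ in range(500):                   # synthetic telemetry stream
          x = rng.standard_normal(3)
          model.update(x, true_w @ x + 0.01 * rng.standard_normal())
      print(model.w)                         # converges toward true_w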

  17. Automatic lesion detection in capsule endoscopy based on color saliency: closer to an essential adjunct for reviewing software.

    PubMed

    Iakovidis, Dimitris K; Koulaouzidis, Anastasios

    2014-11-01

    The advent of wireless capsule endoscopy (WCE) has revolutionized the diagnostic approach to small-bowel disease. However, the task of reviewing WCE video sequences is laborious and time-consuming; software tools offering automated video analysis would enable a timelier and potentially more accurate diagnosis. This study aimed to assess the validity of innovative, automatic lesion-detection software in WCE. It was performed at the Royal Infirmary of Edinburgh, United Kingdom, and the Technological Educational Institute of Central Greece, Lamia, Greece, using a total of 137 deidentified WCE single images, 77 showing pathology and 60 normal. A color feature-based pattern recognition methodology was devised and applied to this image set; unlike state-of-the-art approaches, it is capable of detecting several different types of lesions. The average performance, in terms of the area under the receiver-operating characteristic curve, reached 89.2 ± 0.9%. The best average performance was obtained for angiectasias (97.5 ± 2.4%) and nodular lymphangiectasias (96.3 ± 3.6%). Limitations include a single expert for annotation of pathologies, a single type of WCE model, and the use of single images instead of entire WCE videos. In summary, a simple yet effective approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented. Based on color pattern recognition, it outperforms previous state-of-the-art approaches. Moreover, it is robust in the presence of luminal contents and is capable of detecting even very small lesions. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  18. Sustaining Software-Intensive Systems

    DTIC Science & Technology

    2006-05-01

    (Snippet from the report's table of contents: 2.2 Multi-Service Operational Test and Evaluation; 2.3 Stable Software Baseline.) Requirements include a completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if not multi-service), a stable software production baseline, complete and current software documentation, and an Authority to Operate (ATO) for an

  19. Autonomous Performance Monitoring System: Monitoring and Self-Tuning (MAST)

    NASA Technical Reports Server (NTRS)

    Peterson, Chariya; Ziyad, Nigel A.

    2000-01-01

    Maintaining the long-term performance of software onboard a spacecraft can be a major factor in the cost of operations. In particular, the task of controlling and maintaining a future mission of distributed spacecraft will undoubtedly pose a great challenge, since the complexity of multiple spacecraft flying in formation grows rapidly as the number of spacecraft in the formation increases. Eventually, new approaches will be required to develop viable control systems that can handle the complexity of the data and that are flexible, reliable and efficient. In this paper we propose a methodology that aims to maintain the accuracy of flight software while reducing the computational complexity of software tuning tasks. The proposed Monitoring and Self-Tuning (MAST) method consists of two parts: a flight software monitoring algorithm and a tuning algorithm. The dependency on the software being monitored is mostly contained in the monitoring process, while the tuning process is a generic algorithm independent of detailed knowledge of the software. This architecture will enable MAST to be applicable to different onboard software controlling various dynamics of the spacecraft, such as attitude self-calibration and formation control. An advantage of MAST over conventional techniques such as filters or batch least squares is that the tuning algorithm uses a machine learning approach to handle uncertainty in the problem domain, reducing overall computational complexity. The underlying concept of this technique is a reinforcement learning scheme based on cumulative probability generated by the historical performance of the system. The success of MAST will depend heavily on the reinforcement scheme used in the tuning algorithm, which must guarantee that tuning solutions exist.

  20. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  1. Software reuse in spacecraft planning and scheduling systems

    NASA Technical Reports Server (NTRS)

    Mclean, David; Tuchman, Alan; Broseghini, Todd; Yen, Wen; Page, Brenda; Johnson, Jay; Bogovich, Lynn; Burkhardt, Chris; Mcintyre, James; Klein, Scott

    1993-01-01

    The use of a software toolkit and development methodology that supports software reuse is described. The toolkit includes source-code-level library modules and stand-alone tools which support such tasks as data reformatting and report generation, simple relational database applications, user interfaces, tactical planning, strategic planning and documentation. The current toolkit is written in C and supports applications that run on IBM PCs under DOS and on UNIX-based workstations under OpenLook and Motif. The toolkit is fully integrated for building scheduling systems that reuse AI knowledge base technology. A typical scheduling scenario and three examples of applications that utilize the reuse toolkit are briefly described. In addition to the tools themselves, a description of the software evolution and reuse methodology that was used is presented.

  2. [An educational software development proposal for nursing in neonatal cardiopulmonary resuscitation].

    PubMed

    Rodrigues, Rita de Cassia Vieira; Peres, Heloisa Helena Ciqueto

    2013-02-01

    The objective of this study was to develop an educational software program for nursing continuing education. The program was developed through applied methodological research using the learning management system methodology created by Galvis Panqueva in association with contextualized instructional design for the software design. As a result of this study, we created a computerized educational product (CEP) called ENFNET, and this study describes all the steps taken during its development. The creation of a CEP demands a great deal of study, dedication and investment, as well as specialized technical personnel to construct it. At the end of the study, the software was positively evaluated and shown to be a useful strategy to help users in their education, skills development and professional training.

  3. Designing Distributed Learning Environments with Intelligent Software Agents

    ERIC Educational Resources Information Center

    Lin, Fuhua, Ed.

    2005-01-01

    "Designing Distributed Learning Environments with Intelligent Software Agents" reports on the most recent advances in agent technologies for distributed learning. Chapters are devoted to the various aspects of intelligent software agents in distributed learning, including the methodological and technical issues on where and how intelligent agents…

  4. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on the extension and implementation of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  5. Treatment of dyeing wastewater by TiO2/H2O2/UV process: experimental design approach for evaluating total organic carbon (TOC) removal efficiency.

    PubMed

    Lee, Seung-Mok; Kim, Young-Gyu; Cho, Il-Hyoung

    2005-01-01

    Optimal operating conditions for treating dyeing wastewater were investigated using factorial design and response surface methodology (RSM). The experiment was statistically designed and carried out according to a 2(2) full factorial design with four factorial points, three center points, and four axial points. Linear and nonlinear regression were then applied to the data using the SAS software package. The independent variables were TiO2 dosage and H2O2 concentration; the total organic carbon (TOC) removal efficiency of the dyeing wastewater was the dependent variable. From the factorial design and RSM, a maximum removal efficiency of 85% was obtained at a TiO2 dosage of 1.82 gL(-1) and an H2O2 concentration of 980 mgL(-1) for a 20 min oxidation reaction.
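
    As a rough illustration of the response-surface step described in this abstract, the sketch below fits a second-order model to invented design points and reads the predicted optimum off a grid; it uses numpy in place of the SAS package, and all data values are hypothetical rather than taken from the study.

        import numpy as np

        # Fit a full quadratic response surface to hypothetical design points:
        # (TiO2 dose in g/L, H2O2 concentration in mg/L) -> TOC removal in %.
        tio2 = np.array([1.0, 1.0, 2.5, 2.5, 1.75, 1.75, 1.75, 0.7, 2.8, 1.75, 1.75])
        h2o2 = np.array([500, 1400, 500, 1400, 950, 950, 950, 950, 950, 300, 1600.0])
        toc = np.array([61, 70, 72, 74, 84, 85, 83, 66, 75, 60, 69.0])

        # Design matrix for z = b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y
        X = np.column_stack([np.ones_like(tio2), tio2, h2o2,
                             tio2**2, h2o2**2, tio2 * h2o2])
        beta, *_ = np.linalg.lstsq(X, toc, rcond=None)

        # Scan the fitted surface on a grid and report the predicted optimum
        T, H = np.meshgrid(np.linspace(0.7, 2.8, 100), np.linspace(300, 1600, 100))
        Z = (beta[0] + beta[1]*T + beta[2]*H
             + beta[3]*T**2 + beta[4]*H**2 + beta[5]*T*H)
        i = np.unravel_index(np.argmax(Z), Z.shape)
        print(f"predicted optimum: {Z[i]:.1f}% removal at "
              f"TiO2 = {T[i]:.2f} g/L, H2O2 = {H[i]:.0f} mg/L")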

  6. Waste treatability guidance program. User's guide. Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, C.

    1995-12-21

    DOE sites across the country generate and manage radioactive, hazardous, mixed, and sanitary wastes. It is necessary for each site to find the technologies and associated capacities required to manage its waste. One role of the DOE HQ Office of Environmental Restoration and Waste Management is to facilitate the integration of the site-specific plans into coherent national plans. DOE has developed a standard methodology for defining and categorizing waste streams into treatability groups based on characteristic parameters that influence waste management technology needs. This Waste Treatability Guidance Program automates the Guidance Document for the categorization of waste information into treatability groups; the application provides a consistent implementation of the methodology across the National TRU Program. This User's Guide provides instructions on how to use the program, including installation instructions and program operation. This document satisfies the requirements of the Software Quality Assurance Plan.

  7. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem addressed in this paper is to explore how Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance through the development of a qualitative and reliable engine control system (QRECS). Specifically, rocket engine control is enhanced using SCT, innovative data mining tools, and the sound software engineering practices used in Marshall's Flight Software Group (FSG). The principal goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to the issue of reliability. The approaches outlined in this paper require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks), some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC), which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory at NASA's Marshall Space Flight Center, building 4476, managed by the Avionics Department. A brief plan of action for designing, developing, implementing, and testing a Phase One effort for QRECS is given, along with expected results. Phase One focuses on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that employing soft computing technologies further improves the quality and reliability of the overall approach to engine controller development and further ensures vehicle safety. The final product this paper proposes is an approach to developing an alternative low-cost engine controller capable of performing in unique vision spacecraft vehicles requiring low-cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.
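
    The abstract names fuzzy logic among the soft computing technologies considered; the fragment below is a generic illustration of that idea, not the QRECS design: a normalized pressure error is mapped to a valve trim through three triangular membership functions and weighted-average defuzzification, with all shapes and gains invented.

        def tri(x, a, b, c):
            """Triangular membership: rises from a, peaks at b, falls to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def valve_trim(pressure_error):
            """Map a normalized chamber-pressure error to a valve trim command."""
            mu_low = tri(pressure_error, -2.0, -1.0, 0.0)    # pressure too low
            mu_ok = tri(pressure_error, -1.0, 0.0, 1.0)      # pressure near setpoint
            mu_high = tri(pressure_error, 0.0, 1.0, 2.0)     # pressure too high
            # Rule consequents (singletons): open, hold, or close the valve a little
            rules = [(+0.10, mu_low), (0.0, mu_ok), (-0.10, mu_high)]
            total = sum(mu for _, mu in rules)
            # Weighted-average defuzzification; no trim if no rule fires
            return sum(u * mu for u, mu in rules) / total if total else 0.0

        print(valve_trim(-0.4))  # slightly low pressure -> small opening command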

  8. Electronic Design Automation: Integrating the Design and Manufacturing Functions

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Salkowski, Charles

    1997-01-01

    As the complexity of electronic systems grows, the traditional design practice, a sequential process, is replaced by concurrent design methodologies. A major advantage of concurrent design is that the feedback from software and manufacturing engineers can be easily incorporated into the design. The implementation of concurrent engineering methodologies is greatly facilitated by employing the latest Electronic Design Automation (EDA) tools. These tools offer integrated simulation of the electrical, mechanical, and manufacturing functions and support virtual prototyping, rapid prototyping, and hardware-software co-design. This report presents recommendations for enhancing the electronic design and manufacturing capabilities and procedures at JSC based on a concurrent design methodology that employs EDA tools.

  9. Knowledge Sharing through Pair Programming in Learning Environments: An Empirical Study

    ERIC Educational Resources Information Center

    Kavitha, R. K.; Ahmed, M. S.

    2015-01-01

    Agile software development is an iterative and incremental methodology, where solutions evolve from self-organizing, cross-functional teams. Pair programming is a type of agile software development technique where two programmers work together with one computer for developing software. This paper reports the results of the pair programming…

  10. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Badger, W.; Beckman, C. S.; Beshers, G.; Hammerslag, D.; Kimball, J.; Kirslis, P. A.; Render, H.; Richards, P.; Terwilliger, R.

    1984-01-01

    The project to automate the management of software production systems is described. The SAGA system is a software environment designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. Several major components of the SAGA system have been completed in prototype form. The construction methods are described.

  11. 5G: rethink mobile communications for 2020+.

    PubMed

    Chih-Lin, I; Han, Shuangfeng; Xu, Zhikun; Sun, Qi; Pan, Zhengang

    2016-03-06

    The 5G network is anticipated to meet the challenging requirements of mobile traffic in the 2020s, which are characterized by super-high data rates, low latency, high mobility, high energy efficiency and high traffic density. This paper provides an overview of China Mobile's 5G vision and potential solutions. Three key characteristics of 5G are analysed, i.e. super fast, soft and green. The main 5G R&D themes are further elaborated, which include five fundamental rethinkings of the traditional design methodologies. The 5G network design considerations are also discussed, with cloud radio access network, ultra-dense network, software defined network and network function virtualization examined as key potential solutions towards a green and soft 5G network. The paradigm shift from traditional cell-centric operation to user-centric network operation is also investigated, where decoupled downlink and uplink, decoupled control and data, and adaptive multiple connections provide sufficient means to achieve a user-centric 5G network with 'no more cells'. The software defined air interface is investigated under a uniform framework that can adapt its parameters to satisfy the varied requirements of different 5G scenarios. © 2016 The Author(s).

  12. IEEE Computer Society/Software Engineering Institute Software Process Achievement (SPA) Award 2009

    DTIC Science & Technology

    2011-03-01

    capabilities to our GDM. We also introduced software as a service (SaaS) as part of our technology solutions and have further enhanced our ability to... Remaining fragments are from the report's glossary and contents: PROSPER, the Infosys production support methodology; Q&P, quality and productivity; R&D, research and development; SaaS, software as a service; Software Development Life Cycle (SDLC); Scientific Estimation Coverage by Service Line. (CMU/SEI-2011-TR-008)

  13. An Agile Course-Delivery Approach

    ERIC Educational Resources Information Center

    Capellan, Mirkeya

    2009-01-01

    In the world of software development, agile methodologies have gained popularity thanks to their lightweight methodologies and flexible approach. Many advocates believe that agile methodologies can provide significant benefits if applied in the educational environment as a teaching method. The need for an approach that engages and motivates…

  14. A Guide to the Application of Probability Risk Assessment Methodology and Hazard Risk Frequency Criteria as a Hazard Control for the Use of the Mobile Servicing System on the International Space Station

    NASA Astrophysics Data System (ADS)

    D'silva, Oneil; Kerrison, Roger

    2013-09-01

    A key feature of the increased utilization of space robotics is the automation of extravehicular manned space activities, which significantly reduces the potential for catastrophic hazards while simultaneously minimizing the overall costs associated with manned spaceflight. The principal scope of the paper is to evaluate the use of industry-standard probability risk/safety assessment (PRA/PSA) methodologies and hazard risk frequency criteria as a hazard control. This paper illustrates the applicability of combining the selected probability risk assessment methodology with hazard risk frequency criteria in order to apply the safety controls that allow for the increased use of the Mobile Servicing System (MSS) robotic system on the International Space Station. It considers factors such as component failure rate reliability, software reliability, periods of operation and dormancy, and fault tree analyses and their effects on the probability risk assessments. The paper concludes with suggestions for incorporating existing industry risk/safety plans to create an applicable safety process for future activities and programs.

  15. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    NASA Astrophysics Data System (ADS)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent oriented software development. The AOM methodology was proposed to aid developers by introducing technique, terminology, notation and guidelines during agent system development. Although AOM is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and its adoption is still in its infancy. Among the reasons is that there are few case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. They demonstrate how AOM is useful for epidemiology and ecology studies, further validating AOM in a qualitative manner.

  16. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the task-uncertainty of organizational-information-processing relationships from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are treated as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes, and well researched in the organizational sciences, are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.

  17. Space Shuttle RTOS Bayesian Network

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high-priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems, such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, a joint venture between Boeing and Lockheed Martin and the prime contractor for Space Shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and prior probabilities of the network are elicited from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores. Using a prioritization of measures from the decision-maker, trade-offs between the scores are used to rank-order the available set of RTOS candidates.
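
    The second-stage trade-off can be pictured with a toy calculation such as the one below; the per-measure scores, which the paper derives from an expert-built Bayesian network, are simply assumed here, and the candidate names and priority weights are invented.

        # Priority weights from the decision-maker and per-measure scores in [0, 1];
        # the paper's first stage would derive such scores from a Bayesian network.
        weights = {"reliability": 0.4, "certifiability": 0.3,
                   "efficiency": 0.2, "cost": 0.1}
        candidates = {
            "RTOS-A": {"reliability": 0.90, "certifiability": 0.70,
                       "efficiency": 0.60, "cost": 0.50},
            "RTOS-B": {"reliability": 0.75, "certifiability": 0.85,
                       "efficiency": 0.80, "cost": 0.70},
        }

        def aggregate(scores):
            """Priority-weighted overall score for one candidate."""
            return sum(weights[m] * scores[m] for m in weights)

        # Rank-order the candidate set, best first
        for name in sorted(candidates, key=lambda n: -aggregate(candidates[n])):
            print(f"{name}: {aggregate(candidates[name]):.3f}")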

  18. Reconstruction of Cyber and Physical Software Using Novel Spread Method

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    Cyber-physical software has received attention for many years, since 2010. Many researchers would disagree with deploying the traditional Spread Method for the reconstruction of cyber-physical software, which embodies the key principles of cyber-physical system reconstruction. NSM (Novel Spread Method), our new methodology for the reconstruction of cyber-physical software, is proposed as the solution to these challenges.

  19. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming frameworks and how a developer might prepare their software for application streaming. We will also examine the secondary benefits realized by moving legacy software to the cloud. Finally, we will examine the process by which a legacy Java application, the Integrated Data Viewer (IDV), is to be adapted for tablet computing via Application Streaming.

  20. Building information models for astronomy projects

    NASA Astrophysics Data System (ADS)

    Ariño, Javier; Murga, Gaizka; Campo, Ramón; Eletxigerra, Iñigo; Ampuero, Pedro

    2012-09-01

    A Building Information Model is a digital representation of the physical and functional characteristics of a building. BIMs represent the geometrical characteristics of the building, but also properties such as bills of quantities, definitions of COTS components, status of material at the different stages of the project, project economic data, etc. The BIM methodology, which is well established in the Architecture Engineering and Construction (AEC) domain for conventional buildings, has been brought one step forward in its application to astronomical/scientific facilities. In these facilities, steel/concrete structures have demanding dynamic and seismic requirements, M&E installations are complex, and a large amount of special equipment and mechanisms is involved as a fundamental part of the facility. The detailed design definition is typically implemented by different design teams in specialized design software packages. In order to allow the coordinated work of different engineering teams, the overall model and its associated engineering database are progressively integrated using coordination and roaming software, which can be used before the construction phase for checking interferences, planning the construction sequence, studying maintenance operations, reporting to the project office, etc. This integrated design and construction approach allows the construction sequence to be planned efficiently (4D) and is a powerful tool for studying and analyzing alternative construction sequences in detail and ideally coordinating the work of different construction teams. In addition, the engineering, construction and operational databases can be linked to the virtual model (6D), which gives end users an invaluable tool for lifecycle management, as all facility information can be easily accessed, added or replaced. This paper presents the BIM methodology as implemented by IDOM, with the E-ELT and ATST enclosures as application examples.

  1. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects in improving the quality of the software they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  2. Logic flowgraph methodology - A tool for modeling embedded systems

    NASA Technical Reports Server (NTRS)

    Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.

    1991-01-01

    The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.

  3. The GenABEL Project for statistical genomics

    PubMed Central

    Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.

    2016-01-01

    Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of the software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381

  4. Validation of a digital audio recording method for the objective assessment of cough in the horse.

    PubMed

    Duz, M; Whittaker, A G; Love, S; Parkin, T D H; Hughes, K J

    2010-10-01

    To validate the use of digital audio recording and analysis for quantification of coughing in horses. Part A: Nine simultaneous digital audio and video recordings were collected individually from seven stabled horses over a 1 h period using a digital audio recorder attached to the halter. Audio files were analysed using audio analysis software. Video and audio recordings were analysed for cough count and timing by two blinded operators on two occasions using a randomised study design for determination of intra-operator and inter-operator agreement. Part B: Seventy-eight hours of audio recordings obtained from nine horses were analysed once by two blinded operators to assess inter-operator repeatability on a larger sample. Part A: There was complete agreement between audio and video analyses and inter- and intra-operator analyses. Part B: There was >97% agreement between operators on number and timing of 727 coughs recorded over 78 h. The results of this study suggest that the cough monitor methodology used has excellent sensitivity and specificity for the objective assessment of cough in horses and intra- and inter-operator variability of recorded coughs is minimal. Crown Copyright 2010. Published by Elsevier India Pvt Ltd. All rights reserved.

  5. Mechanistic-empirical Pavement Design Guide Implementation

    DOT National Transportation Integrated Search

    2010-06-01

    The recently introduced Mechanistic-Empirical Pavement Design Guide (MEPDG) and associated computer software provides a state-of-practice mechanistic-empirical highway pavement design methodology. The MEPDG methodology is based on pavement responses ...

  6. Space Telecommunications Radio System (STRS) Compliance Testing

    NASA Technical Reports Server (NTRS)

    Handler, Louis M.

    2011-01-01

    The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance to the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that any non-compliances may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Attached is also a set of guidelines on what to look for in addition to the requirements to aid in the document review process.

  7. Development of a flight software testing methodology

    NASA Technical Reports Server (NTRS)

    Mccluskey, E. J.; Andrews, D. M.

    1985-01-01

    The research to develop a testing methodology for flight software is described. An experiment was conducted in using assertions to dynamically test digital flight control software. The experiment showed that 87% of typical errors introduced into the program would be detected by assertions. Detailed analysis of the test data showed that the number of assertions needed to detect those errors could be reduced to a minimal set. The analysis also revealed that the most effective assertions tested program parameters that provided greater indirect (collateral) testing of other parameters. In addition, a prototype watchdog task system was built to evaluate the effectiveness of executing assertions in parallel by using the multitasking features of Ada.
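
    The executable assertions described above can be pictured with a sketch like the following, written in Python rather than the flight code's language; the range and rate limits and the command sequence are invented for illustration.

        class AssertionViolation(Exception):
            """Raised when an embedded test assertion fails at run time."""

        def check(condition, message):
            if not condition:
                raise AssertionViolation(message)

        def step(prev_cmd, new_cmd, dt=0.02):
            """One control frame with embedded range and rate assertions."""
            check(-25.0 <= new_cmd <= 25.0, "pitch command out of range")
            check(abs(new_cmd - prev_cmd) / dt <= 40.0,
                  "pitch command slew rate exceeds 40 deg/s")
            return new_cmd

        cmd = 0.0
        for target in (0.3, 0.8, 1.2):   # a well-behaved command sequence
            cmd = step(cmd, target)
        print("all assertions passed; final command:", cmd)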

  8. Automating Risk Analysis of Software Design Models

    PubMed Central

    Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P.

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance. PMID:25136688
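
    The identification-tree idea can be sketched roughly as a pattern table matched against elements of a data flow diagram; the fragment below is a flattened stand-in for the paper's trees, with element types, tags and mitigations invented.

        # Flattened stand-in for an identification tree: (element type, tag)
        # patterns map to a threat and a suggested mitigation.
        threat_patterns = {
            ("data_flow", "unencrypted"): ("eavesdropping", "use TLS on the channel"),
            ("data_store", "plaintext"): ("credential theft", "encrypt or hash at rest"),
            ("process", "unauthenticated"): ("spoofing", "require mutual authentication"),
        }

        # A two-element data flow diagram for the walk-through
        dfd = [
            {"id": "login->db", "type": "data_flow", "tags": ["unencrypted"]},
            {"id": "user_db", "type": "data_store", "tags": ["plaintext"]},
        ]

        for element in dfd:
            for tag in element["tags"]:
                match = threat_patterns.get((element["type"], tag))
                if match:
                    threat, mitigation = match
                    print(f"{element['id']}: {threat} -> {mitigation}")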

  9. Automating risk analysis of software design models.

    PubMed

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  10. Euroforgen-NoE collaborative exercise on LRmix to demonstrate standardization of the interpretation of complex DNA profiles.

    PubMed

    Prieto, L; Haned, H; Mosquera, A; Crespillo, M; Alemañ, M; Aler, M; Alvarez, F; Baeza-Richer, C; Dominguez, A; Doutremepuich, C; Farfán, M J; Fenger-Grøn, M; García-Ganivet, J M; González-Moya, E; Hombreiro, L; Lareu, M V; Martínez-Jarreta, B; Merigioli, S; Milans Del Bosch, P; Morling, N; Muñoz-Nieto, M; Ortega-González, E; Pedrosa, S; Pérez, R; Solís, C; Yurrebaso, I; Gill, P

    2014-03-01

    There has been very little work published on the variation of mixture reporting practices between laboratories, but it has previously been demonstrated that there is little consistency. This is because there is no current uniformity of practice, so different laboratories operate using different rules. The interpretation of mixtures is not solely a matter of using software to provide 'an answer'. An assessment of a case will usually begin with a consideration of the circumstances of the crime. Assumptions made about the numbers of contributors follow from an examination of the electropherogram(s), and these may differ between the prosecution and the defence hypotheses. It may be necessary to evaluate several sets of hypotheses for a given case if the circumstances are uncertain. Once the hypotheses are formulated, the mathematical analysis is complex and can only be accomplished with specialist software. In order to obtain meaningful results, it is essential that scientists are trained, not only in the use of the software, but also in the methodology, so that they understand the likelihood ratio concept that is used. The Euroforgen-NoE initiative has developed a training course that utilizes the LRmix program to carry out the calculations. This software encompasses the recommendations of the ISFG DNA commissions on mixture interpretation and is able to interpret samples that may come from two or more contributors and may also be partial profiles. Recently, eighteen different laboratories were trained in the methodology. Afterwards, they were asked to independently analyze two cases with partial mixture DNA evidence and to write a court report statement. We show that by introducing a structured training programme, it is possible to demonstrate, for the first time, that a high degree of standardization, leading to uniformity of results, can be achieved by the participating laboratories. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Software platform virtualization in chemistry research and university teaching

    PubMed Central

    2009-01-01

    Background Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide. PMID:20150997

  12. Software platform virtualization in chemistry research and university teaching.

    PubMed

    Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver

    2009-11-16

    Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.

  13. A method for the complete analysis of NORM building materials by γ-ray spectrometry using HPGe detectors.

    PubMed

    Quintana, B; Pedrosa, M C; Vázquez-Canelas, L; Santamaría, R; Sanjuán, M A; Puertas, F

    2018-04-01

    A methodology including software tools for analysing NORM building materials and residues by low-level gamma-ray spectrometry has been developed. It comprises deconvolution of gamma-ray spectra using the software GALEA with focus on the natural radionuclides and Monte Carlo simulations for efficiency and true coincidence summing corrections. The methodology has been tested on a range of building materials and validated against reference materials. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A Software Engineering Environment for the Navy.

    DTIC Science & Technology

    1982-03-31

    Fragments of the report's contents survive: the Software Engineering Process; Part II: Description of a Software Engineering Environment (1. Data Base); Flow of Methodology to Tool; Flow of Management: Activity to Methodology to Tool; Pipelining for Activity-Specific Tools; testing techniques; Methodologies and Tools: Correctness Analysis.

  15. IEEE Computer Society/Software Engineering Institute Watts S. Humphrey Software Process Achievement Award 2016: Raytheon Integrated Defense Systems Design for Six Sigma Team

    DTIC Science & Technology

    2017-04-01

    Only table-of-contents fragments of this record survive: 2. Combinatorial Design Methods (2.1 Identification of Significant Improvement Opportunity; 2.2 Methodology Development; 2.3 Piloting); 3. Process Performance Modeling and Analysis (3.1 Identification of Significant Improvement Opportunity; 3.2 Methodology Development).

  16. Three Object-Oriented enhancements for EPICS

    NASA Astrophysics Data System (ADS)

    Osberg, E. A.; Dohan, D. A.; Richter, R.; Biggs, R.; Chillara, K.; Wade, D.; Bossom, J.

    1994-12-01

    In line with our group's intention of producing software using, where possible, Object-Oriented methodologies and techniques in the development of RF control systems, we have undertaken three projects to enhance the EPICS software environment. Two of the projects involve interfaces to EPICS Channel Access from Object-Oriented languages. The third is an enhancement to the EPICS State Notation Language to better support the Shlaer-Mellor Object-Oriented Analysis and Design Methodology. This paper discusses the motivation, approaches, results and future directions of these three projects.

  17. Software Testing and Verification in Climate Model Development

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine the benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
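
    As an illustration of the fine-grained testing the authors advocate, the sketch below (ours, not from the paper) checks a single numerical kernel against an analytic answer with an explicit floating-point tolerance; the trapezoidal-rule kernel and the tolerance are invented for the example.

        import math

        def trapezoid(f, a, b, n):
            """Composite trapezoidal rule on [a, b] with n panels."""
            h = (b - a) / n
            s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
            return s * h

        def test_trapezoid_sine():
            # The integral of sin over [0, pi] is exactly 2; the rule's error
            # is O(h^2), so 1000 panels must land well inside 1e-5 of it.
            approx = trapezoid(math.sin, 0.0, math.pi, 1000)
            assert abs(approx - 2.0) < 1e-5, f"kernel regression: got {approx}"

        test_trapezoid_sine()
        print("trapezoid kernel test passed")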

  18. Applications of decision analysis and related techniques to industrial engineering problems at KSC

    NASA Technical Reports Server (NTRS)

    Evans, Gerald W.

    1995-01-01

    This report provides: (1) a discussion of the origination of decision analysis problems (well-structured problems) from ill-structured problems; (2) a review of the various methodologies and software packages for decision analysis and related problem areas; (3) a discussion of how the characteristics of a decision analysis problem affect the choice of modeling methodologies, thus providing a guide as to when to choose a particular methodology; and (4) examples of applications of decision analysis to particular problems encountered by the IE Group at KSC. With respect to the specific applications at KSC, particular emphasis is placed on the use of the Demos software package (Lumina Decision Systems, 1993).

  19. A Reconfigurable Simulation-Based Test System for Automatically Assessing Software Operating Skills

    ERIC Educational Resources Information Center

    Su, Jun-Ming; Lin, Huan-Yu

    2015-01-01

    In recent years, software operating skills, the ability in computer literacy to solve problems using specific software, has become much more important. A great deal of research has also proven that students' software operating skills can be efficiently improved by practicing customized virtual and simulated examinations. However, constructing…

  20. IT Software Development and IT Operations Strategic Alignment: An Agile DevOps Model

    ERIC Educational Resources Information Center

    Hart, Michael

    2017-01-01

    Information Technology (IT) departments that include development and operations are essential to develop software that meet customer needs. DevOps is a term originally constructed from software development and IT operations. DevOps includes the collaboration of all stakeholders such as software engineers and systems administrators involved in the…

  1. A Research Agenda for Service-Oriented Architecture (SOA): Maintenance and Evolution of Service-Oriented Systems

    DTIC Science & Technology

    2010-03-01

    service consumers, and infrastructure. Techniques from any iterative and incremental software development methodology followed by the organization... Service-Oriented Architecture Environment (CMU/SEI-2008-TN-008). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu... "Integrating Legacy Software into a Service Oriented Architecture." Proceedings of the 10th European Conference on Software Maintenance (CSMR 2006). Bari

  2. Measuring the complexity of design in real-time imaging software

    NASA Astrophysics Data System (ADS)

    Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.

    2007-02-01

    Due to the intricacies of the algorithms involved, the design of imaging software is considered more complex than that of non-image-processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image-processing and non-image-processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity of imaging applications and non-image-processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. It may therefore be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
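
    To make the notion of a complexity metric concrete, the following sketch (not from the paper) approximates McCabe's cyclomatic complexity as one plus the number of decision points, counted by walking a Python abstract syntax tree; production metric tools count more constructs than the handful listed here.

        import ast
        import textwrap

        # Node types treated as decision points (real tools count more)
        DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                     ast.BoolOp, ast.IfExp)

        def cyclomatic_complexity(source):
            """McCabe complexity approximated as decision points + 1."""
            tree = ast.parse(source)
            return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

        sample = textwrap.dedent("""
            def clamp(x, lo, hi):
                if x < lo:
                    return lo
                if x > hi:
                    return hi
                return x
        """)
        print(cyclomatic_complexity(sample))  # 3: two if statements plus one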

  3. INTEGRATION OF POLLUTION PREVENTION TOOLS

    EPA Science Inventory

    A prototype computer-based decision support system was designed to provide small businesses with an integrated pollution prevention methodology. Preliminary research involved compilation of an inventory of existing pollution prevention tools (i.e., methodologies, software, etc.),...

  4. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    PubMed Central

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-01-01

    The number of perception sensors on automated vehicles is increasing due to the growing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to arrive at an implementation that builds a consistent model of a vehicle's surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture. PMID:27727171
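
    A minimal fusion kernel of the sort such an architecture composes is sketched below (ours, not the paper's): inverse-variance weighting of redundant one-dimensional position measurements, where adding or replacing a sensor is just editing the list; all values are invented.

        def fuse(measurements):
            """Inverse-variance fusion of (value, variance) pairs from several sensors."""
            total_info = sum(1.0 / var for _, var in measurements)
            estimate = sum(v / var for v, var in measurements) / total_info
            return estimate, 1.0 / total_info   # fused value and fused variance

        sensors = [(12.4, 0.25),   # radar: moderately noisy
                   (12.1, 0.04),   # lidar: most precise, dominates the estimate
                   (12.6, 0.50)]   # camera-derived: noisiest
        value, variance = fuse(sensors)
        print(f"fused position {value:.2f} m (variance {variance:.3f})")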

  5. System diagnostic builder

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Burke, Roger

    1992-01-01

    The System Diagnostic Builder (SDB) is an automated software verification and validation tool using state-of-the-art Artificial Intelligence (AI) technologies. The SDB is used extensively by project BURKE at NASA-JSC as one component of a software re-engineering toolkit, and it is applicable to any government or commercial organization that performs verification and validation tasks. The SDB has an X-window interface that allows the user to 'train' a set of rules for use in a rule-based evaluator. The interface has a window that allows the user to plot up to five data parameters (attributes) at a time. Using these plots and a mouse, the user can identify and classify a particular behavior of the subject software. Once the user has identified the general behavior patterns of the software, a set of rules can be trained to represent that knowledge of the behavior. The training process builds rules and fuzzy sets for use in the evaluator; the fuzzy sets classify those data points not clearly belonging to a particular classification. Once an initial set of rules is trained, each additional data set given to the SDB is used by a machine learning mechanism to refine the rules and fuzzy sets. This is a passive process and therefore requires no additional operator time. The evaluation component of the SDB can be used to validate a single software system, such as a simulator, using a number of different data sets. Moreover, it can be used to validate software systems that have been re-engineered from one language and design methodology to a totally new implementation.

  6. Extended cooperative control synthesis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1994-01-01

    This paper reports on research extending the Cooperative Control Synthesis methodology to include a more accurate modeling of the pilot's controller dynamics. Cooperative Control Synthesis (CCS) is a methodology that addresses the problem of how to design control laws for piloted, high-order, multivariate systems and/or non-conventional dynamic configurations in the absence of flying qualities specifications. This is accomplished by emphasizing the parallel structure inherent in any pilot-controlled, augmented vehicle. The original CCS methodology is extended to include the Modified Optimal Control Model (MOCM), which is based upon the optimal control model of the human operator developed by Kleinman, Baron, and Levison in 1970. This model provides a more accurate representation of the pilot's compensation dynamics than the simplified pilot dynamic representation currently in the CCS methodology. Inclusion of the MOCM in the CCS also enables the modeling of pilot-observation perception thresholds and pilot-observation attention allocation effects. This Extended Cooperative Control Synthesis (ECCS) allows the direct calculation of pilot and system open- and closed-loop transfer functions in pole/zero form and is readily implemented in current software for the analysis and design of dynamic systems. Example results based upon synthesizing an augmentation control law for an acceleration command system in a compensatory tracking task using the ECCS are compared with a similar synthesis performed using the original CCS methodology. The ECCS is shown to provide augmentation control laws that yield more favorable predicted closed-loop flying qualities and tracking performance than those synthesized using the original CCS methodology.

  7. Capturing security requirements for software systems.

    PubMed

    El-Hadary, Hassan; El-Kassas, Sherif

    2014-07-01

    Security is often an afterthought during software development. Realizing security early, especially in the requirements phase, is important so that security problems can be tackled early enough before going further in the process, avoiding rework. A more effective approach to security requirements engineering is needed to provide a more systematic way of eliciting adequate security requirements. This paper proposes a methodology for security requirements elicitation based on problem frames, aiming at early integration of security with software development. The main goal of the methodology is to assist developers in eliciting adequate security requirements in a more systematic way during the requirements engineering process. A security catalog, based on the problem frames, is constructed to help identify security requirements with the aid of previous security knowledge. Abuse frames are used to model threats, while security problem frames are used to model security requirements. Evaluation criteria are used to evaluate the resulting security requirements, concentrating on identifying conflicts among requirements. We show that this methodology elicits more complete security requirements, in addition to assisting developers in eliciting security requirements in a more systematic way.

  8. Capturing security requirements for software systems

    PubMed Central

    El-Hadary, Hassan; El-Kassas, Sherif

    2014-01-01

    Security is often an afterthought during software development. Realizing security early, especially in the requirements phase, is important so that security problems can be tackled early enough before going further in the process, avoiding rework. A more effective approach to security requirements engineering is needed to provide a more systematic way of eliciting adequate security requirements. This paper proposes a methodology for security requirements elicitation based on problem frames, aiming at early integration of security with software development. The main goal of the methodology is to assist developers in eliciting adequate security requirements in a more systematic way during the requirements engineering process. A security catalog, based on the problem frames, is constructed to help identify security requirements with the aid of previous security knowledge. Abuse frames are used to model threats, while security problem frames are used to model security requirements. Evaluation criteria are used to evaluate the resulting security requirements, concentrating on identifying conflicts among requirements. We show that this methodology elicits more complete security requirements, in addition to assisting developers in eliciting security requirements in a more systematic way. PMID:25685514

  9. Multitasking operating systems for microprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cramer, T.

    1981-01-01

    Microprocessors, because of their low cost, low power consumption, and small size, have caused an explosion in the number of innovative computer applications. Although there is a great deal of variation in microprocessor applications software, there is relatively little variation in the operating-system-level software from one application to the next. Nonetheless, operating system software, especially when multitasking is involved, can be very time consuming and expensive to develop. The major microprocessor manufacturers have acknowledged the need for operating systems in microprocessor applications and are now supplying real-time multitasking operating system software that is adaptable to a wide variety of user systems. Use of this existing operating system software will decrease the number of redundant operating system development efforts, thus freeing programmers to work on more creative and productive problems. This paper discusses the basic terminology and concepts involved with multitasking operating systems. It is intended to provide a general understanding of the subject, so that the reader will be prepared to evaluate specific operating system software according to his or her needs. 2 references.
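
    The core concepts the paper surveys (tasks, a ready queue, context switching) can be sketched in a few lines; the toy cooperative round-robin scheduler below is illustrative only, since real-time kernels add preemption, priorities, and inter-task communication.

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                    # voluntary "context switch"

    ready = deque([task("sensor-scan", 3), task("display-update", 2)])
    while ready:                     # round-robin dispatch loop
        t = ready.popleft()
        try:
            next(t)                  # run the task until its next yield
            ready.append(t)          # still runnable: back of the ready queue
        except StopIteration:
            pass                     # task finished: drop it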

  10. Guidance, navigation, and control subsystem equipment selection algorithm using expert system methods

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1991-01-01

    Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than is currently achievable through the use of a standard database system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.

  11. Embedded parallel processing based ground control systems for small satellite telemetry

    NASA Technical Reports Server (NTRS)

    Forman, Michael L.; Hazra, Tushar K.; Troendly, Gregory M.; Nickum, William G.

    1994-01-01

    The use of networked terminals which utilize embedded processing techniques results in totally integrated, flexible, high-speed, reliable, and scalable systems suitable for telemetry and data processing applications such as mission operations centers (MOC). Synergies of these terminals, coupled with the capability of each terminal to receive incoming data, allow the viewing of any defined display by any terminal from the start of data acquisition. There is no single point of failure (other than the network input) such as exists with configurations where all input data goes through a single front-end processor and then to a serial string of workstations. Missions dedicated to NASA's ozone measurements program utilize the methodologies discussed, resulting in a multimission configuration of low-cost, scalable hardware and software which can be run by one flight operations team with low risk.

  12. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    NASA Astrophysics Data System (ADS)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces, and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighted against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs regarding their future integration potential. It is a contribution to the system-of-systems engineering methodology.
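
    The weighting-and-aggregation step described above reduces, in its simplest form, to a weighted sum over the goal-tree dimensions. The sketch below is a minimal illustration; the dimension names, weights, and scores are assumptions, not values from the paper.

    weights = {"communication": 0.40, "interfaces": 0.35, "software": 0.25}

    def figure_of_merit(scores, weights):
        assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights normalized
        return sum(weights[d] * scores[d] for d in weights)

    candidates = {
        "architecture-A": {"communication": 0.8, "interfaces": 0.6, "software": 0.7},
        "architecture-B": {"communication": 0.6, "interfaces": 0.9, "software": 0.8},
    }
    ranked = sorted(candidates,
                    key=lambda c: -figure_of_merit(candidates[c], weights))
    for name in ranked:
        print(f"{name}: {figure_of_merit(candidates[name], weights):.3f}")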

  13. PIPER: Performance Insight for Programmers and Exascale Runtimes: Guiding the Development of the Exascale Software Stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    The PIPER project set out to develop methodologies and software for measurement, analysis, attribution, and presentation of performance data for extreme-scale systems. Goals of the project were to support analysis of massive multi-scale parallelism, heterogeneous architectures, and multi-faceted performance concerns, and to support both post-mortem performance analysis, to identify program features that contribute to problematic performance, and on-line performance analysis, to drive adaptation. This final report summarizes the research and development activity at Rice University as part of the PIPER project. Producing a complete suite of performance tools for exascale platforms during the course of this project was impossible since both hardware and software for exascale systems are still a moving target. For that reason, the project focused broadly on the development of new techniques for measurement and analysis of performance on modern parallel architectures; enhancements to HPCToolkit's software infrastructure to support our research goals and use on sophisticated applications; engaging developers of multithreaded runtimes to explore how support for tools should be integrated into their designs; engaging operating system developers with feature requests for enhanced monitoring support; engaging vendors with requests that they add hardware measurement capabilities and software interfaces needed by tools as they design new components of HPC platforms, including processors, accelerators, and networks; and finally collaborations with partners interested in using HPCToolkit to analyze and tune scalable parallel applications.

  14. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  15. Human performance cognitive-behavioral modeling: a benefit for occupational safety.

    PubMed

    Gore, Brian F

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns, with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, improvements in the tools' fidelity, and the usefulness of the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration, currently underway at NASA Ames Research Center, will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  16. Search for supporting methodologies - Or how to support SEI for 35 years

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Masline, Richard C.

    1991-01-01

    Concepts relevant to the development of an evolvable information management system are examined in terms of support for the Space Exploration Initiative. The issues of interoperability within NASA and industry initiatives are studied, including the Open Systems Interconnection standard and the operating system of the Open Software Foundation. The requirements for partitioning functionality into separate areas are determined, with attention given to the infrastructure required to ensure system-wide compliance. The need for a decision-making context is key to the distributed implementation of the program, and this environment is concluded to be the next step in developing an evolvable, interoperable, and securable support network.

  17. Human performance cognitive-behavioral modeling: a benefit for occupational safety

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2002-01-01

    Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns, with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, improvements in the tools' fidelity, and the usefulness of the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration, currently underway at NASA Ames Research Center, will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.

  18. A first-generation software product line for data acquisition systems in astronomy

    NASA Astrophysics Data System (ADS)

    López-Ruiz, J. C.; Heradio, Rubén; Cerrada Somolinos, José Antonio; Coz Fernandez, José Ramón; López Ramos, Pablo

    2008-07-01

    This article presents a case study on developing a software product line for data acquisition systems in astronomy, based on the Exemplar Driven Development methodology and the Exemplar Flexibilization Language tool. The main strategies to build the software product line are based on domain commonality and variability, incremental scoping, and the reuse of existing artifacts. It is a lean methodology with little impact on the organization, suitable for small projects, and it reduces product line start-up time. Software product line engineering focuses on creating a family of products instead of individual products. This approach yields substantial benefits: shorter time to market, retained know-how, lower development costs, and higher quality in new products. Maintenance of the products is also enhanced, since all the data acquisition systems share the same product line architecture.

  19. j5 DNA assembly design automation.

    PubMed

    Hillson, Nathan J

    2014-01-01

    Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers in constructing DNA by automating the design of optimized protocols for flanking homology sequence as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software integrated with j5 adds significant value to the j5 design process through graphical user-interface enhancement and downstream liquid-handling robotic laboratory automation.
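
    A toy illustration of one ingredient of flanking-homology assembly design (this is not j5's algorithm): adjacent fragments are joined through a shared terminal overlap, so a designer needs to check each junction's overlap. The sequences and length bounds below are invented for the example.

    def terminal_overlap(upstream, downstream, min_len=15, max_len=40):
        """Longest suffix of `upstream` that is also a prefix of `downstream`."""
        for k in range(max_len, min_len - 1, -1):
            if upstream[-k:] == downstream[:k]:
                return k
        return 0

    frag_a = "ATGGCTAGCTAGGATCCGGTACCAGTCGACTGCAGGCATGC"
    frag_b = "GGTACCAGTCGACTGCAGGCATGCAAGCTTGGCACTGGCCG"
    k = terminal_overlap(frag_a, frag_b)
    print(f"junction overlap: {k} bp" if k else "no usable junction overlap")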

  20. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  1. 25 CFR 547.12 - What are the minimum technical standards for downloading on a Class II gaming system?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... limited to software, files, data, and prize schedules. (2) Downloads must use secure methodologies that... date of the completion of the download; (iii) The Class II gaming system components to which software was downloaded; (iv) The version(s) of download package and any software downloaded. Logging of the...
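
    The fields the standard requires to be logged (completion date, the components software was downloaded to, and the versions of the download package and software) map naturally onto a structured audit record. The sketch below is illustrative; the class and field names are assumptions, not taken from the regulation.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DownloadLogEntry:
        package_version: str
        software_versions: list[str]
        target_components: list[str]      # Class II gaming system components
        completed_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    entry = DownloadLogEntry(
        package_version="pkg-2.4.1",
        software_versions=["game-core 5.0", "prize-schedule 1.7"],
        target_components=["player-terminal-07", "central-server"],
    )
    print(entry)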

  2. 25 CFR 547.12 - What are the minimum technical standards for downloading on a Class II gaming system?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... limited to software, files, data, and prize schedules. (2) Downloads must use secure methodologies that... date of the completion of the download; (iii) The Class II gaming system components to which software was downloaded; (iv) The version(s) of download package and any software downloaded. Logging of the...

  3. Application of Real Options Theory to DoD Software Acquisitions

    DTIC Science & Technology

    2009-08-01

    The traditional real options valuation methodology, when enhanced and properly formulated around a proposed or existing software investment... The traditional real options valuation... founder and CEO of Real Options Valuation, Inc., a consulting, training, and software development firm specializing in strategic real options
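
    The "traditional" real options valuation the record refers to is commonly formulated with the Black-Scholes model, treating the flexibility to defer an investment as a European call option. The sketch below shows that formulation under that assumption; all input numbers are illustrative, not from the report.

    from math import erf, exp, log, sqrt

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def deferral_option_value(S, K, r, sigma, T):
        """S: PV of expected benefits, K: investment cost, r: risk-free rate,
        sigma: volatility of the benefits, T: years the decision can wait."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # A software investment whose benefits currently fall short of its cost
    # can still carry substantial option value if the decision can wait:
    print(f"option value: ${deferral_option_value(10e6, 12e6, 0.04, 0.45, 3):,.0f}")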

  4. An application generator for rapid prototyping of Ada real-time control software

    NASA Technical Reports Server (NTRS)

    Johnson, Jim; Biglari, Haik; Lehman, Larry

    1990-01-01

    The need to increase engineering productivity and decrease software life cycle costs in real-time system development establishes a motivation for a method of rapid prototyping. The design by iterative rapid prototyping technique is described. A tool which facilitates such a design methodology for the generation of embedded control software is described.

  5. System Evaluation and Life-Cycle Cost Analysis of a Commercial-Scale High-Temperature Electrolysis Hydrogen Production Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwin A. Harvego; James E. O'Brien; Michael G. McKellar

    2012-11-01

    Results of a system evaluation and lifecycle cost analysis are presented for a commercial-scale high-temperature electrolysis (HTE) central hydrogen production plant. The plant design relies on grid electricity to power the electrolysis process and system components, and industrial natural gas to provide process heat. The HYSYS process analysis software was used to evaluate the reference central plant design capable of producing 50,000 kg/day of hydrogen. The HYSYS software performs mass and energy balances across all components to allow optimization of the design using a detailed process flow sheet and realistic operating conditions specified by the analyst. The lifecycle cost analysis was performed using the H2A analysis methodology developed by the Department of Energy (DOE) Hydrogen Program. This methodology utilizes Microsoft Excel spreadsheet analysis tools that require detailed plant performance information (obtained from HYSYS), along with financial and cost information to calculate lifecycle costs. The results of the lifecycle analyses indicate that for a 10% internal rate of return, a large central commercial-scale hydrogen production plant can produce 50,000 kg/day of hydrogen at an average cost of $2.68/kg. When the cost of carbon sequestration is taken into account, the average cost of hydrogen production increases by $0.40/kg to $3.08/kg.
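
    The headline numbers reduce to a simple levelization: annualized costs divided by annual production, with sequestration adding $0.40/kg (2.68 + 0.40 = 3.08). The sketch below reproduces that arithmetic in the spirit of the H2A methodology, not with the actual DOE spreadsheet; the cost-bucket values are invented so that they sum to the reported $2.68/kg.

    capacity_kg_per_day = 50_000
    annual_kg = capacity_kg_per_day * 365 * 0.90   # assumed 90% capacity factor

    # Hypothetical annualized cost buckets ($/yr), chosen to total ~$44M:
    annualized_costs = {"capital_recovery_at_10pct_IRR": 16e6,
                        "electricity": 22e6,
                        "natural_gas": 3e6,
                        "fixed_o_and_m": 3e6}

    lcoh = sum(annualized_costs.values()) / annual_kg
    print(f"levelized cost: ${lcoh:.2f}/kg")              # -> $2.68/kg
    print(f"with sequestration: ${lcoh + 0.40:.2f}/kg")   # -> $3.08/kg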

  6. The JPL telerobot operator control station. Part 2: Software

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Landell, B. Patrick; Oxenberg, Sheldon; Morimoto, Carl

    1989-01-01

    The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools for task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The software design of the operator control system is discussed.

  7. Robust optimization of aircraft and turbojet powerplants (Optimisation robuste des aeronefs et des groupes turboreacteurs)

    NASA Astrophysics Data System (ADS)

    Couturier, Philippe

    Future aircraft and powerplant designs will need to meet and perhaps anticipate increasingly demanding operational constraints. This progressive evolution in design requirements is already at work and arises from the combined impacts of increasingly stringent environmental norms with regard to noise and atmospheric emissions, a depletion of fossil fuel reserves which is expected to drive fuel costs upwards, as well as a steady increase in air traffic. In order to adapt to these market shifts, aircraft and powerplant companies will need to explore the potential range of benefits and risks associated with a wide spectrum of new designs and technologies. At the same time, it will be necessary to ensure that the resulting end products provide cost-effective solutions when operated in the economic environment foreseen for the next generation of aircraft. The objective of this study is to develop a methodology which enables the selection of optimal robust designs at the preliminary design stage, as well as to quantify the compromise between a robust design and a potential gain in performance. The developed methodology is used in the design of a seventy-passenger aircraft in order to determine the effects of uncertainty. The methodology seeks to optimize the design while attenuating its sensitivity to uncertainties. The goal is to reduce the likelihood of costly concept reformulations in the later stages of the product development process. A design platform was developed to enable the study of aircraft and engine performance at a conceptual level. It comprises four modules, namely the aircraft design and performance software Pacelab APD, a metamodel constructed with the software GasTurb to calculate engine performance, a module to predict the noise level, and a module to determine the operating costs. The last two modules were constructed using data from the literature. The effects related to two types of uncertainties present at the preliminary design stage were analyzed. These are uncertainties related to the market forecast for when the next generation of aircraft will be in service, as well as uncertainties in the fidelity of the models used. Based on predictions for future oil costs, the research found that an aircraft designed for a cruising speed similar to that of today's jet aircraft will minimize the mean of the predicted operating cost by having a configuration that minimizes fuel consumption. Conversely, it has been determined that fuel cost does not affect the design optimized to minimize the mean of the predicted operating costs when the cruise Mach number is variable. Furthermore, the use of Pareto fronts to quantify the compromise between a robust design and a potential gain in performance showed that the design variables have little influence on the sensitivity of the operating cost subject to model uncertainties. It has also been determined that neglecting uncertainties during the design process can lead to the selection of a configuration with a high risk of not satisfying the constraints.
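
    The robust-design idea at the core of the thesis (optimize the mean of an uncertain operating cost rather than its value at a nominal point) can be sketched with a Monte Carlo loop. The toy cost model, fuel-price distribution, and numbers below are assumptions for illustration; they are not the thesis's design platform.

    import random

    random.seed(1)
    fuel_prices = [random.lognormvariate(0.0, 0.35) for _ in range(2000)]

    def operating_cost(cruise_mach, fuel_price):
        # Toy model: faster cruise burns more fuel but cuts time-related costs.
        fuel_burn = 1.0 + 4.0 * (cruise_mach - 0.70) ** 2
        time_cost = 2.0 / cruise_mach
        return fuel_price * fuel_burn + time_cost

    designs = [0.70 + 0.01 * i for i in range(11)]  # cruise Mach 0.70 .. 0.80
    mean_cost = lambda m: (sum(operating_cost(m, p) for p in fuel_prices)
                           / len(fuel_prices))
    best = min(designs, key=mean_cost)
    print(f"robust cruise Mach: {best:.2f} (mean cost {mean_cost(best):.3f})")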

  8. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics and mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g., force plate data collection) and digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to assess the accuracy impact of a single-axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
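
    The constant-angular-velocity checks described above amount to recovering a known rate from sampled angle data and reporting the percent error. The sketch below illustrates that calculation on synthetic data; the frame rate, reference rate, and noise level are assumed values, not the APAS test conditions.

    import math, random

    random.seed(0)
    true_rate = math.radians(90.0)     # 90 deg/s reference motion
    dt, n = 1.0 / 60.0, 120            # 60 Hz sampling, 2 s of data
    angles = [true_rate * i * dt + random.gauss(0.0, math.radians(0.3))
              for i in range(n)]       # digitization noise ~0.3 deg

    rates = [(angles[i + 1] - angles[i]) / dt for i in range(n - 1)]
    estimate = sum(rates) / len(rates)
    err_pct = 100.0 * abs(estimate - true_rate) / true_rate
    print(f"estimated rate: {math.degrees(estimate):.2f} deg/s "
          f"({err_pct:.1f}% error)")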

  9. A design methodology for portable software on parallel computers

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second difficulty is transporting software between dissimilar parallel computers. In general, we expect that more hardware-specific information will be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently. To keep software running on the current high-performance hardware, a developer almost continually faces yet another expensive port. The problem addressed by the proposed research is to create a design methodology that helps designers more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal which describes our research associated with the issues of software portability and high performance. The research tasks are specified in the proposal. The proposal 'A Design Methodology for Portable Software on Parallel Computers' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation. We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
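
    One concrete expression of the portability principle argued for above is to confine the hardware-specific choice to a single seam so the scientific kernel never changes between machines. The sketch below does this with Python's executor abstraction; the kernel is a stand-in, and the executor choice and worker count are the only machine-specific knobs.

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def radiance_cell(i):          # stand-in for a per-cell science kernel
        return sum((i + k) ** 0.5 for k in range(10_000))

    def run(executor_cls, workers, cells):
        with executor_cls(max_workers=workers) as pool:
            return list(pool.map(radiance_cell, cells))

    if __name__ == "__main__":
        # Only this line changes when the target machine changes:
        results = run(ThreadPoolExecutor, 4, range(64))
        print(f"processed {len(results)} cells")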

  10. Development of a Sensor Node for Precision Horticulture

    PubMed Central

    López, Juan A.; Soto, Fulgencio; Sánchez, Pedro; Iborra, Andrés; Suardiaz, Juan; Vera, Juan A.

    2009-01-01

    This paper presents the design of a new wireless sensor node (GAIA Soil-Mote) for precision horticulture applications which permits the use of precision agricultural instruments based on the SDI-12 standard. Wireless communication is achieved with a transceiver compliant with the IEEE 802.15.4 standard. The GAIA Soil-Mote software implementation is based on TinyOS. A two-phase methodology was devised to validate the design of this sensor node. The first phase consisted of laboratory validation of the proposed hardware and software solution, including a study on power consumption and autonomy. The second phase consisted of implementing a monitoring application in a real broccoli (Brassica oleracea L. var Marathon) crop in Campo de Cartagena in south-east Spain. In this way the sensor node was validated in real operating conditions. This type of application was chosen because there is a large potential market for it in the farming sector, especially for the development of precision agriculture applications. PMID:22412309

  11. Linking data to decision-making: applying qualitative data analysis methods and software to identify mechanisms for using outcomes data.

    PubMed

    Patel, Vaishali N; Riley, Anne W

    2007-10-01

    A multiple case study was conducted to examine how staff in child out-of-home care programs used data from an Outcomes Management System (OMS) and other sources to inform decision-making. Data collection consisted of thirty-seven semi-structured interviews with clinicians, managers, and directors from two treatment foster care programs and two residential treatment centers, and individuals involved with developing the OMS; and observations of clinical and quality management meetings. Case study and grounded theory methodology guided analyses. The application of qualitative data analysis software is described. Results show that although staff rarely used data from the OMS, they did rely on other sources of systematically collected information to inform clinical, quality management, and program decisions. Analyses of how staff used these data suggest that improving the utility of OMS will involve encouraging staff to participate in data-based decision-making, and designing and implementing OMS in a manner that reflects how decision-making processes operate.

  12. Angle Measurement System (AMS) for Establishing Model Pitch and Roll Zero, and Performing Single Axis Angle Comparisons

    NASA Technical Reports Server (NTRS)

    Crawford, Bradley L.

    2007-01-01

    The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a multipurpose system. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using three orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and performs the in-situ calibration of model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper discusses the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
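
    For a static three-accelerometer package, pitch and roll follow from the measured gravity components via the standard textbook relations below; this sketch does not reproduce the AMS's actual processing or calibration corrections.

    import math

    def pitch_roll(ax, ay, az):
        """ax, ay, az: body-axis accelerations in g, package at rest."""
        pitch = math.atan2(-ax, math.hypot(ay, az))
        roll = math.atan2(ay, az)
        return math.degrees(pitch), math.degrees(roll)

    # Package pitched up 10 deg with zero roll: ax = -sin(10), az = cos(10)
    p, r = pitch_roll(-math.sin(math.radians(10)), 0.0,
                      math.cos(math.radians(10)))
    print(f"pitch = {p:.2f} deg, roll = {r:.2f} deg")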

  13. Development of a sensor node for precision horticulture.

    PubMed

    López, Juan A; Soto, Fulgencio; Sánchez, Pedro; Iborra, Andrés; Suardiaz, Juan; Vera, Juan A

    2009-01-01

    This paper presents the design of a new wireless sensor node (GAIA Soil-Mote) for precision horticulture applications which permits the use of precision agricultural instruments based on the SDI-12 standard. Wireless communication is achieved with a transceiver compliant with the IEEE 802.15.4 standard. The GAIA Soil-Mote software implementation is based on TinyOS. A two-phase methodology was devised to validate the design of this sensor node. The first phase consisted of laboratory validation of the proposed hardware and software solution, including a study on power consumption and autonomy. The second phase consisted of implementing a monitoring application in a real broccoli (Brassica oleracea L. var Marathon) crop in Campo de Cartagena in south-east Spain. In this way the sensor node was validated in real operating conditions. This type of application was chosen because there is a large potential market for it in the farming sector, especially for the development of precision agriculture applications.

  14. Mobile Videoconferencing Apps for Telemedicine

    PubMed Central

    Liu, Wei-Li; Locatis, Craig; Ackerman, Michael

    2016-01-01

    Introduction: The quality and performance of several videoconferencing applications (apps) tested on iOS (Apple, Cupertino, CA) and Android™ (Google, Mountain View, CA) mobile platforms using Wi-Fi (802.11), third-generation (3G), and fourth-generation (4G) cellular networks are described. Materials and Methods: The tests were done to determine how well apps perform compared with videoconferencing software installed on computers or with more traditional videoconferencing using dedicated hardware. The rationale for app assessment and the testing methodology are described. Results: Findings are discussed in relation to operating system platform (iOS or Android) for which the apps were designed and the type of network (Wi-Fi, 3G, or 4G) used. The platform, network, and apps interact, and it is impossible to discuss videoconferencing experienced on mobile devices in relation to one of these factors without referencing the others. Conclusions: Apps for mobile devices can vary significantly from other videoconferencing software or hardware. App performance increased over the testing period due to improvements in network infrastructure and how apps manage bandwidth. PMID:26204322

  15. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinfomatics and brain imaging research. PMID:24734019

  16. Mobile Videoconferencing Apps for Telemedicine.

    PubMed

    Zhang, Kai; Liu, Wei-Li; Locatis, Craig; Ackerman, Michael

    2016-01-01

    The quality and performance of several videoconferencing applications (apps) tested on iOS (Apple, Cupertino, CA) and Android (Google, Mountain View, CA) mobile platforms using Wi-Fi (802.11), third-generation (3G), and fourth-generation (4G) cellular networks are described. The tests were done to determine how well apps perform compared with videoconferencing software installed on computers or with more traditional videoconferencing using dedicated hardware. The rationale for app assessment and the testing methodology are described. Findings are discussed in relation to operating system platform (iOS or Android) for which the apps were designed and the type of network (Wi-Fi, 3G, or 4G) used. The platform, network, and apps interact, and it is impossible to discuss videoconferencing experienced on mobile devices in relation to one of these factors without referencing the others. Apps for mobile devices can vary significantly from other videoconferencing software or hardware. App performance increased over the testing period due to improvements in network infrastructure and how apps manage bandwidth.

  17. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backward procedure from the target objectives of the analysis. The proposed goal-oriented strategy should help biologists better understand image analysis in the context of their research and should allow them to interact efficiently with image processing specialists.
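
    Working backward from a target measurement ("how many objects?"), a minimal pipeline assembles three elementary operations: smooth, threshold, label. The sketch below is one illustrative realization using NumPy and scikit-image (assumed available); it is not a recommendation from the paper.

    import numpy as np
    from skimage import filters, measure

    # Synthetic image: two bright blobs on a noisy background
    rng = np.random.default_rng(0)
    img = rng.normal(0.1, 0.05, (128, 128))
    img[20:40, 20:40] += 1.0
    img[80:110, 70:100] += 1.0

    smoothed = filters.gaussian(img, sigma=2)             # denoise
    binary = smoothed > filters.threshold_otsu(smoothed)  # segment
    labels = measure.label(binary)                        # connected components
    print(f"objects detected: {labels.max()}")            # expect 2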

  18. KSC management training system project

    NASA Technical Reports Server (NTRS)

    Sepulveda, Jose A.

    1993-01-01

    The stated objectives for the summer of 1993 were: to review the Individual Development Plan Surveys for 1994 in order to automate the analysis of the Needs Assessment effort; and to develop and implement evaluation methodologies to perform ongoing program-wide course-to-course assessment. This includes the following: to propose a methodology to develop and implement objective, performance-based assessment instruments for each training effort; to mechanize course evaluation forms and develop software to facilitate the data gathering, analysis, and reporting processes; and to implement the methodology, forms, and software in at least one training course or seminar selected among those normally offered in the summer at KSC. Section two of this report addresses the work done in regard to the Individual Development Plan Surveys for 1994. Section three presents the methodology proposed to develop and implement objective, performance-based assessment instruments for each training course offered at KSC.

  19. Automated Methodologies for the Design of Flow Diagrams for Development and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Handigund, Shivanand M.; Bhat, Shweta

    The Software Requirements Specification (SRS) is a text document prepared by strategic management to capture the requirements of the organization. The requirements of the ongoing business/project development process involve software tools, hardware devices, manual procedures, application programs, and communication commands. These components are appropriately ordered to achieve the mission of the process concerned, for both project development and ongoing business, in different flow diagrams, viz. the activity chart, workflow diagram, activity diagram, component diagram, and deployment diagram. This paper proposes two generic, automated methodologies for the design of flow diagrams for (i) project development activities and (ii) ongoing business processes. The methodologies also resolve ensuing deadlocks in the flow diagrams and determine critical paths for the activity chart. Though the two methodologies are independent, each complements the other in validating its correctness and completeness.
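
    Critical-path determination, one of the calculations the proposed methodologies automate, is a longest-path computation over the activity network in topological order. The sketch below illustrates it on an invented five-activity chart.

    duration = {"spec": 3, "design": 5, "code": 8, "test": 4, "deploy": 1}
    preds = {"spec": [], "design": ["spec"], "code": ["design"],
             "test": ["code"], "deploy": ["test"]}

    order, remaining = [], dict(preds)     # Kahn-style topological sort
    while remaining:
        ready = [a for a, p in remaining.items() if all(q in order for q in p)]
        order.extend(ready)
        for a in ready:
            del remaining[a]

    finish = {}
    for a in order:                        # earliest finish times
        start = max((finish[p] for p in preds[a]), default=0)
        finish[a] = start + duration[a]
    print(f"critical path length: {max(finish.values())}")  # 3+5+8+4+1 = 21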

  20. The Autonomous Sciencecraft and applications to future science missions

    NASA Astrophysics Data System (ADS)

    Chien, S.

    2006-05-01

    The Autonomous Sciencecraft Software has operated the Earth Observing One (EO-1) Mission for over 5000 science observations [Chien et al. 2005a]. This software enables onboard analysis of data to drive: (1) production of rapid alert summary products, (2) data editing, and (3) planning of subsequent observations. This methodology has been applied to study volcano, flooding, and cryosphere processes on Earth more effectively. In this talk we discuss how this software enables new paradigms for science missions and discuss the types of science phenomena that can now be more readily studied (e.g., dynamic investigations, large-scale searches for specific events). We also describe a range of Earth, solar, and space science applications under concept study for onboard autonomy. Finally, we describe ongoing work to link EO-1 with other spacecraft and in-situ sensor networks to enable a sensorweb for monitoring dynamic science events [Chien et al. 2005b]. S. Chien, R. Sherwood, D. Tran, B. Cichy, G. Rabideau, R. Castano, A. Davies, D. Mandl, S. Frye, B. Trout, S. Shulman, D. Boyer, "Using Autonomy Flight Software to Improve Science Return on Earth Observing One," Journal of Aerospace Computing, Information, & Communication, April 2005, AIAA. S. Chien, B. Cichy, A. Davies, D. Tran, G. Rabideau, R. Castano, R. Sherwood, D. Mandl, S. Frye, S. Shulman, J. Jones, S. Grosvenor, "An Autonomous Earth Observing Sensorweb," IEEE Intelligent Systems, May-June 2005, pp. 16-24.
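
    The onboard loop the abstract describes (analyze, alert, edit, retask) can be caricatured in a few lines; the sketch below is a toy illustration, not the flight software, and the targets, scores, and threshold are invented.

    THRESHOLD = 0.7   # illustrative "activity" score for a scene

    observations = [("Erebus", 0.91), ("Kilauea", 0.35), ("Etna", 0.78)]
    downlink_queue, retask_queue = [], []

    for target, score in observations:
        if score >= THRESHOLD:
            print(f"ALERT: activity detected at {target} (score {score:.2f})")
            downlink_queue.append(target)   # keep the full data product
            retask_queue.append(target)     # schedule a follow-up observation
        # else: data editing -- low-value scene is summarized, not downlinked

    print("downlink:", downlink_queue, "| retask:", retask_queue)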

  1. Proceedings of the 19th Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that include this document.

  2. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman, Carol S.; Benzinger, Leonora; Beshers, George; Hammerslag, David; Kimball, John; Kirslis, Peter A.; Render, Hal; Richards, Paul; Terwilliger, Robert

    1985-01-01

    The SAGA system is a software environment that is designed to support most of the software development activities that occur in a software lifecycle. The system can be configured to support specific software development applications using given programming languages, tools, and methodologies. Meta-tools are provided to ease configuration. The SAGA system consists of a small number of software components that are adapted by the meta-tools into specific tools for use in the software development application. The modules are designed so that the meta-tools can construct an environment which is both integrated and flexible. The SAGA project is documented in several papers, which are presented.

  3. Implementation of Cyber-Physical Production Systems for Quality Prediction and Operation Control in Metal Casting.

    PubMed

    Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin

    2018-05-04

    The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of inputs into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory for quality prediction and operation control in metal casting. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus, temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and data pre-processing, respectively. Many machine learning algorithms, such as decision tree, random forest, artificial neural network, and support vector machine, were used for quality prediction and compared using the R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry.
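
    The quality-prediction step can be sketched by training and comparing two of the classifier families named above on temperature-derived features. The data below are synthetic and scikit-learn stands in for the production tooling; nothing here comes from the plant itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(700.0, 25.0, size=(1000, 3))      # pour/mold/ambient temps
    y = (X[:, 0] - 0.5 * X[:, 1] > 360).astype(int)  # synthetic defect rule

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (DecisionTreeClassifier(random_state=0),
                  RandomForestClassifier(n_estimators=100, random_state=0)):
        acc = model.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{type(model).__name__}: accuracy {acc:.3f}")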

  4. Analytical and Methodological Issues in the Use of Qualitative Data Analysis Software: A Description of Three Studies.

    ERIC Educational Resources Information Center

    Margerum-Leys, Jon; Kupperman, Jeff; Boyle-Heimann, Kristen

    This paper presents perspectives on the use of data analysis software in the process of qualitative research. These perspectives were gained in the conduct of three qualitative research studies that differed in theoretical frames, areas of interests, and scope. Their common use of a particular data analysis software package allows the exploration…

  5. Reverse Engineering and Software Products Reuse to Teach Collaborative Web Portals: A Case Study with Final-Year Computer Science Students

    ERIC Educational Resources Information Center

    Medina-Dominguez, Fuensanta; Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Amescua, Antonio

    2010-01-01

    The development of collaborative Web applications does not follow a software engineering methodology. This is because when university students study Web applications in general, and collaborative Web portals in particular, they are not being trained in the use of software engineering techniques to develop collaborative Web portals. This paper…

  6. The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    NASA Technical Reports Server (NTRS)

    Green, Scott; Kouchakdjian, Ara; Basili, Victor; Weidow, David

    1990-01-01

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

  7. Integrated design optimization research and development in an industrial environment

    NASA Astrophysics Data System (ADS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-04-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.

  8. Integrated design optimization research and development in an industrial environment

    NASA Technical Reports Server (NTRS)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of this project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues and also on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system DESIGN-OPT has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module SHAPE-OPT has been developed for shape optimization. Details of these software packages together with their applications to some 2- and 3-dimensional design problems are described.
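
    A toy instance of the size-optimization problem such systems address (this sketch is not DESIGN-OPT): minimize the mass of a tension bar subject to a stress limit, using SciPy's SLSQP. All material and load numbers are illustrative.

    from scipy.optimize import minimize

    rho, L, P, sigma_allow = 7850.0, 2.0, 50e3, 250e6  # steel bar, 50 kN load

    mass = lambda x: rho * L * x[0]                   # x[0] = cross-section, m^2
    stress_margin = lambda x: sigma_allow - P / x[0]  # >= 0 when feasible

    res = minimize(mass, x0=[1e-3], method="SLSQP", bounds=[(1e-6, 1e-2)],
                   constraints=[{"type": "ineq", "fun": stress_margin}])
    print(f"optimal area: {res.x[0] * 1e6:.1f} mm^2, mass: {mass(res.x):.2f} kg")
    # Analytic check: A* = P / sigma_allow = 200 mm^2, mass = 3.14 kg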

  9. Turbo FRMAC 2016 v. 7.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madrid, Gregory J.; Whitener, Dustin Heath; Folz, Wesley

    2017-05-27

    The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.

  10. Turbo FRMAC 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John; Gallagher, Linda; Gonzales, Alejandro

    The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.

  11. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code, using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. This approach makes it possible to save on the software maintenance costs of web applications.
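
    One of the maintenance techniques named above, dead-code detection, can be illustrated independently of the OSD/DevOntoCreator internals: the toy sketch below parses Python source and reports module-level functions that are never called.

    import ast

    SOURCE = "def used():   return 1\ndef unused(): return 2\nprint(used())\n"

    tree = ast.parse(SOURCE)
    defined = {n.name for n in ast.walk(tree)
               if isinstance(n, ast.FunctionDef)}
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
    for name in sorted(defined - called):
        print(f"possibly dead: {name}()")    # -> possibly dead: unused()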

  12. Turbo FRMAC 2016 Version 7.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John; Gallagher, Linda K.; Madrid, Gregory J.

    2016-08-01

    The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.

  13. Turbo FRMAC 2016 v. 7.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madrid, Gregory J.; Whitener, Dustin Heath; Folz, Wesley

    2017-02-27

    The Turbo FRMAC (TF) software program is the software implementation of the science and methodologies utilized in the Federal Radiological Monitoring and Assessment Center (FRMAC). The software automates the calculations described in volume 1 of "The Federal Manual for Assessing Environmental Data during a Radiological Emergency" (2015 version). In the event of the intentional or accidental release of radioactive material, the software is used to guide and govern the response of the Federal, State, Local, and Tribal governments. The manual, upon which the software is based, is unclassified and freely available on the Internet.

  14. A Proposed Theory Seeded Methodology for Design Based Research into Effective Use of MUVES in Vocational Education Contexts

    ERIC Educational Resources Information Center

    Cochrane, Todd; Davis, Niki; Morrow, Donna

    2013-01-01

    A methodology for design based research (DBR) into effective development and use of Multi-User Virtual Environments (MUVE) in vocational education is proposed. It blends software development with DBR with two theories selected to inform the methodology. Legitimate peripheral participation LPP (Lave & Wenger, 1991) provides a filter when…

  15. 45 CFR 307.5 - Mandatory computerized support enforcement systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... hardware, operational system software, and electronic linkages with the separate components of an... plans to use and how they will interface with the base system; (3) Provide documentation that the... and for operating costs including hardware, operational software and applications software of a...

  16. 45 CFR 307.5 - Mandatory computerized support enforcement systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... hardware, operational system software, and electronic linkages with the separate components of an... plans to use and how they will interface with the base system; (3) Provide documentation that the... and for operating costs including hardware, operational software and applications software of a...

  17. 45 CFR 307.5 - Mandatory computerized support enforcement systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... hardware, operational system software, and electronic linkages with the separate components of an... plans to use and how they will interface with the base system; (3) Provide documentation that the... and for operating costs including hardware, operational software and applications software of a...

  18. Accelerating a MPEG-4 video decoder through custom software/hardware co-design

    NASA Astrophysics Data System (ADS)

    Díaz, Jorge L.; Barreto, Dacil; García, Luz; Marrero, Gustavo; Carballo, Pedro P.; Núñez, Antonio

    2007-05-01

    In this paper we present a novel methodology to accelerate an MPEG-4 video decoder using software/hardware co-design for wireless DAB/DMB networks. Software support includes the services provided by the embedded kernel μC/OS-II and the application tasks mapped to software. Hardware support includes several custom co-processors and a communication architecture with bridges to the main system bus and with a dual-port SRAM. Synchronization among tasks is achieved at two levels, by a hardware protocol and by kernel-level scheduling services. Our reference application is an MPEG-4 video decoder composed of several software functions and written using a special C++ library named CASSE. Profiling and design-space exploration techniques were previously applied to the Advanced Simple Profile (ASP) MPEG-4 decoder to determine the best HW/SW partition developed here. This research is part of the ARTEMI project; its main goals are the establishment of methodologies for the design of real-time complex digital systems using programmable logic devices with embedded microprocessors as the target technology, and the design of multimedia systems for broadcasting networks as the reference application.
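
    As a toy illustration of the profile-driven partitioning step (the paper's CASSE-based design-space exploration is far richer), the sketch below treats HW/SW partitioning as a 0/1 knapsack: each candidate function has a profiled time saving and an FPGA area cost, and the goal is to pick the subset that saves the most time within an area budget. The function names and all figures are invented, not taken from the paper.

        # Profile-driven HW/SW partitioning as a 0/1 knapsack (hypothetical data).
        def partition(functions, area_budget):
            # functions: list of (name, time_saved, area_cost), integer costs
            best = [[0] * (area_budget + 1) for _ in range(len(functions) + 1)]
            for i, (_, saved, cost) in enumerate(functions, start=1):
                for a in range(area_budget + 1):
                    best[i][a] = best[i - 1][a]
                    if cost <= a:
                        best[i][a] = max(best[i][a], best[i - 1][a - cost] + saved)
            chosen, a = [], area_budget          # backtrack to recover the set
            for i in range(len(functions), 0, -1):
                if best[i][a] != best[i - 1][a]:
                    chosen.append(functions[i - 1][0])
                    a -= functions[i - 1][2]
            return best[-1][area_budget], chosen

        profile = [("idct", 120, 40), ("motion_comp", 90, 35), ("vlc_decode", 30, 20)]
        print(partition(profile, 60))   # (time saved, functions moved to hardware)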

  19. Software Engineering Laboratory (SEL) cleanroom process model

    NASA Technical Reports Server (NTRS)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where ICs are assembled in dust-free 'clean rooms' to prevent the destructive effects of dust. When applying the cleanroom methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle, from the delivery of requirements to the start of acceptance testing, are described. For each defined phase, a set of specific activities is discussed and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  20. A Roadmap for Using Agile Development in a Traditional Environment

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara; Starbird, Thomas; Grenander, Sven

    2006-01-01

    One of the newer classes of software engineering techniques is called 'Agile Development'. In Agile Development, software engineers take small implementation steps and, in some cases, they program in pairs. In addition, they develop automatic tests prior to implementing their small functional piece. Agile Development focuses on rapid turnaround, incremental planning, customer involvement and continuous integration. Agile Development is not the traditional waterfall method or even a rapid prototyping method (although that methodology is closer to Agile Development). At the Jet Propulsion Laboratory (JPL) a few groups have begun Agile Development software implementations. The difficulty with this approach becomes apparent when Agile Development is used in an organization that has specific criteria and requirements handed down for how software development is to be performed. The work at JPL is performed for the National Aeronautics and Space Administration (NASA). Both organizations have specific requirements, rules and processes for developing software. This paper will discuss some of the initial uses of the Agile Development methodology, the spread of this method and the current status of its successful incorporation into the current JPL development policies and processes.

  1. A Roadmap for Using Agile Development in a Traditional Environment

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Starbird, Thomas; Grenander, Sven

    2006-01-01

    One of the newer classes of software engineering techniques is called 'Agile Development'. In Agile Development, software engineers take small implementation steps and, in some cases, they program in pairs. In addition, they develop automatic tests prior to implementing their small functional piece. Agile Development focuses on rapid turnaround, incremental planning, customer involvement and continuous integration. Agile Development is not the traditional waterfall method or even a rapid prototyping method (although that methodology is closer to Agile Development). At the Jet Propulsion Laboratory (JPL) a few groups have begun Agile Development software implementations. The difficulty with this approach becomes apparent when Agile Development is used in an organization that has specific criteria and requirements handed down for how software development is to be performed. The work at JPL is performed for the National Aeronautics and Space Administration (NASA). Both organizations have specific requirements, rules and procedures for developing software. This paper will discuss some of the initial uses of the Agile Development methodology, the spread of this method and the current status of its successful incorporation into the current JPL development policies.

  2. Highway User Benefit Analysis System Research Project #128

    DOT National Transportation Integrated Search

    2000-10-01

    In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...

  3. Vehicle management and mission planning systems with shuttle applications

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A preliminary definition of a concept for an automated system is presented that will support the effective management and planning of space shuttle operations. It is called the Vehicle Management and Mission Planning System (VMMPS). In addition to defining the system and its functions, some of the software requirements of the system are identified and a phased and evolutionary method is recommended for software design, development, and implementation. The concept is composed of eight software subsystems supervised by an executive system. These subsystems are mission design and analysis, flight scheduler, launch operations, vehicle operations, payload support operations, crew support, information management, and flight operations support. In addition to presenting the proposed system, a discussion of the evolutionary software development philosophy that the Mission Planning and Analysis Division (MPAD) would propose to use in developing the required supporting software is included. A preliminary software development schedule is also included.

  4. Bringing the Unidata IDV to the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, W. I.; Oxelson Ganter, J.

    2015-12-01

    Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While traditional software engineering provides a suite of tools and methodologies which may mitigate this issue, they are typically ignored by developers lacking a background in software engineering. Causing further problems, these methodologies are best applied at the start of a project; trying to apply them to an existing, mature project can require an immense effort. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. As a result of these issues, there exists a large body of software which is simultaneously critical to the scientists who depend upon it and yet increasingly difficult to maintain. A partial solution to this problem arrived with the advent of cloud computing: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. When coupled with containerization technology such as Docker, we are able to easily bring the same visualization software to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform and include a brief discussion of the underlying technologies involved.

  5. Protected transitional solution to transformational satellite communications

    NASA Astrophysics Data System (ADS)

    Brand, Jerry C.

    2005-06-01

    As the Warfighter progresses into the next generation battlefield, transformational communications become evident as an enabling technology. Satellite communications become even more vital as the battles range over greater non-contiguous spaces. While current satellite communications provide suitable beyond line-of-sight communications and the Transformational Communications Architecture (TCA) sets the stage for sound information exchange, a realizable transition must occur to ensure successful succession to this higher level. This paper addresses the need for a planned escalation to the next generation satellite communications architecture and offers near-term alternatives. Commercial satellite systems continue to enable the Warfighter to reach back to needed information resources, providing a large majority of available bandwidth. Four areas of concentration for transition include encrypted Telemetry, Tracking and Control (or Command) (TT&C), encrypted and covered data, satellite attack detection and protection, and operational mobility. Solution methodologies include directly embedding COMSEC devices in the satellites and terminals, and supplementing existing terminals with suitable equipment and software. Future satellites planned for near-term launches can be adapted to include commercial grade and higher-level secure equipment. Alternately, the expected use of programmable modems (Software Defined Radios (SDR)) enables incorporation of powerful cipher methods approaching military standards as well as waveforms suitable for on-the-move operation. Minimal equipment and software additions on the satellites can provide reasonable attack detection and protection methods in concert with the planned satellite usage. Network management suite modifications enable cohesive incorporation of these protection schemes. Such transitional ideas offer a smooth and planned transition as the TCA takes life.

  6. Evolution paths for advanced automation

    NASA Technical Reports Server (NTRS)

    Healey, Kathleen J.

    1990-01-01

    As Space Station Freedom (SSF) evolves, increased automation and autonomy will be required to meet Space Station Freedom Program (SSFP) objectives. As a precursor to the use of advanced automation within the SSFP, especially if it is to be used on SSF (e.g., to automate the operation of the flight systems), the underlying technologies will need to be elevated to a high level of readiness to ensure safe and effective operations. Ground facilities supporting the development of these flight systems -- from research and development laboratories through formal hardware and software development environments -- will be responsible for achieving these levels of technology readiness. These facilities will need to evolve to support the general evolution of the SSFP. This evolution will include support for the increasing use of advanced automation. The SSF Advanced Development Program has funded a study to define evolution paths for advanced automation within the SSFP's ground-based facilities which will enable, promote, and accelerate the appropriate use of advanced automation on board SSF. The current capability of the test beds and facilities, such as the Software Support Environment, with regard to advanced automation has been assessed, and their desired evolutionary capabilities have been defined. Plans and guidelines for achieving this necessary capability have been constructed. The approach taken has combined in-depth interviews of test bed personnel at all SSF Work Package centers with awareness of relevant state-of-the-art technology and technology insertion methodologies. Key recommendations from the study include advocating a NASA-wide task force for advanced automation, and the creation of software prototype transition environments to facilitate the incorporation of advanced automation in the SSFP.

  7. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality': it is the probability of failure-free operation of a computer program for a specified time and environment.
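
    As a concrete footnote to that definition, the sketch below evaluates the simplest constant-failure-rate (exponential) model, R(t) = exp(-lambda*t). This is a textbook assumption shown only for illustration, not necessarily one of the reliability models used in the paper; the failure rate is invented.

        # Reliability under a constant failure rate: R(t) = exp(-lambda * t).
        import math

        failure_rate = 1e-4              # assumed failures per hour (hypothetical)
        mission_hours = 200.0
        reliability = math.exp(-failure_rate * mission_hours)
        print(f"P(failure-free for {mission_hours} h) = {reliability:.4f}")  # ~0.9802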

  8. STEM_CELL: a software tool for electron microscopy: part 2--analysis of crystalline materials.

    PubMed

    Grillo, Vincenzo; Rossi, Francesca

    2013-02-01

    A new graphical software package (STEM_CELL) for the analysis of HRTEM and STEM-HAADF images is introduced here in detail. The advantage of the software, beyond its graphic interface, is that it brings together different analysis algorithms and simulation (described in an associated article) to produce novel analysis methodologies. Different implementations of, and improvements to, state-of-the-art approaches are reported for image analysis, filtering, normalization and background subtraction. In particular, two important methodological results are highlighted: (i) the definition of a procedure for atomic-scale quantitative analysis of HAADF images, and (ii) the extension of geometric phase analysis to large regions, potentially up to 1 μm, through the use of undersampled images with aliasing effects. Copyright © 2012 Elsevier B.V. All rights reserved.
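
    A minimal NumPy rendition of the geometric phase analysis step mentioned above, run on a simple synthetic lattice; the STEM_CELL implementation adds the filtering, normalization and large-field undersampling extensions described in the abstract, none of which are reproduced here.

        # Toy geometric phase analysis: mask one Bragg reflection in Fourier
        # space and compare the recovered phase to that of a perfect lattice.
        import numpy as np

        def gpa_phase(image, g, mask_radius):
            """Phase map for lattice vector g (in cycles/pixel)."""
            ny, nx = image.shape
            F = np.fft.fftshift(np.fft.fft2(image))
            ky, kx = np.indices((ny, nx))
            kx = kx - nx // 2 - g[0] * nx          # distance from the Bragg spot
            ky = ky - ny // 2 - g[1] * ny
            F *= (kx**2 + ky**2) < mask_radius**2  # isolate one reflection
            hp = np.fft.ifft2(np.fft.ifftshift(F)) # complex lattice image
            y, x = np.indices((ny, nx))
            ref = 2 * np.pi * (g[0] * x + g[1] * y)   # perfect-lattice phase
            return np.angle(hp * np.exp(-1j * ref))   # geometric phase map

        # Perfect synthetic lattice -> phase map should be nearly constant
        img = np.cos(2 * np.pi * 0.125 * np.indices((128, 128))[1])
        print(np.ptp(gpa_phase(img, (0.125, 0.0), 8)))   # close to 0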

  9. Overview of aerothermodynamic loads definition study

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    1989-01-01

    Over the years, NASA has been conducting the Advanced Earth-to-Orbit (AETO) Propulsion Technology Program to provide the knowledge, understanding, and design methodology that will allow the development of advanced Earth-to-orbit propulsion systems with high performance, extended service life, automated operations, and diagnostics for in-flight health monitoring. The objective of the Aerothermodynamic Loads Definition Study is to develop methods to more accurately predict the operating environment in AETO propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. The approach taken consists of 2 parts: to modify, apply, and disseminate existing computational fluid dynamics tools in response to current needs and to develop new technology that will enable more accurate computation of the time averaged and unsteady aerothermodynamic loads in the SSME powerhead. The software tools are detailed. Significant progress was made in the area of turbomachinery, where there is an overlap between the AETO efforts and research in the aeronautical gas turbine field.

  10. Methodology for Automated Detection of Degradation and Faults in Packaged Air Conditioners and Heat Pumps Using Only Two Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-02-10

    The software was created in the process of developing a system known as the Smart Monitoring and Diagnostic System (SMDS) for packaged air conditioners and heat pumps used on commercial buildings (known as RTUs). The SMDS provides automated remote monitoring and detection of performance degradation and faults in these RTUs and could increase the awareness by building owners and maintenance providers of the condition of the equipment, the cost of operating it in degraded condition, and the quality of maintenance and repair service when it is performed. The SMDS would thereby enable condition-based maintenance rather than the reactive and schedule-based preventive maintenance commonly used today, when maintenance of RTUs is done at all. Improved maintenance would help ensure persistent peak operating efficiencies, reducing energy consumption by an estimated 10% to 30%.
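
    The abstract does not disclose the SMDS algorithms, so the following is only a speculative sketch of two-sensor degradation detection: learn a baseline relation between the two available signals, then flag sustained residual drift. All data, the injected fault and the threshold are synthetic.

        # Speculative two-sensor degradation detection (not the SMDS algorithm).
        import numpy as np

        rng = np.random.default_rng(3)
        outdoor_T = rng.uniform(20, 35, 300)                        # sensor 1 (deg C)
        power = 1.5 + 0.12 * outdoor_T + rng.normal(0, 0.05, 300)   # sensor 2 (kW)
        power[250:] += 0.25                  # inject a late efficiency degradation

        a, b = np.polyfit(outdoor_T[:200], power[:200], 1)          # baseline period
        resid = power[200:] - (a * outdoor_T[200:] + b)
        drift = np.convolve(resid, np.ones(20) / 20, mode="valid")  # moving average
        print("degradation suspected:", bool((np.abs(drift) > 0.1).any()))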

  11. Hybrid test on building structures using electrodynamic fatigue test machine

    NASA Astrophysics Data System (ADS)

    Xu, Zhao-Dong; Wang, Kai-Yang; Guo, Ying-Qing; Wu, Min-Dong; Xu, Meng

    2017-01-01

    Hybrid simulation is an advanced structural dynamic experimental method that combines experimental physical models with analytical numerical models. It has increasingly been recognised as a powerful methodology for evaluating structural nonlinear components and systems under realistic operating conditions. One of the barriers to this advanced testing is the lack of flexible software for hybrid simulation using heterogeneous experimental equipment. In this study, an electrodynamic fatigue test machine is built and a MATLAB program is developed for hybrid simulation. Compared with a servo-hydraulic system, the electrodynamic fatigue test machine has the advantages of small volume, easy operation and fast response. A hybrid simulation is conducted to verify the flexibility and capability of the whole system, whose experimental substructure is a spring brace and whose numerical substructure is a two-storey steel frame structure. Experimental and numerical results show the feasibility and applicability of the whole system.
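
    A skeletal version of the hybrid-simulation loop described, reduced to a single degree of freedom with the physical brace replaced by a stub function. Masses, stiffnesses and the excitation are invented; in the real test the restoring force comes back from the fatigue machine each step rather than from a formula.

        # Hybrid-simulation step loop: numerical frame + (stubbed) physical brace.
        import math

        def experimental_restoring_force(d):     # stand-in for the physical brace
            return 2.0e5 * d                     # N; a real test measures this

        m, c = 1.0e4, 5.0e3                      # numerical mass and damping
        k_frame = 4.0e6                          # numerical substructure stiffness
        dt, steps = 0.005, 400
        d = d_prev = v = 0.0
        for i in range(steps):
            p = 1.0e4 * math.sin(2 * math.pi * 1.0 * i * dt)   # external load (N)
            f = k_frame * d + experimental_restoring_force(d)  # numerical + physical
            a = (p - c * v - f) / m
            d_next = 2 * d - d_prev + a * dt**2                # central difference
            v = (d_next - d_prev) / (2 * dt)
            d_prev, d = d, d_next
        print(f"displacement after {steps * dt:.1f} s: {d:.3e} m")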

  12. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russel, E.

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. The methodology utilizes ISO 9000-3 (guidelines for the application of ISO 9001 to the development, supply, and maintenance of software) to establish well-defined software engineering processes that consistently maintain high-quality management approaches.

  13. Stochastic response surface methodology: A study in the human health area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, Teresa A., E-mail: teresa.oliveira@uab.pt; Oliveira, Amílcar, E-mail: amilcar.oliveira@uab.pt; Centro de Estatística e Aplicações, Universidade de Lisboa

    2015-03-10

    In this paper we review stochastic response surface methodology as a tool for modeling uncertainty in the context of risk analysis. An application to survival analysis in the breast cancer context is implemented with the R software.

  14. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  15. Operations analysis (study 2.1): Shuttle upper stage software requirements

    NASA Technical Reports Server (NTRS)

    Wolfe, R. R.

    1974-01-01

    An investigation of software costs related to space shuttle upper stage operations, with emphasis on the additional costs attributable to space servicing, was conducted. The questions and problem areas addressed include: (1) the key parameters involved in software costs; (2) historical data for extrapolation of future costs; (3) elements of the basic software development effort that are applicable to servicing functions; (4) the effect of multiple servicing on the complexity of the operation; and (5) whether recurring software costs are significant. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.
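
    As a hedged illustration of what an empirical-factor cost model looks like, the sketch below uses the Basic COCOMO organic-mode equation with a servicing multiplier; the 1974 study predates and does not use COCOMO, and the servicing factor here is invented purely to make the idea concrete.

        # Empirical-factor cost model (Basic COCOMO shown as an analogue only).
        def effort_person_months(kloc, a=2.4, b=1.05, servicing_factor=1.0):
            return a * kloc**b * servicing_factor

        base = effort_person_months(50)                          # one-shot software
        multi = effort_person_months(50, servicing_factor=1.3)   # hypothetical servicing markup
        print(f"{base:.0f} vs {multi:.0f} person-months")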

  16. Design requirements for SRB production control system. Volume 3: Package evaluation, modification and hardware

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software package evaluation was designed to analyze commercially available, field-proven production control and manufacturing resource planning software packages. The analysis was conducted by comparing SRB production control software requirements and the conceptual system design to software package capabilities. The methodology of evaluation and the findings at each stage of evaluation are described. Topics covered include: vendor listing; the request for information (RFI) document; RFI response rate and quality; the RFI evaluation process; and capabilities versus requirements.

  17. Generation of coverage paths for automated non-destructive testing operations applied in the aerospace industry

    NASA Astrophysics Data System (ADS)

    Olivieri, Pierre

    Non-destructive testing (NDT) plays an important role in the aerospace industry during the fabrication and maintenance of structures and is used, among other applications, to detect flaws such as cracks at an early stage. However, NDT techniques are still mainly performed manually, especially on complex aeronautical structures, which results in several drawbacks. In addition to being difficult and time-consuming, the reliability and repeatability of inspection results are likely to be affected, since they rely on each operator's experience and dexterity. The present thesis is part of a larger project (MANU-418) of the Consortium for Research and Innovation in Aerospace in Quebec (CRIAQ). In this project, it has been proposed to develop a system using a 6-DOF manipulator arm to automate three particular NDT techniques often needed in the aerospace industry: eddy current testing (ECT), fluorescent penetrant inspection (FPI), and infrared thermography (IRT). The main objective of the MANU-418 project is to demonstrate the efficiency of the developed system and provide inspection results of surface and near-surface flaws (usually cracks) at least as reliably and repeatably as a human operator. One specific objective stemming from this main objective is to develop a methodology and a software tool to generate covering paths adapted for the three aforementioned NDT techniques to inspect the complex surfaces of aerospace structures. The present thesis aims at reaching this specific objective. At first, the geometrical and topological properties of the surfaces considered in this project are defined (flat surfaces, round and straight edges, cylindrical or near-cylindrical surfaces, holes). It is also assumed that the 3D model of the surface to inspect is known in advance. Moreover, it was decided within the framework of the MANU-418 project to give priority to the automation of ECT over the other techniques (FPI and IRT). As a result, the methodology developed to generate inspection paths focuses on the path constraints of manual ECT operations using a differential eddy current probe (named here EC probe), but it is flexible enough to be used with the other techniques as well. Common inspection paths for ECT are usually defined by a sweeping motion using a zigzag pattern with the EC probe in mild contact with the inspected surface. Moreover, the main axis of the probe must keep a normal orientation to the surface, and the alignment of its two coils must always be oriented along the direction of its motion. A first methodology is proposed to generate covering paths on the whole surface of interest while meeting all EC probe motion constraints. First, the surface is meshed with triangular facets, and then it is subdivided into several patches such that their geometry and topology are simpler than those of the whole surface. Paths are then generated on each patch by intersecting their facets with offset section planes defined along a sweeping direction. Furthermore, another methodology is developed to generate paths around an indication (namely, a small area where the presence of a flaw is suspected) whose position and orientation are assumed to be known a priori. Then, a software tool with a graphical user interface was developed in the MATLAB environment to generate inspection paths based on these methodologies. A set of path parameters can be changed by the user to obtain the desired paths (distance between passes, sweep direction, etc.). Once paths are computed, an ordered list of tool coordinates (positions and orientations) is exported to an EXCEL spreadsheet so that it can be used with a real robot. In this research, these data are then used to perform simulations of trajectories (the path described as a function of time) with a MotoMan robot (model SV3XL) using the MotoSim software. After validation of these trajectories in this software (absence of collisions, all positions reachable, etc.), they are finally converted into instructions for the real MotoMan robot to proceed with experimental tests. These first simulations and experiments with the generated paths on a MotoMan robot have given results close to the expected inspection trajectories used manually in the NDT techniques considered, especially for ECT. Nevertheless, it is strongly recommended to validate this path generation method with more experimental tests. For instance, a "test" tool could be manufactured to measure position and orientation errors of the tool with respect to expected trajectories on a typical complex aeronautical structure. (Abstract shortened by UMI.)
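
    A toy version of the section-plane technique described above: intersect a triangulated surface with a family of parallel offset planes to obtain the raster-style passes from which a zigzag path can be assembled. The mesh, plane normal and spacing below are illustrative, not taken from the thesis.

        # Slice a triangle mesh with parallel planes {x : n.x = d} to get passes.
        import numpy as np

        def slice_mesh(triangles, normal, offsets):
            """Return, per plane, the line segments where it cuts the facets."""
            n = np.asarray(normal, float)
            passes = []
            for d in offsets:
                segments = []
                for tri in triangles:              # tri: 3x3 array of vertices
                    h = tri @ n - d                # signed distance per vertex
                    pts = []
                    for i in range(3):
                        a, b = tri[i], tri[(i + 1) % 3]
                        ha, hb = h[i], h[(i + 1) % 3]
                        if ha * hb < 0:            # this edge crosses the plane
                            t = ha / (ha - hb)
                            pts.append(a + t * (b - a))
                    if len(pts) == 2:
                        segments.append((pts[0], pts[1]))
                passes.append(segments)
            return passes

        tri = [np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1]], float)]
        print(slice_mesh(tri, [0, 0, 1], np.arange(0.25, 1.0, 0.25)))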

  18. Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David

    1995-01-01

    Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.

  19. Digital Methodology to implement the ECOUTER engagement process.

    PubMed

    Wilson, Rebecca C; Butters, Oliver W; Clark, Tom; Minion, Joel; Turner, Andrew; Murtagh, Madeleine J

    2016-01-01

    ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research; from the French écouter, 'to listen') is a new stakeholder engagement method incorporating existing evidence to help participants draw upon their own knowledge of cognate issues and interact on a topic of shared concern. The results of an ECOUTER can form the basis of recommendations for research, governance, practice and/or policy. This paper describes the development of a digital methodology for the ECOUTER engagement process based on currently available mind-mapping freeware. The implementation of an ECOUTER process tailored to applications within health studies is outlined for both online and face-to-face scenarios. Limitations of the present digital methodology are discussed, highlighting the requirement for purpose-built software for ECOUTER research purposes.

  20. UTM TCL2 Software Requirements

    NASA Technical Reports Server (NTRS)

    Smith, Irene S.; Rios, Joseph L.; McGuirk, Patrick O.; Mulfinger, Daniel G.; Venkatesan, Priya; Smith, David R.; Baskaran, Vijayakumar; Wang, Leo

    2017-01-01

    The Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2 software implements the UTM TCL 2 software requirements described herein. These software requirements are linked to the higher-level UTM TCL 2 system requirements. Each successive TCL implements additional UTM functionality, enabling additional use cases. TCL 2 demonstrated how to enable expanded multiple operations by implementing automation for beyond-visual-line-of-sight flight, tracking of operations, and operations over sparsely populated areas.

  1. The road to successful ITS software acquisition. Volume 2, Software acquisition process reference guide

    DOT National Transportation Integrated Search

    2000-12-01

    The current performance-related specifications (PRS) methodology has been under development by the Federal Highway Administration (FHWA) for several years and has now reached a level at which it can be implemented by State highway agencies. PRS for h...

  2. Computer-aided software development process design

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
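
    A deliberately tiny system-dynamics sketch in the spirit of SLICS (the actual model has many more interacting phases): tasks flow from "remaining" to "done" with a rework feedback loop, integrated by a simple Euler step. All rates and the project size are invented.

        # Minimal system-dynamics loop: completion with a rework feedback.
        remaining, done = 1000.0, 0.0     # tasks (hypothetical project size)
        staff, productivity = 5.0, 2.0    # people, tasks/person/week
        rework_fraction, dt = 0.15, 1.0   # weeks per Euler step
        week = 0
        while remaining > 1.0 and week < 200:
            completed = min(staff * productivity, remaining) * dt
            remaining += rework_fraction * completed - completed
            done += (1 - rework_fraction) * completed
            week += 1
        print(f"~{week} weeks to finish under these assumptions")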

  3. Demonstration of the Dynamic Flowgraph Methodology using the Titan 2 Space Launch Vehicle Digital Flight Control System

    NASA Technical Reports Server (NTRS)

    Yau, M.; Guarro, S.; Apostolakis, G.

    1993-01-01

    Dynamic Flowgraph Methodology (DFM) is a new approach developed to integrate the modeling and analysis of the hardware and software components of an embedded system. The objective is to complement the traditional approaches, which generally follow the philosophy of separating out the hardware and software portions of the assurance analysis. In this paper, the DFM approach is demonstrated using the Titan 2 Space Launch Vehicle Digital Flight Control System. The hardware and software portions of this embedded system are modeled in an integrated framework. In addition, the time-dependent behavior and the switching logic can be captured by this DFM model. In the modeling process, it was found that constructing decision tables for software subroutines is very time-consuming, and a possible solution is suggested: using a well-known numerical method, the Newton-Raphson method, to solve the equations implemented in the subroutines in reverse. Convergence can be achieved in a few steps.
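
    The reverse-solving trick mentioned in the abstract can be made concrete as follows: treat a subroutine as f(x) = y and apply Newton-Raphson with a numerical derivative to recover the input x that produces a given output y. The subroutine below is a stand-in, not taken from the paper.

        # Run a subroutine "in reverse" via Newton-Raphson on f(x) - y = 0.
        def subroutine(x):                 # hypothetical control-law computation
            return x**3 + 2.0 * x - 5.0

        def solve_reverse(f, y, x0=0.0, tol=1e-10, h=1e-6):
            x = x0
            for _ in range(50):
                r = f(x) - y
                if abs(r) < tol:
                    break
                dfdx = (f(x + h) - f(x - h)) / (2 * h)   # numerical derivative
                x -= r / dfdx
            return x

        x = solve_reverse(subroutine, 10.0)   # which input yields output 10?
        print(x, subroutine(x))               # converges in a few steps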

  4. Process optimization via response surface methodology in the treatment of metal working industry wastewater with electrocoagulation.

    PubMed

    Guvenc, Senem Yazici; Okut, Yusuf; Ozak, Mert; Haktanir, Birsu; Bilgili, Mehmet Sinan

    2017-02-01

    In this study, process parameters for chemical oxygen demand (COD) and turbidity removal from metal working industry (MWI) wastewater were optimized by electrocoagulation (EC) using aluminum, iron and steel electrodes. The effects of the process variables on COD and turbidity were investigated by developing a mathematical model using the central composite design method, one of the response surface methodologies. Variance analysis was conducted to identify the interactions between process variables and model responses and the optimum conditions for COD and turbidity removal. Second-order regression models were developed via the Statgraphics Centurion XVI.I software program to predict COD and turbidity removal efficiencies. Under the optimum conditions, removal efficiencies obtained with aluminum electrodes were 76.72% for COD and 99.97% for turbidity, those obtained with iron electrodes were 76.55% for COD and 99.9% for turbidity, and those obtained with steel electrodes were 65.75% for COD and 99.25% for turbidity. Operational costs at optimum conditions were 4.83, 1.91 and 2.91 €/m³ for aluminum, iron and steel electrodes, respectively. The iron electrode was found to be more suitable for MWI wastewater treatment in terms of operational cost and treatment efficiency.
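
    For readers unfamiliar with the second-order RSM fit, the NumPy sketch below fits a quadratic response surface to central-composite-design data; the study used Statgraphics and real measurements, so the design points and responses here are fabricated, and the variables only loosely stand for two EC process parameters.

        # Second-order response-surface fit on a (fabricated) CCD design.
        import numpy as np

        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
                      [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
        y = np.array([55., 68., 60., 75., 76., 77., 72., 50., 70., 58.])  # % removal

        def design_matrix(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2,
                                    x1 * x2, x1**2, x2**2])

        beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
        print("coefficients b0,b1,b2,b12,b11,b22:", beta.round(2))
        print("predicted at centre:", design_matrix(np.zeros((1, 2))) @ beta)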

  5. Optimization of Synthesis Conditions of Carbon Nanotubes via Ultrasonic-Assisted Floating Catalyst Deposition Using Response Surface Methodology

    PubMed Central

    Mohammadian, Narges; Ghoreishi, Seyyed M.; Hafeziyeh, Samira; Saeidi, Samrand; Dionysiou, Dionysios D.

    2018-01-01

    The growing use of carbon nanotubes (CNTs) in a plethora of applications has motivated us to investigate CNT synthesis by new methods. In this study, an ultrasonic-assisted chemical vapor deposition (CVD) method was employed to synthesize CNTs. The difficulty of controlling the size of clusters and achieving uniform distribution -- the major problem in previous methods -- was solved by using an ultrasonic bath and dissolving ferrocene in xylene outside the reactor. The operating conditions were optimized using a rotatable central composite design (CCD), and response surface methodology (RSM) was used to analyze the experiments. Using statistical software was very effective, as it decreased the number of experiments needed to reach the optimum conditions. Synthesis of CNTs was studied as a function of three independent parameters, viz. hydrogen flow rate (120–280 cm³/min), catalyst concentration (2–6 wt %), and synthesis temperature (800–1200 °C). Optimum conditions for the synthesis of CNTs were found to be 3.78 wt %, 184 cm³/min, and 976 °C for catalyst concentration, hydrogen flow rate, and synthesis temperature, respectively. Under these conditions, the Raman spectrum indicates high values of IG/ID, which means high-quality CNTs. PMID:29747451

  6. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua

    2014-11-01

    Passive systems, structures and components (SSCs) degrade over their operating life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] considers physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: (i) address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7; (ii) identify the risk-significant passive components, their failure modes and anticipated rates of degradation; (iii) incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress; and (iv) assess aging effects in a dynamic simulation environment. References: 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, "Incorporating Ageing Effects into Probabilistic Risk Assessment - A Feasibility Study Utilizing Reliability Physics Models," NUREG/CR-5632, USNRC (2001). 2. T. ALDEMIR, "A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants," Annals of Nuclear Energy, 52, 113-124 (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, "Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report," INL/EXT-12-27351 (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924 (2012).
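
    As a purely illustrative companion (not the RAVEN/RELAP-7 implementation), the sketch below Monte-Carlo samples one passive component aging over plant life, with a periodic inspection that renews the component when degradation is caught. Every parameter and the degradation model are invented.

        # Monte Carlo aging of one passive component with inspection/renewal.
        import random

        def simulate_life(years=60, inspect_every=10, detect_prob=0.9,
                          degrade_rate=0.02):
            level = 0.0                 # abstract degradation level; fails at 1.0
            for year in range(1, years + 1):
                level += random.expovariate(1 / degrade_rate)  # stochastic growth
                if level >= 1.0:
                    return year                                # failure year
                if year % inspect_every == 0 and level > 0.5:
                    if random.random() < detect_prob:
                        level = 0.0                            # repaired/replaced
            return None                                        # survived plant life

        random.seed(7)
        runs = [simulate_life() for _ in range(10000)]
        print("P(failure within life) =", sum(r is not None for r in runs) / 10000)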

  7. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been made for predicting software reliability, but each model is restricted to particular types of methodologies and a restricted number of parameters, while a range of techniques and methodologies may be used for reliability prediction. There is a need to focus on parameter selection when estimating reliability: the reliability of a system may increase or decrease depending on the parameters used, so the factors that heavily affect system reliability must be identified. At present, reusability is widely used across research areas. Reusability is the basis of Component-Based Systems (CBS); cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available for applying soft computing techniques to problems in medicine: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently uses neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when making new products, providing quality with savings of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). The paper presents the working of these soft computing techniques and assesses their use in predicting reliability; the parameters considered in estimating and predicting reliability are also discussed. This study can be used in the estimation and prediction of the reliability of various instruments used in medical systems, software engineering, computer engineering and mechanical engineering. These concepts can be applied to both software and hardware to predict reliability using CBSE.
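
    To make one of the named techniques concrete, the sketch below uses a small genetic algorithm to fit the Goel-Okumoto reliability growth model mu(t) = a*(1 - exp(-b*t)) to cumulative failure counts. The data, genetic operators and settings are invented and far simpler than anything surveyed in the paper.

        # GA fitting of the Goel-Okumoto model to (made-up) failure data.
        import math, random

        t_obs = [10, 20, 30, 40, 50]
        n_obs = [12, 20, 26, 30, 33]             # cumulative failures (fake)

        def sse(ind):
            a, b = ind
            return sum((n - a * (1 - math.exp(-b * t)))**2
                       for t, n in zip(t_obs, n_obs))

        random.seed(1)
        pop = [(random.uniform(1, 100), random.uniform(0.001, 0.5))
               for _ in range(60)]
        for _ in range(200):
            pop.sort(key=sse)
            parents = pop[:20]                   # elitist selection
            children = []
            while len(children) < 40:
                (a1, b1), (a2, b2) = random.sample(parents, 2)
                children.append(((a1 + a2) / 2 * random.gauss(1, 0.05),
                                 (b1 + b2) / 2 * random.gauss(1, 0.05)))
            pop = parents + children             # crossover + mutation above
        best = min(pop, key=sse)
        print("fitted a, b:", best, "SSE:", sse(best))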

  8. The 1988 Directory of Educational Software Publishing Companies.

    ERIC Educational Resources Information Center

    Electronic Learning, 1988

    1988-01-01

    Based on questionnaires sent to educational software companies in January 1988, this directory lists 78 companies. Information given includes company address, curriculum subject areas for which the company publishes software, types of machines and operating systems on which the software operates, and grade level for which it is targeted. (LRW)

  9. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  10. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  11. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  12. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  13. 14 CFR 417.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  14. 78 FR 23866 - Airworthiness Directives; the Boeing Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ... operational software in the cabin management system, and loading new software into the mass memory card. The...-200 and -300 series airplanes. The proposed AD would have required installing new operational software in the cabin management system, and loading new software into the mass memory card. Since the...

  15. Testing the system detection unit for measuring solid minerals bulk density

    NASA Astrophysics Data System (ADS)

    Voytyuk, I. N.; Kopteva, A. V.

    2017-10-01

    The paper provides a brief description of a system for measuring flux per volume of solid minerals, using mineral coal as an example, and discloses the operational principle of the detection unit. The paper gives a full description of the testing methodology as well as the practical implementation of detection unit testing, describing the acquisition of two data arrays via the scattered- and direct-radiation channels for two generations of detection units. Matlab software is described that determines the statistical characteristics of the studied objects: the mean number of pulses per cycle, and the pulse-counting inaccuracy relative to that mean, were determined to assess the stability of the detection units.
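
    The statistics described reduce to a few lines; the sketch below (in Python rather than the Matlab used in the paper, with made-up counts) computes the mean pulse count per cycle and the relative counting inaccuracy for the two channels.

        # Mean pulses per cycle and relative counting inaccuracy per channel.
        import numpy as np

        scattered = np.array([1012, 998, 1005, 1021, 990, 1003])   # fake counts
        direct = np.array([5204, 5180, 5221, 5198, 5210, 5189])

        for name, pulses in (("scattered", scattered), ("direct", direct)):
            mean = pulses.mean()
            rel_err = pulses.std(ddof=1) / mean      # relative inaccuracy
            print(f"{name}: mean={mean:.1f}, relative error={rel_err:.3%}")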

  16. The optimization problems of CP operation

    NASA Astrophysics Data System (ADS)

    Kler, A. M.; Stepanova, E. L.; Maximov, A. S.

    2017-11-01

    The problem of enhancing the energy and economic efficiency of a cogeneration plant (CP) is an urgent one, and one of the main methods for solving it is optimization of CP operation. To solve the optimization problems of CP operation, the Energy Systems Institute, SB of RAS, has developed software that performs optimization calculations of CP operation, based on techniques and software tools for the mathematical modeling and optimization of heat and power installations. Detailed mathematical models of new equipment have been developed in this work; they describe with sufficient accuracy the processes that occur in the installations. The developed models include steam turbine models (based on the checking calculation) which take account of all steam turbine compartments and the regeneration system, and which also enable calculations with regenerative heaters disconnected. The software implements the technique for optimizing CP operating conditions and integrates the modeling and optimization tools in a common user interface. The optimization of CP operation often generates the need to determine the minimum and maximum possible total useful electric capacity of the plant at the set heat loads of consumers, i.e. the interval over which the CP capacity may vary. The software has been applied to optimize the operating conditions of the Novo-Irkutskaya CP of JSC “Irkutskenergo”. The efficiency of operating-condition optimization and the possibility of determining the CP energy characteristics needed for optimization of power system operation are shown.
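
    The capacity-interval question mentioned above can be illustrated with a toy linear program: with the heat load fixed, minimize and then maximize total electric output. Each turbine is reduced to a crude linear model here and all numbers are invented; the Institute's software uses detailed nonlinear turbine models instead.

        # Min/max total electric output at a fixed heat load (toy LP).
        from scipy.optimize import linprog

        # decision variables: heat extraction q1, q2 [MW_th] of two turbines
        heat_demand = 300.0
        p0, k = [40.0, 30.0], [0.45, 0.50]        # p = p0 + k*q per turbine
        A_eq, b_eq = [[1.0, 1.0]], [heat_demand]  # extractions must meet demand
        bounds = [(80.0, 250.0), (60.0, 220.0)]   # admissible extraction ranges

        for sense, sign in (("min", 1.0), ("max", -1.0)):
            res = linprog(c=[sign * k[0], sign * k[1]],
                          A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            total = sum(p0) + k[0] * res.x[0] + k[1] * res.x[1]
            print(f"{sense} total power: {total:.1f} MW at q = {res.x}")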

  17. Comparison of 3D reconstruction of mandible for pre-operative planning using commercial and open-source software

    NASA Astrophysics Data System (ADS)

    Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad

    2016-12-01

    3D printing of the mandible is important for pre-operative planning, diagnostic purposes, and education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Use of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using the commercial Materialise Mimics software and the open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512x512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and 3D models of the mandible were reconstructed using both the commercial Materialise Mimics and the open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. The models were compared using the Wilcoxon signed-rank test and the Hausdorff distance. No significant differences were obtained between the 3D models of the mandible produced using the Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that from the commercial Mimics software; therefore, open-source software could be used in a clinical setting for pre-operative planning to minimise operational cost.
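
    The Hausdorff-distance comparison step is easy to sketch with SciPy; the point sets below are random stand-ins for vertices sampled from the Mimics and MITK models.

        # Symmetric Hausdorff distance between two vertex clouds.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        rng = np.random.default_rng(0)
        mimics_pts = rng.random((500, 3))                  # stand-in vertex set
        mitk_pts = mimics_pts + rng.normal(0, 0.002, (500, 3))

        d_ab = directed_hausdorff(mimics_pts, mitk_pts)[0]
        d_ba = directed_hausdorff(mitk_pts, mimics_pts)[0]
        print("Hausdorff distance:", max(d_ab, d_ba))      # small => models agree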

  18. Use phase signals to promote lifetime extension for Windows PCs.

    PubMed

    Hickey, Stewart; Fitzpatrick, Colin; O'Connell, Maurice; Johnson, Michael

    2009-04-01

    This paper proposes a signaling methodology for personal computers. Signaling may be viewed as an ecodesign strategy that can positively influence the consumer-to-consumer (C2C) market process. A number of parameters are identified that can provide the basis for signal implementation. These include operating time, operating temperature, operating voltage, power cycle counts, hard disk drive (HDD) Self-Monitoring, Analysis and Reporting Technology (SMART) attributes, and operating system (OS) event information. All these parameters are currently attainable or derivable via embedded technologies in modern desktop systems. A case study is presented detailing a technical implementation of how signals can be developed in personal computers running Microsoft Windows operating systems. Collation of lifetime temperature data from a system processor is demonstrated as a possible means of characterizing a usage profile for a desktop system. In addition, event log data is utilized for devising signals indicative of OS quality. The provision of lifetime usage data in the form of intuitive signals, indicative of both hardware and software quality, can, in conjunction with consumer education, facilitate an optimal remarketing strategy for used systems. This implementation requires no additional hardware.

  19. Advanced software development workstation: Knowledge base methodology: Methodology for first Engineering Script Language (ESL) knowledge base

    NASA Technical Reports Server (NTRS)

    Peeris, Kumar; Izygon, Michel

    1993-01-01

    This report explains some of the concepts of the ESL prototype and summarizes some of the lessons learned in using the prototype for implementing the Flight Mechanics Tool Kit (FMToolKit) series of Ada programs.

  20. Teaching Camera Calibration by a Constructivist Methodology

    ERIC Educational Resources Information Center

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  1. Software Design Improvements. Part 1; Software Benefits and Limitations

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.; Packard, Michael H.; Ziemianski, Tom

    1997-01-01

    Computer hardware and associated software have been used for many years to process accounting information, to analyze test data and to perform engineering analysis. Now computers and software also control everything from automobiles to washing machines, and the number and type of applications are growing at an exponential rate. The size of individual programs has shown similar growth. Furthermore, software and hardware are used to monitor and/or control potentially dangerous products and safety-critical systems. These uses include everything from airplanes and braking systems to medical devices and nuclear plants. The questions are: how can this hardware and software be made more reliable? How can software quality be improved? What methodology needs to be provided on large and small software products to improve the design, and how can software be verified?

  2. Examining Operational Software Influence on User Satisfaction within Small Manufacturing Businesses

    ERIC Educational Resources Information Center

    Frey, W. Bruce

    2010-01-01

    Managing a business requires vigilance and diligence. Small business owners are often ignored by IT vendors and inundated by the choice of software applications and, therefore, need help finding a viable operating software solution for small business decisions and development. The extent, if any, of a significant influence of operational software…

  3. Integrated Software Health Management for Aircraft GN and C

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mengshoel, Ole

    2011-01-01

    Modern aircraft rely heavily on the dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interaction or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes software health management (SWHM) feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.
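
    A toy stand-in for the Bayesian health model: two root causes (software fault, sensor fault) and one observed symptom, with the posterior computed by direct enumeration. All probabilities are invented, and the paper's models are compiled into arithmetic circuits rather than enumerated like this.

        # Posterior over two fault hypotheses given one observed alarm.
        from itertools import product

        p_sw, p_sensor = 0.01, 0.05                 # priors on the two faults
        def p_alarm(sw, sensor):                    # P(symptom | causes)
            return 0.95 if sw else (0.80 if sensor else 0.02)

        joint = {}
        for sw, sensor in product([0, 1], repeat=2):
            prior = ((p_sw if sw else 1 - p_sw)
                     * (p_sensor if sensor else 1 - p_sensor))
            joint[(sw, sensor)] = prior * p_alarm(sw, sensor)   # alarm observed
        z = sum(joint.values())
        p_software_fault = sum(v for (sw, _), v in joint.items() if sw) / z
        print(f"P(software fault | alarm) = {p_software_fault:.2f}")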

  4. Development of Techniques for Visualization of Scalar and Vector Fields in the Immersive Environment

    NASA Technical Reports Server (NTRS)

    Bidasaria, Hari B.; Wilson, John W.; Nealy, John E.

    2005-01-01

    Visualization of scalar and vector fields in the immersive environment (CAVE - Cave Automated Virtual Environment) is important for its application to radiation shielding research at NASA Langley Research Center. A complete methodology and the underlying software for this purpose have been developed. The software has been put to use for the visualization of the Earth's magnetic field, in particular for the study of the South Atlantic Anomaly. The methodology has also been applied to the visualization of geomagnetically trapped protons and electrons within the Earth's magnetosphere.
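
    A small sketch of generating the vector field behind such a visualization: the Earth's field approximated as a centred dipole evaluated at sample points. The CAVE rendering pipeline itself is out of scope here, and the dipole moment is only a rough textbook value, not a parameter from the paper.

        # Centred-dipole approximation of the geomagnetic field.
        import numpy as np

        def dipole_B(r, m=np.array([0.0, 0.0, -8.0e22])):   # A*m^2, roughly Earth
            mu0_4pi = 1e-7
            rn = np.linalg.norm(r)
            rhat = r / rn
            return mu0_4pi * (3 * rhat * np.dot(m, rhat) - m) / rn**3

        Re = 6.371e6                                        # Earth radius (m)
        for lat in (0, 45, 90):                             # sample points at 1 Re
            th = np.radians(lat)
            r = Re * np.array([np.cos(th), 0.0, np.sin(th)])
            print(lat, "deg:", dipole_B(r) * 1e9, "nT")     # tens of thousands nT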

  5. WAMA: a method of optimizing reticle/die placement to increase litho cell productivity

    NASA Astrophysics Data System (ADS)

    Dor, Amos; Schwarz, Yoram

    2005-05-01

    This paper focuses on reticle/field placement methodology issues, the disadvantages of typical methods used in the industry, and the innovative way that the WAMA software solution achieves optimized placement. Typical wafer placement methodologies used in the semiconductor industry consider a very limited number of parameters, such as placing the maximum number of dies on the wafer circle and manually modifying die placement to minimize edge yield degradation. This paper describes how WAMA software takes into account process characteristics, manufacturing constraints and business objectives to optimize placement for maximum stepper productivity and maximum good die (yield) on the wafer.
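
    As a point of reference for the "maximum dies on the wafer circle" baseline that the paper argues is insufficient, the sketch below counts the dies of a rectangular grid that fit entirely inside a usable wafer circle; all dimensions are illustrative, and nothing here reflects WAMA's actual optimization.

        # Naive gross-die count (the baseline placement criterion the paper
        # contrasts with WAMA); all numbers are illustrative, in millimetres.
        import math

        def gross_die_count(wafer_diameter, die_w, die_h, edge_exclusion=3.0):
            """Count dies whose four corners all fall inside the usable circle."""
            r = wafer_diameter / 2.0 - edge_exclusion
            nx = int(math.ceil(wafer_diameter / die_w))
            ny = int(math.ceil(wafer_diameter / die_h))
            count = 0
            for i in range(-nx, nx + 1):
                for j in range(-ny, ny + 1):
                    corners = [(i * die_w, j * die_h),
                               ((i + 1) * die_w, j * die_h),
                               (i * die_w, (j + 1) * die_h),
                               ((i + 1) * die_w, (j + 1) * die_h)]
                    if all(x * x + y * y <= r * r for x, y in corners):
                        count += 1
            return count

        print(gross_die_count(300.0, 10.0, 12.0))  # 300 mm wafer, 10x12 mm die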

  6. Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment

    NASA Technical Reports Server (NTRS)

    Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun

    2006-01-01

    Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits in ease of integration, testability and reusability from these techniques. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue, and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operation components. We have also had to determine how to integrate existing mission applications for sequence development, sequence validation, high-level activity planning, and other functions into a component-based environment. For each of these, we used a somewhat different technique based upon the structure and usage of the existing application.

  7. Agile Software Teams: How They Engage with Systems Engineering on DoD Acquisition Programs

    DTIC Science & Technology

    2014-07-01

    under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded...issues that would preclude or limit the use of Agile methods within the DoD” [Broadus 2013]. As operational tempos increase and programs fight to...environment in which it operates. This makes software different from other disciplines that have tolerances, generally resulting in software engineering

  8. Static and Dynamic Verification of Critical Software for Space Applications

    NASA Astrophysics Data System (ADS)

    Moreira, F.; Maia, R.; Costa, D.; Duro, N.; Rodríguez-Dapena, P.; Hjortnaes, K.

    Space technology is no longer used only for highly specialised research activities or for sophisticated manned space missions. Modern society relies more and more on space technology and applications for everyday activities. Worldwide telecommunications, Earth observation, navigation and remote sensing are only a few examples of space applications on which we rely daily. The European-driven global navigation system Galileo and its associated applications, e.g. air traffic management and vessel and car navigation, will significantly expand the already stringent safety requirements for space-based applications. Apart from their usefulness and practical applications, every single piece of onboard software deployed into space represents an enormous investment. With a long operational lifetime, and being extremely difficult to maintain and upgrade, at least in comparison with "mainstream" software development, the importance of ensuring their correctness before deployment is immense. Verification & Validation techniques and technologies have a key role in ensuring that the onboard software is correct and error free, or at least free from errors that can potentially lead to catastrophic failures. Many RAMS techniques, including both static criticality analysis and dynamic verification techniques, have been used as a means to verify and validate critical software and to ensure its correctness. Traditionally, however, these have been applied in isolation. One of the main reasons is the immaturity of this field with regard to its application to the growing software products within space systems. This paper presents an innovative way of combining both static and dynamic techniques, exploiting their synergy and complementarity for software fault removal. The methodology proposed is based on the combination of Software FMEA and FTA with fault-injection techniques. The case study herein described is implemented with support from two tools: the SoftCare tool for the SFMEA and SFTA, and the Xception tool for fault injection. Keywords: Verification & Validation, RAMS, Onboard software, SFMEA, SFTA, Fault-injection. (This work is being performed under the project STADY Applied Static And Dynamic Verification Of Critical Software, ESA/ESTEC Contract Nr. 15751/02/NL/LvH.)
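
    One way to picture the synergy: the static analyses (SFMEA/SFTA) tell the dynamic campaign where to aim. The sketch below ranks fault-injection targets by an FMEA-style criticality score and splits an injection budget accordingly; the data model is hypothetical and does not represent the SoftCare or Xception interfaces.

        # Sketch of the static/dynamic combination idea: spend the
        # fault-injection budget on the components the SFMEA flags as
        # riskiest. All fields and numbers are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class FmeaEntry:
            component: str
            failure_mode: str
            severity: int      # 1 (negligible) .. 4 (catastrophic)
            likelihood: int    # 1 (rare) .. 4 (frequent)

        def plan_injections(fmea, budget):
            """Allocate injection runs proportionally to severity x likelihood."""
            ranked = sorted(fmea, key=lambda e: e.severity * e.likelihood, reverse=True)
            total = sum(e.severity * e.likelihood for e in ranked)
            return [(e.component, e.failure_mode,
                     max(1, round(budget * e.severity * e.likelihood / total)))
                    for e in ranked]

        fmea = [FmeaEntry("attitude_ctrl", "stale sensor input", 4, 2),
                FmeaEntry("telemetry", "buffer overflow", 2, 3),
                FmeaEntry("thermal_mgmt", "stuck-at output", 3, 1)]
        for component, mode, runs in plan_injections(fmea, budget=100):
            print(f"{component:14s} {mode:20s} -> {runs} injection runs")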

  9. Corneal modeling for analysis of photorefractive keratectomy

    NASA Astrophysics Data System (ADS)

    Della Vecchia, Michael A.; Lamkin-Kennard, Kathleen

    1997-05-01

    Procedurally, excimer photorefractive keratectomy is based on the refractive correction of composite spherical and cylindrical ophthalmic errors of the entire eye. These refractive errors are inputted for correction at the corneal plane and for the properly controlled duration and location of laser energy. Topography is usually taken to correspondingly monitor spherical and cylindrical corneorefractive errors. While a corneal topographer provides surface morphologic information, the keratorefractive photoablation is based on the patient's spherical and cylindrical spectacle correction. Topography is at present not directly part of the procedural deterministic parameters. Examining how corneal curvature at each of the keratometric reference loci affects the shape of the resultant corneal photoablated surface may enhance the accuracy of the desired correction. The objective of this study was to develop a methodology to utilize corneal topography for construction of models depicting pre- and post-operative keratomorphology for analysis of photorefractive keratectomy. Multiple types of models were developed and then recreated in optical design software for examination of focal lengths and other optical characteristics. The corneal models were developed using data extracted from the TMS I corneal modeling system (Computed Anatomy, New York, NY). The TMS I does not allow for manipulation of data or differentiation of pre- and post-operative surfaces within its platform, thus models needed to be created for analysis. The data were imported into Matlab, where 3D models, surface meshes, and contour plots were created. The data used to generate the models were pre- and post-operative curvatures, heights from the corneal apex, and x-y positions at 6400 locations on the corneal surface. Outlying non-contributory points were eliminated through statistical operations. Pre- and post-operative models were analyzed to obtain the resultant changes in the corneal surfaces during PRK. A sensitivity analysis of the corneal topography system was also performed. Ray tracings were performed using the height data and the optical design software Zemax (Focus Software, Inc., Tucson, AZ). Examining pre- and post-operative values of corneal surfaces may further the understanding of how areas of the cornea contribute toward desired visual correction. Gross resultant power across the corneal surface is used in PRK; however, understanding the contribution of each point to the average power may have important implications and prove to be significant for achieving projected surgical results.

  10. Object Oriented Learning Objects

    ERIC Educational Resources Information Center

    Morris, Ed

    2005-01-01

    We apply the object oriented software engineering (OOSE) design methodology for software objects (SOs) to learning objects (LOs). OOSE extends and refines design principles for authoring dynamic reusable LOs. Our learning object class (LOC) is a template from which individualised LOs can be dynamically created for, or by, students. The properties…

  11. Data synthesis and display programs for wave distribution function analysis

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.; Yeh, K. J.

    1992-01-01

    At the National Space Science Data Center (NSSDC) software was written to synthesize and display artificial data for use in developing the methodology of wave distribution analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.

  12. Extreme Programming: A Kuhnian Revolution?

    NASA Astrophysics Data System (ADS)

    Northover, Mandy; Northover, Alan; Gruner, Stefan; Kourie, Derrick G.; Boake, Andrew

    This paper critically assesses the extent to which the Agile Software community's use of Thomas Kuhn's theory of revolutionary scientific change is justified. It will be argued that Kuhn's concepts of "scientific revolution" and "paradigm shift" cannot adequately explain the change from one type of software methodology to another.

  13. The Design and Development of a Web-Interface for the Software Engineering Automation System

    DTIC Science & Technology

    2001-09-01

    application on the Internet. SUBJECT TERMS: Computer Aided Prototyping, Real Time Systems, Java ...difficult. Developing the entire system only to find it does not meet the customer’s needs is a tremendous waste of time. Real-time systems need a...software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems [2]. One

  14. Designing application software in wide area network settings

    NASA Technical Reports Server (NTRS)

    Makpangou, Mesaac; Birman, Ken

    1990-01-01

    Progress in methodologies for developing robust local area network software has not been matched by similar results for wide area settings. The design of application software spanning multiple local area environments is examined. For important classes of applications, simple design techniques are presented that yield fault tolerant wide area programs. An implementation of these techniques as a set of tools for use within the ISIS system is described.

  15. Performance testing of LiDAR exploitation software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-04-01

    Mobile LiDAR systems have been used widely in recent years for many applications in the field of geoscience. One of the most important limitations of this technology is the large computational requirement involved in data processing. Several software solutions for data processing are available in the market, but users often do not know how to verify their performance accurately. In this work, a methodology for LiDAR software performance testing is presented and six different suites are studied: QT Modeler, AutoCAD Civil 3D, Mars 7, Fledermaus, Carlson and TopoDOT (all of them in x64). Results show that QT Modeler, TopoDOT and AutoCAD Civil 3D allow the loading of large datasets, while Fledermaus, Mars 7 and Carlson do not achieve this level of performance. AutoCAD Civil 3D needs a long loading time in comparison with the most powerful suites, such as QT Modeler and TopoDOT. The Carlson suite shows the poorest results among all the suites under study: point clouds larger than 5 million points cannot be loaded, and loading time is very long in comparison with the other suites even for the smaller datasets. AutoCAD Civil 3D, Carlson and TopoDOT use more threads than the other suites, such as QT Modeler, Mars 7 and Fledermaus.
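
    The measurement itself reduces to timing repeated loads of progressively larger point clouds. A minimal harness in that spirit is sketched below; since the suites under study are driven interactively, a hypothetical programmatic loader (e.g. laspy) stands in for them.

        # Minimal load-time benchmark harness in the spirit of the paper's
        # methodology; load_fn is a hypothetical point-cloud loader.
        import time

        def benchmark_loader(load_fn, dataset_paths, repeats=3):
            """Time a point-cloud loading function over datasets of growing size."""
            results = {}
            for path in dataset_paths:
                times = []
                for _ in range(repeats):
                    t0 = time.perf_counter()
                    load_fn(path)               # load and discard the cloud
                    times.append(time.perf_counter() - t0)
                results[path] = min(times)      # best-of-N damps OS noise
            return results

        # Usage (assuming the laspy package is installed and .las files exist):
        #   import laspy
        #   print(benchmark_loader(lambda p: laspy.read(p), ["5M.las", "50M.las"]))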

  16. Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development

    NASA Astrophysics Data System (ADS)

    Jasiak, M. E.; Truslove, I.; Savoie, M.

    2013-12-01

    In scientific research, software exists fundamentally for the results it creates. The core research must take focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with building software in a sustainable fashion; short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines while building in the support structure necessary for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as promoting interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how this model addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile- and lean-influenced model, and the current challenges faced by the organization.

  17. Thyroid Cancer and Tumor Collaborative Registry (TCCR).

    PubMed

    Shats, Oleg; Goldner, Whitney; Feng, Jianmin; Sherman, Alexander; Smith, Russell B; Sherman, Simon

    2016-01-01

    A multicenter, web-based Thyroid Cancer and Tumor Collaborative Registry (TCCR, http://tccr.unmc.edu) allows for the collection and management of various data on thyroid cancer (TC) and thyroid nodule (TN) patients. The TCCR is coupled with OpenSpecimen, an open-source biobank management system, to annotate biospecimens obtained from the TCCR subjects. The demographic, lifestyle, physical activity, dietary habits, family history, medical history, and quality of life data are provided and may be entered into the registry by subjects. Information on diagnosis, treatment, and outcome is entered by the clinical personnel. The TCCR uses advanced technical and organizational practices, such as (i) metadata-driven software architecture (design); (ii) modern standards and best practices for data sharing and interoperability (standardization); (iii) Agile methodology (project management); (iv) Software as a Service (SaaS) as a software distribution model (operation); and (v) the confederation principle as a business model (governance). This allowed us to create a secure, reliable, user-friendly, and self-sustainable system for TC and TN data collection and management that is compatible with various end-user devices and easily adaptable to a rapidly changing environment. Currently, the TCCR contains data on 2,261 subjects and data on more than 28,000 biospecimens. Data and biological samples collected by the TCCR are used in developing diagnostic, prevention, treatment, and survivorship strategies against TC.

  18. Conservative Allowables Determined by a Tsai-Hill Equivalent Criterion for Design of Satellite Composite Parts

    NASA Astrophysics Data System (ADS)

    Pommatau, Gilles

    2014-06-01

    The present paper deals with the industrial application, via software developed by Thales Alenia Space, of a new failure criterion named the "Tsai-Hill equivalent criterion" for composite structural parts of satellites. The first part of the paper briefly describes the main hypotheses and the failure analysis possibilities of the software. The second part recalls the quadratic and conservative nature of the new failure criterion, already presented in a previous paper at an ESA conference. The third part presents the statistical calculation possibilities of the software, and the associated sensitivity analysis, via results obtained on different composites. Then a methodology, proposed to customers and agencies, is presented with its limitations and advantages. It is concluded that this methodology is an efficient industrial way to perform mechanical analysis on quasi-isotropic composite parts.

  19. Real-time closed-loop simulation and upset evaluation of control systems in harsh electromagnetic environments

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1989-01-01

    Digital control systems for applications such as aircraft avionics and multibody systems must maintain adequate control integrity in adverse as well as nominal operating conditions. For example, control systems for advanced aircraft, and especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met regardless of operating conditions. In addition, multibody systems such as robotic manipulators performing critical functions must have control systems capable of robust performance in any operating environment in order to complete the assigned task reliably. Severe operating conditions for electronic control systems can result from electromagnetic disturbances caused by lightning, high energy radio frequency (HERF) transmitters, and nuclear electromagnetic pulses (NEMP). For this reason, techniques must be developed to evaluate the integrity of the control system in adverse operating environments. The most difficult and elusive perturbations to computer-based control systems that can be caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes are collectively known as upset, can occur simultaneously in all of the channels of a redundant control system, and are software dependent. Upset studies performed to date have not addressed the assessment of fault tolerant systems and do not involve the evaluation of a control system operating in a closed-loop with the plant. A methodology for performing a real-time simulation of the closed-loop dynamics of a fault tolerant control system with a simulated plant operating in an electromagnetically harsh environment is presented. In particular, considerations for performing upset tests on the controller are discussed. Some of these considerations are the generation and coupling of analog signals representative of electromagnetic disturbances to a control system under test, analog data acquisition, and digital data acquisition from fault tolerant systems. In addition, a case study of an upset test methodology for a fault tolerant electronic aircraft engine control system is presented.
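
    The essence of the closed-loop upset test is to run the controller against a plant model, corrupt its inputs transiently, and compare against a nominal run. The toy sketch below does this for a PI controller and a first-order plant; the real testbed injects analog electromagnetic disturbances into fault-tolerant hardware, which no software toy captures.

        # Toy closed-loop upset experiment: a discrete PI controller regulates
        # a first-order plant while a transient disturbance (standing in for
        # an EME-induced upset) corrupts the sensed input for a few samples.

        def run(upset_at=None, n=200, dt=0.01):
            x, integ, setpoint = 0.0, 0.0, 1.0
            trace = []
            for k in range(n):
                measured = x
                if upset_at is not None and upset_at <= k < upset_at + 5:
                    measured += 5.0              # transient sensor corruption
                err = setpoint - measured
                integ += err * dt
                u = 2.0 * err + 1.0 * integ      # PI control law
                x += dt * (-x + u)               # first-order plant dynamics
                trace.append(x)
            return trace

        nominal, upset = run(), run(upset_at=100)
        print("max deviation due to upset:",
              max(abs(a - b) for a, b in zip(nominal, upset)))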

  20. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
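
    The debugging-graph idea can be pictured with a toy example: each fault has its own failure rate, and each fault-removal order (a path through the graph) produces a different observed failure-rate sequence for a reliability model to fit; the rates below are made up for illustration.

        # Toy debugging graph: nodes are sets of removed faults; each
        # root-to-leaf path is one fault-recovery order, and each path yields
        # its own data sequence — which is why model predictions vary by path.
        from itertools import permutations

        fault_rates = {"A": 0.5, "B": 0.3, "C": 0.2}   # per-fault failure rates

        def path_data(order):
            """Observed program failure rate after each removal along one path."""
            remaining = dict(fault_rates)
            seq = [sum(remaining.values())]
            for fault in order:
                remaining.pop(fault)
                seq.append(sum(remaining.values()))
            return seq

        for order in permutations(fault_rates):
            print(order, [round(r, 2) for r in path_data(order)])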

  1. Nanosatellite and Plug-and-Play Architecture 2 (NAPA 2)

    DTIC Science & Technology

    2017-02-28

    potentially other militarily relevant roles. The "i-Missions" focus area studies the kinetics of rapid mission development. The methodology involves...the US and Sweden in the Nanosatellite and Plug-and-play Architecture or "NAPA" program) is to pioneer a methodology for creating mission capable 6U...spacecraft. The methodology involves interchangeable blackbox (self-describing) components, software (middleware and applications), advanced

  2. An ontology based trust verification of software license agreement

    NASA Astrophysics Data System (ADS)

    Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo

    2017-08-01

    When we install or download software, a large document is displayed stating rights and obligations, which many users lack the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based verification for Software License Agreements. First, this work proposes an ontology model for the domain of Software License Agreements. The domain ontology is constructed by the proposed methodology according to copyright laws and 30 software license agreements. The License Ontology can act as part of a generalized copyright-law knowledge model, and can also serve as a visualization of software licenses. Based on this ontology, a software-license-oriented text summarization approach is proposed, whose performance shows that it can improve the accuracy of software license summarization. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.

  3. Digital Geological Mapping for Earth Science Students

    NASA Astrophysics Data System (ADS)

    England, Richard; Smith, Sally; Tate, Nick; Jordan, Colm

    2010-05-01

    This SPLINT (SPatial Literacy IN Teaching) supported project is developing pedagogies for the introduction of teaching of digital geological mapping to Earth Science students. Traditionally students are taught to make geological maps on a paper basemap with a notebook to record their observations. Learning to use a tablet pc with GIS based software for mapping and data recording requires emphasis on training staff and students in specific GIS and IT skills and beneficial adjustments to the way in which geological data is recorded in the field. A set of learning and teaching materials are under development to support this learning process. Following the release of the British Geological Survey's Sigma software we have been developing generic methodologies for the introduction of digital geological mapping to students that already have experience of mapping by traditional means. The teaching materials introduce the software to the students through a series of structured exercises. The students learn the operation of the software in the laboratory by entering existing observations, preferably data that they have collected. Through this the students benefit from being able to reflect on their previous work, consider how it might be improved and plan new work. Following this they begin fieldwork in small groups using both methods simultaneously. They are able to practise what they have learnt in the classroom and review the differences, advantages and disadvantages of the two methods, while adding to the work that has already been completed. Once the field exercises are completed students use the data that they have collected in the production of high quality map products and are introduced to the use of integrated digital databases which they learn to search and extract information from. The relatively recent development of the technologies which underpin digital mapping also means that many academic staff also require training before they are able to deliver the course materials. Consequently, a set of staff training materials are being developed in parallel to those for the students. These focus on the operation of the software and an introduction to the structure of the exercises. The presentation will review the teaching exercises and student and staff responses to their introduction.

  4. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  5. Constraints and Opportunities in GCM Model Development

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin; Clune, Thomas

    2010-01-01

    Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Opportunely, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.

  6. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment.

  7. Methodology and measures for preventing unacceptable flow-accelerated corrosion thinning of pipelines and equipment of NPP power generating units

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.

    2016-10-01

    Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. The goals, objectives, and main principles of the methodology for implementing an integrated program of AO Concern Rosenergoatom for preventing unacceptable FAC thinning and for increasing the operational flow-accelerated corrosion resistance of NPP pipelines and equipment (EaP) are formulated (hereinafter, the Program). The role and potential of Russian software packages for evaluating and predicting FAC rates are shown in solving practical problems of the timely detection of unacceptable FAC thinning in elements of pipelines and equipment of the secondary circuit of NPP PGU. Information is given concerning the structure, properties, and functions of the software systems for plant personnel support in monitoring and planning the in-service inspection of FAC-thinned elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of software packages for supporting NPP personnel on the issue of flow-accelerated corrosion consists in revealing elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in the use of software systems for supporting personnel in the early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and new tasks set in 2012 as part of the updated Program are outlined. The prospects of the developed methods and tools within the scope of the Program measures at the design and construction stages of NPP PGU are discussed. The main directions of work on solving the problems of flow-accelerated corrosion of pipelines and equipment in Russian NPP PGU are defined.
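
    At its simplest, FAC screening extrapolates measured wall thinning to the minimum allowable thickness. The sketch below shows that back-of-envelope calculation with illustrative numbers; the software systems described in the paper embed far richer FAC-rate models than a constant thinning rate.

        # Back-of-envelope FAC screening sketch (not the Russian software
        # packages described): time to the minimum allowable wall thickness
        # from two inspection measurements, assuming a constant thinning rate.

        def years_to_limit(t_prev, t_curr, years_between, t_min):
            """Linearly extrapolate wall thinning to the minimum allowable wall."""
            rate = (t_prev - t_curr) / years_between     # mm per year
            if rate <= 0:
                return float("inf")                      # no measurable thinning
            return (t_curr - t_min) / rate

        # Example: 8.0 mm -> 7.4 mm over 4 years, allowable minimum 5.0 mm
        print(f"{years_to_limit(8.0, 7.4, 4.0, 5.0):.1f} years to limit")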

  8. Bootstrapping Methods Applied for Simulating Laboratory Works

    ERIC Educational Resources Information Center

    Prodan, Augustin; Campean, Remus

    2005-01-01

    Purpose: The aim of this work is to implement bootstrapping methods into software tools, based on Java. Design/methodology/approach: This paper presents a category of software e-tools aimed at simulating laboratory works and experiments. Findings: Both students and teaching staff use traditional statistical methods to infer the truth from sample…

  9. TEST (Toxicity Estimation Software Tool) Ver 4.1

    EPA Science Inventory

    The Toxicity Estimation Software Tool (T.E.S.T.) has been developed to allow users to easily estimate toxicity and physical properties using a variety of QSAR methodologies. T.E.S.T allows a user to estimate toxicity without requiring any external programs. Users can input a chem...

  10. The Stabilization, Exploration, and Expression of Computer Game History

    ERIC Educational Resources Information Center

    Kaltman, Eric

    2017-01-01

    Computer games are now a significant cultural phenomenon, and a significant artistic output of humanity. However, little effort and attention have been paid to how the medium of games and interactive software developed, and even less to the historical storage of software development documentation. This thesis borrows methodologies and practices…

  11. Risk Based Inspection Methodology and Software Applied to Atmospheric Storage Tanks

    NASA Astrophysics Data System (ADS)

    Topalis, P.; Korneliussen, G.; Hermanrud, J.; Steo, Y.

    2012-05-01

    A new risk-based inspection (RBI) methodology and software is presented in this paper. The objective of this work is to allow management of the inspections of atmospheric storage tanks in the most efficient way, while, at the same time, accident risks are minimized. The software has been built on the new risk framework architecture, a generic platform facilitating efficient and integrated development of software applications using risk models. The framework includes a library of risk models, and the user interface is automatically produced on the basis of editable schemas. This risk-framework-based RBI tool has been applied in the context of RBI for above-ground atmospheric storage tanks (AST), but it has been designed with the objective of being generic enough to allow extension to process plants in general. This RBI methodology is an evolution of an approach and mathematical models developed for Det Norske Veritas (DNV) and the American Petroleum Institute (API). The methodology assesses damage mechanism potential, degradation rates, probability of failure (PoF), consequence of failure (CoF) in terms of environmental damage and financial loss, risk, and inspection intervals and techniques. The scope includes assessment of the tank floor for soil-side external corrosion and product-side internal corrosion, and of the tank shell courses for atmospheric corrosion and internal thinning. It also includes preliminary assessment for brittle fracture and cracking. The data are structured according to an asset hierarchy including Plant, Production Unit, Process Unit, Tag, Part and Inspection levels, and the data are inherited/defaulted seamlessly from a higher hierarchy level to a lower level. The user interface includes synchronized hierarchy tree browsing, dynamic editor and grid-view editing, and active reports with drill-in capability.
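
    The arithmetic underneath any such tool is risk = PoF x CoF, with the next inspection scheduled before the predicted risk crosses a target. The sketch below shows that scheduling logic in generic form with illustrative numbers; it is not the DNV/API models the paper builds on.

        # Generic RBI scheduling sketch: step time forward, grow the PoF as
        # degradation accumulates, and stop when risk reaches the target.

        def inspection_interval(pof_now, pof_growth_per_year, cof, risk_target,
                                max_interval=10.0, step=0.25):
            """Years until PoF x CoF exceeds the target, capped at a maximum."""
            years, pof = 0.0, pof_now
            while pof * cof < risk_target and years < max_interval:
                years += step
                pof *= (1.0 + pof_growth_per_year) ** step   # compounded growth
            return years

        # Example: tank floor, soil-side corrosion (illustrative numbers only)
        print(inspection_interval(pof_now=1e-5, pof_growth_per_year=0.35,
                                  cof=2e6, risk_target=150.0))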

  12. The Design of Model-Based Training Programs

    NASA Technical Reports Server (NTRS)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

    This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), the rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilots' entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, result in error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic skills for the FMS on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  13. Implementation of Cyber-Physical Production Systems for Quality Prediction and Operation Control in Metal Casting

    PubMed Central

    Lee, JuneHyuck; Noh, Sang Do; Kim, Hyun-Jung; Kang, Yong-Shin

    2018-01-01

    The prediction of internal defects of metal casting immediately after the casting process saves unnecessary time and money by reducing the amount of input into the next stage, such as the machining process, and enables flexible scheduling. Cyber-physical production systems (CPPS) perfectly fulfill the aforementioned requirements. This study deals with the implementation of CPPS in a real factory to predict the quality of metal casting and to control operations. First, a CPPS architecture framework for quality prediction and operation control in metal-casting production was designed. The framework describes collaboration among the internet of things (IoT), artificial intelligence, simulations, manufacturing execution systems, and advanced planning and scheduling systems. Subsequently, the implementation of the CPPS in actual plants is described. Temperature is a major factor that affects casting quality, and thus temperature sensors and IoT communication devices were attached to casting machines. The well-known NoSQL database HBase and the high-speed processing/analysis tool Spark are used for the IoT repository and data pre-processing, respectively. Several machine learning algorithms, such as decision tree, random forest, artificial neural network, and support vector machine, were used for quality prediction and compared using R software. Finally, the operation of the entire system is demonstrated through a CPPS dashboard. In an era in which most CPPS-related studies are conducted on high-level abstract models, this study describes more specific architectural frameworks, use cases, usable software, and analytical methodologies. In addition, this study verifies the usefulness of CPPS by estimating quantitative effects. This is expected to contribute to the proliferation of CPPS in the industry. PMID:29734699
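
    The quality-prediction step can be illustrated with one of the algorithm families the study compared. The sketch below trains a random forest with scikit-learn on synthetic stand-ins for the temperature features; the plant's real pipeline draws such features from its HBase/Spark infrastructure rather than a random generator.

        # Quality-prediction sketch (synthetic data, not the plant's):
        # predict casting defects from temperature features with a
        # random forest, one of the algorithm families the study compared.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1000
        temps = rng.normal(700.0, 15.0, size=(n, 3))   # three temperature probes
        # synthetic rule: castings poured too cold tend to show internal defects
        defect = (temps.min(axis=1) < 685.0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(temps, defect, random_state=0)
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        print("holdout accuracy:", model.score(X_te, y_te))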

  14. Processing and review interface for strong motion data (PRISM) software, version 1.0.0—Methodology and automated processing

    USGS Publications Warehouse

    Jones, Jeanne; Kalkan, Erol; Stephens, Christopher

    2017-02-23

    A continually increasing number of high-quality digital strong-motion records from stations of the National Strong-Motion Project (NSMP) of the U.S. Geological Survey (USGS), as well as data from regional seismic networks within the United States, call for automated processing of strong-motion records with human review limited to selected significant or flagged records. The NSMP has developed the Processing and Review Interface for Strong Motion data (PRISM) software to meet this need. In combination with the Advanced National Seismic System Quake Monitoring System (AQMS), PRISM automates the processing of strong-motion records. When used without AQMS, PRISM provides batch-processing capabilities. The PRISM version 1.0.0 is platform independent (coded in Java), open source, and does not depend on any closed-source or proprietary software. The software consists of two major components: a record processing engine and a review tool that has a graphical user interface (GUI) to manually review, edit, and process records. To facilitate use by non-NSMP earthquake engineers and scientists, PRISM (both its processing engine and review tool) is easy to install and run as a stand-alone system on common operating systems such as Linux, OS X, and Windows. PRISM was designed to be flexible and extensible in order to accommodate new processing techniques. This report provides a thorough description and examples of the record processing features supported by PRISM. All the computing features of PRISM have been thoroughly tested.

  15. Experimental analysis of computer system dependability

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Tang, Dong

    1993-01-01

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
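
    Software-implemented fault injection, the simplest of the injection methods surveyed, can be illustrated in a few lines: flip one bit of a value mid-computation and compare the result against a golden run. The sketch below does this for an IEEE-754 float; tools such as FIAT and FINE automate the same idea at far larger scale and at other system levels.

        # Minimal software-implemented fault injection sketch: flip one bit
        # of a program variable and observe how the error propagates.
        import struct

        def flip_bit(x: float, bit: int) -> float:
            """Flip one bit in the IEEE-754 representation of a float."""
            (bits,) = struct.unpack("<Q", struct.pack("<d", x))
            (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
            return out

        def workload(x):
            return x * x + 1.0

        golden = workload(3.0)
        for bit in (0, 30, 52, 62):   # low/high mantissa, low/high exponent
            faulty = workload(flip_bit(3.0, bit))
            outcome = "masked" if faulty == golden else f"error ({faulty!r})"
            print(f"bit {bit:2d}: {outcome}")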

  16. Software-Based Safety Systems in Space - Learning from other Domains

    NASA Astrophysics Data System (ADS)

    Klicker, M.; Putzer, H.

    2012-01-01

    Increasing complexity and new emerging capabilities for manned and unmanned missions have been the hallmark of the past decades of space exploration. One of the drivers in this process was the ever increasing use of software and software-intensive systems to implement the system functions the needed capabilities require. The course of technological evolution suggests that this development will continue well into the future, with a number of challenges for the safety community, some of which are discussed in this paper. The current state of the art reveals a number of problems with developing and assessing safety-critical software, which explains the reluctance of the space community to rely on software-based safety measures to mitigate hazards. Among the reasons usually cited are the lack of trustworthy evidence of software integrity in all foreseeable situations and the difficulty of integrating software into the traditional safety analysis framework. Experience from other domains and recent developments in modern software development methodologies and verification techniques are analysed for their suitability for space systems, and an avionics architectural framework (see STANAG 4626) for the implementation of safety-critical software is proposed. This is shown to create, among other features, the possibility of numerous degradation modes, enhancing overall system safety and the interoperability of computerized space systems. It also potentially simplifies international cooperation on a technical level by introducing a higher degree of compatibility. As software safety cannot be tested or argued into a system in hindsight, the development process, and especially the architecture chosen, are essential to establish safety properties for the software used to implement safety functions. The core of the safety argument revolves around the separation of different functions and software modules from each other through minimal coupling of functions and credible separation mechanisms in the architecture, combined with rigorous development methodologies for the software itself.

  17. System testing of a production Ada (trademark) project: The GRODY study

    NASA Technical Reports Server (NTRS)

    Seigle, Jeffrey; Esker, Linda; Shi, Ying-Liang

    1990-01-01

    The use of the Ada language and design methodologies that utilize its features has a strong impact on all phases of the software development project lifecycle. At the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC), the Software Engineering Laboratory (SEL) conducted an experiment in parallel development of two flight dynamics systems in FORTRAN and Ada. The teams found some qualitative differences between the system test phases of the two projects. Although planning for system testing and conducting of tests were not generally affected by the use of Ada, the solving of problems found in system testing was generally facilitated by Ada constructs and design methodology. Most problems found in system testing were not due to difficulty with the language or methodology but to lack of experience with the application.

  18. Analyzing qualitative data with computer software.

    PubMed Central

    Weitzman, E A

    1999-01-01

    OBJECTIVE: To provide health services researchers with an overview of the qualitative data analysis process and the role of software within it; to provide a principled approach to choosing among software packages to support qualitative data analysis; to alert researchers to the potential benefits and limitations of such software; and to provide an overview of the developments to be expected in the field in the near future. DATA SOURCES, STUDY DESIGN, METHODS: This article does not include reports of empirical research. CONCLUSIONS: Software for qualitative data analysis can benefit the researcher in terms of speed, consistency, rigor, and access to analytic methods not available by hand. Software, however, is not a replacement for methodological training. PMID:10591282

  19. Software development for teleroentgenogram analysis

    NASA Astrophysics Data System (ADS)

    Goshkoderov, A. A.; Khlebnikov, N. A.; Obabkov, I. N.; Serkov, K. V.; Gajniyarov, I. M.; Aliev, A. A.

    2017-09-01

    A framework for the analysis and calculation of teleroentgenograms was developed. Software development was carried out in the Department of Children's Dentistry and Orthodontics at Ural State Medical University. The software calculates the teleroentgenogram by an original method developed in this medical department. The program also allows users to design their own methods for calculating teleroentgenograms. It is planned to use machine learning techniques (neural networks) in the software. This will make the process of calculating teleroentgenograms easier, because methodological points will be placed automatically.

  20. Evaluating software development characteristics: A comparison of software errors in different environments

    NASA Technical Reports Server (NTRS)

    Weiss, D. M.

    1981-01-01

    Error data obtained from two different software development environments are compared. To obtain data that was complete, accurate, and meaningful, a goal-directed data collection methodology was used. Changes made to software were monitored concurrently with its development. Similarities common to both environments include: (1) the principal error was in the design and implementation of single routines; (2) few errors were the result of changes, required more than one attempt to correct, and resulted in other errors; (3) relatively few errors took more than a day to correct.

  1. Financial constraints in capacity planning: a national utility regulatory model (NUREG). Volume III of III: software description. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1981-10-29

    This volume is the software description for the National Utility Regulatory Model (NUREG). This is the third of three volumes provided by ICF under contract number DEAC-01-79EI-10579. These three volumes are: a manual describing the NUREG methodology; a users guide; and a description of the software. This manual describes the software which has been developed for NUREG. This includes a listing of the source modules. All computer code has been written in FORTRAN.

  2. Operational excellence (six sigma) philosophy: Application to software quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lackner, M.

    1997-11-01

    This report contains viewgraphs on the operational excellence (six sigma) philosophy applied to software quality assurance. The report outlines the following: the goal of six sigma; six sigma tools; manufacturing vs. administrative processes; software quality assurance document inspections; mapping the software quality assurance requirements document; failure mode effects analysis for the requirements document; measuring the right response variables; and questions.

  3. An Operations Management System for the Space Station

    NASA Astrophysics Data System (ADS)

    Rosenthal, H. G.

    1986-09-01

    This paper presents an overview of the conceptual design of an integrated onboard Operations Management System (OMS). Both hardware and software concepts are presented and the integrated space station network is discussed. It is shown that using currently available software technology, an integrated software solution for Space Station management and control, implemented with OMS software, is feasible.

  4. Information Technology. DOD Needs to Strengthen Management of Its Statutorily Mandated Software and System Process Improvement Efforts

    DTIC Science & Technology

    2009-09-01

    NII)/CIO Assistant Secretary of Defense for Networks and Information Integration/Chief Information Officer CMMI Capability Maturity Model...a Web-based portal to share knowledge about software process-related methodologies, such as the SEI’s Capability Maturity Model Integration (CMMI)...SEI’s IDEAL(SM) model, and Lean Six Sigma. For example, the portal features content areas such as software acquisition management, the SEI CMMI

  5. The Personal Software Process(Trademark) (PSP(Trademark)) Body of Knowledge, Version 1.0

    DTIC Science & Technology

    2005-08-01

    PSP books and reports written by Watts S. Humphrey, listed in the bibliography of this...Software Process (PSP) methodology. Developed in 1993 by Watts S. Humphrey, the PSP is a disciplined and structured approach to developing software. By...engineer. The content is drawn from the work of Watts S. Humphrey over the past decade. As PSP adoption continues to grow, it is expected that the PSP

  6. Operational Suitability Guide. Volume 2. Templates

    DTIC Science & Technology

    1990-05-01

    Intended mission, and the required technical and operational characteristics. The mission must be adequately defined and key hardware and software ...operational availability. With the use of fault-tolerant computer hardware and software, the system R&M will significantly improve end-to-end...should include both hardware and software elements, as appropriate. Unique characteristics or unique support concepts should be identified if they result

  7. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
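
    The hybrid discrete/continuous style CONFIG embodies can be miniaturized: integrate continuous flows with a fixed step, and let discrete control events reconfigure the model between steps. The sketch below does this for a tank with a relief valve; it illustrates only the simulation pattern, not CONFIG itself.

        # Hybrid simulation pattern: continuous pressure dynamics plus
        # discrete reconfiguration events issued by simple control logic.

        def simulate(t_end=10.0, dt=0.01):
            pressure, valve_open, events = 100.0, False, []
            t = 0.0
            while t < t_end:
                inflow = 2.0                          # continuous dynamics
                outflow = 5.0 if valve_open else 0.0
                pressure += (inflow - outflow) * dt
                # discrete events: control software reconfigures the hardware model
                if pressure > 110.0 and not valve_open:
                    valve_open = True
                    events.append((round(t, 2), "open relief valve"))
                elif pressure < 102.0 and valve_open:
                    valve_open = False
                    events.append((round(t, 2), "close relief valve"))
                t += dt
            return pressure, events

        final_pressure, events = simulate()
        print(f"final pressure: {final_pressure:.1f}", events)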

  8. Transportable Payload Operations Control Center reusable software: Building blocks for quality ground data systems

    NASA Technical Reports Server (NTRS)

    Mahmot, Ron; Koslosky, John T.; Beach, Edward; Schwarz, Barbara

    1994-01-01

    The Mission Operations Division (MOD) at Goddard Space Flight Center builds Mission Operations Centers which are used by Flight Operations Teams to monitor and control satellites. Reducing system life cycle costs through software reuse has always been a priority of the MOD. The MOD's Transportable Payload Operations Control Center (TPOCC) development team established an extensive library of 14 subsystems with over 100,000 delivered source instructions of reusable, generic software components. To date, nine TPOCC-based control centers support 11 satellites and have achieved an average software reuse level of more than 75 percent. This paper shares experiences of how the TPOCC building blocks were developed and how building block developers, mission development teams, and users are all part of the process.

  9. Systemic Operational Design: Improving Operational Planning for the Netherlands Armed Forces

    DTIC Science & Technology

    2006-05-25

    This methodology is called Soft Systems Methodology. His methodology is a structured way of thinking in which not only a perceived problematic...Many similarities exist between Systemic Operational Design and Soft Systems Methodology; their epistemology is related. Furthermore, they both have...Systems Thinking: Managing Chaos and Complexity. Boston: Butterworth Heinemann, 1999. Checkland, Peter, and Jim Scholes. Soft Systems Methodology in

  10. Operating System Abstraction Layer (OSAL)

    NASA Technical Reports Server (NTRS)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms; it runs independently of the underlying OS and hardware and is self-contained. The OSAL removes dependencies on any one operating system and promotes portable, reusable flight software, allowing core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality, the various OSAL releases, and describes the specifications.
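
    As a rough illustration of the abstraction-layer pattern described here (a Python sketch of the idea, not NASA's actual C OSAL API), application code is written against a platform-neutral interface, and only one small factory function knows which platform it is running on:

    ```python
    import abc
    import platform
    import threading


    class OsLayer(abc.ABC):
        """Platform-neutral services that application tasks may use."""

        @abc.abstractmethod
        def create_task(self, entry_point) -> None:
            ...


    class PosixLayer(OsLayer):
        def create_task(self, entry_point) -> None:
            # On a real system this wrapper would call POSIX threads.
            threading.Thread(target=entry_point).start()


    class WindowsLayer(OsLayer):
        def create_task(self, entry_point) -> None:
            # ...and this one would call the Win32 threading API.
            threading.Thread(target=entry_point).start()


    def make_os_layer() -> OsLayer:
        # The only platform-dependent decision in the program lives here.
        return WindowsLayer() if platform.system() == "Windows" else PosixLayer()


    osal = make_os_layer()
    osal.create_task(lambda: print("flight software task running"))
    ```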

  11. Software requirements: Guidance and control software development specification

    NASA Technical Reports Server (NTRS)

    Withers, B. Edward; Rich, Don C.; Lowman, Douglas S.; Buckland, R. C.

    1990-01-01

    The software requirements for an implementation of Guidance and Control Software (GCS) are specified. The purpose of the GCS is to provide guidance and engine control to a planetary landing vehicle during its terminal descent onto a planetary surface and to communicate sensory information about that vehicle and its descent to some receiving device. The specification was developed using the structured analysis for real time system specification methodology by Hatley and Pirbhai and was based on a simulation program used to study the probability of success of the 1976 Viking Lander missions to Mars. Three versions of GCS are being generated for use in software error studies.

  12. Impact of Ada and object-oriented design in the flight dynamics division at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Waligora, Sharon; Bailey, John; Stark, Mike

    1995-01-01

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

  13. Life Cycle Assessment Software for Product and Process Sustainability Analysis

    ERIC Educational Resources Information Center

    Vervaeke, Marina

    2012-01-01

    In recent years, life cycle assessment (LCA), a methodology for assessment of environmental impacts of products and services, has become increasingly important. This methodology is applied by decision makers in industry and policy, product developers, environmental managers, and other non-LCA specialists working on environmental issues in a wide…

  14. SIMCAT 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    2006-01-01

    Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
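
    A minimal sketch of the kind of Monte Carlo study described, assuming a simple Rasch response model and a grid-based maximum-likelihood proficiency estimator (SIMCAT's actual algorithms are not reproduced here):

    ```python
    # Monte Carlo sampling distribution of an adaptive-testing ability estimate.
    import numpy as np

    rng = np.random.default_rng(42)
    GRID = np.linspace(-4, 4, 161)  # candidate theta (proficiency) values


    def rasch_p(theta, b):
        # Rasch model: probability of a correct response to an item
        # of difficulty b by an examinee of ability theta.
        return 1.0 / (1.0 + np.exp(-(theta - b)))


    def simulate_cat(true_theta, n_items=30):
        theta_hat, items, resp = 0.0, [], []
        for _ in range(n_items):
            b = theta_hat                     # adaptive: match item to estimate
            items.append(b)
            resp.append(rng.random() < rasch_p(true_theta, b))
            # Grid maximum-likelihood re-estimate after each response.
            p = rasch_p(GRID[:, None], np.array(items))
            loglik = np.where(resp, np.log(p), np.log(1 - p)).sum(axis=1)
            theta_hat = GRID[np.argmax(loglik)]
        return theta_hat


    estimates = [simulate_cat(true_theta=1.0) for _ in range(500)]
    print(f"mean={np.mean(estimates):.3f}  sd={np.std(estimates):.3f}")
    ```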

  15. Shaping ability of reciprocating motion of WaveOne and HyFlex in moderate to severe curved canals: A comparative study with cone beam computed tomography

    PubMed Central

    Simpsy, Gurram Samuel; Sajjan, Girija S.; Mudunuri, Padmaja; Chittem, Jyothi; Prasanthi, Nalam N. V. D.; Balaga, Pankaj

    2016-01-01

    Introduction: M-Wire with the reciprocating motion of WaveOne and the controlled memory (CM) wire of HyFlex were recent innovations using thermal treatment. Therefore, a study was planned to evaluate the shaping ability of the reciprocating motion of WaveOne and of HyFlex using cone beam computed tomography (CBCT). Methodology: Forty-five freshly extracted mandibular teeth were selected and stored in saline until use. All teeth were scanned pre- and post-operatively using CBCT (Kodak 9000). All teeth were accessed and divided into three groups: (1) Group 1 (control, n = 15): instrumented with ProTaper. (2) Group 2 (n = 15): instrumented with the primary (8%/25) WaveOne file. (3) Group 3 (n = 15): instrumented with (4%/25) HyFlex CM. Sections at 1, 3, and 5 mm were obtained from the pre- and post-operative scans. Measurement was done using CS3D software and Adobe Photoshop software. Apical transportation and degree of straightening were measured and statistically analyzed. Results: HyFlex showed less apical transportation than the other groups at 1 and 3 mm. WaveOne showed a lesser degree of straightening than the other groups. Conclusion: The present study concluded that all systems could be employed in routine endodontics, whereas HyFlex and WaveOne could be employed in severely curved canals. PMID:27994323
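
    CBCT studies of this kind commonly quantify apical transportation with the Gambill formula; a hedged sketch, assuming that standard definition and using invented measurements (the paper's exact measurement protocol is not reproduced):

    ```python
    # Gambill formula: transportation = |(a1 - a2) - (b1 - b2)|, where a1/b1
    # are the pre-instrumentation distances from the mesial/distal canal edge
    # to the root edge, and a2/b2 the post-instrumentation distances (mm),
    # measured on matched CBCT slices.
    def apical_transportation(a1: float, a2: float, b1: float, b2: float) -> float:
        return abs((a1 - a2) - (b1 - b2))

    # Example: one section at 1 mm from the apex (hypothetical measurements).
    print(apical_transportation(a1=0.42, a2=0.30, b1=0.55, b2=0.51))  # 0.08 mm
    ```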

  16. Visual Decision Support Tool for Supporting Asset ...

    EPA Pesticide Factsheets

    Abstract: Managing urban water infrastructures faces the challenge of jointly dealing with assets of diverse types, useful life, cost, ages and condition. Service quality and sustainability require sound long-term planning, well aligned with tactical and operational planning and management. In summary, the objective of an integrated approach to infrastructure asset management is to assist utilities in answering the following questions:
    • Who are we at present?
    • What service do we deliver?
    • What do we own?
    • Where do we want to be in the long-term?
    • How do we get there?
    The AWARE-P approach (www.aware-p.org) offers a coherent methodological framework and a valuable portfolio of software tools. It is designed to assist water supply and wastewater utility decision-makers in their analyses and planning processes. It is based on a Plan-Do-Check-Act process and is in accordance with the key principles of the International Standards Organization (ISO) 55000 standards on asset management. It is compatible with, and complementary to, WERF's SIMPLE framework. The software assists in strategic, tactical, and operational planning through a non-intrusive, web-based, collaborative environment where objectives and metrics drive IAM planning. It is aimed at industry professionals and managers, as well as at the consultants and technical experts that support them. It is easy to use and maximizes the value of information from multiple existing data sources, both in da

  17. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data which helps in optimization and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to corroborate well with the extensive experimental investigations which were found to be consistent under varying operating conditions like operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott d-index (d_Willmott = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
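
    For reference, the two agreement statistics quoted (RE and the Willmott d-index) can be computed from observed and model-predicted series as follows; this sketch assumes the standard textbook definitions, which may differ in detail from the paper's variants:

    ```python
    # RE = mean(|P - O| / O); Willmott's index of agreement
    # d = 1 - sum((O - P)^2) / sum((|P - Obar| + |O - Obar|)^2).
    import numpy as np


    def relative_error(observed, predicted):
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        return float(np.mean(np.abs(predicted - observed) / observed))


    def willmott_d(observed, predicted):
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        obar = observed.mean()
        num = np.sum((observed - predicted) ** 2)
        den = np.sum((np.abs(predicted - obar) + np.abs(observed - obar)) ** 2)
        return float(1.0 - num / den)


    obs = [10.2, 11.0, 9.8, 10.5]    # hypothetical measured values
    pred = [10.0, 11.3, 9.9, 10.4]   # corresponding model predictions
    print(relative_error(obs, pred), willmott_d(obs, pred))
    ```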

  18. Advanced engineering software for in-space assembly and manned planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Delaquil, Donald; Mah, Robert

    1990-01-01

    Meeting the objectives of the Lunar/Mars initiative to establish safe and cost-effective extraterrestrial bases requires an integrated software/hardware approach to operational definitions and systems implementation. This paper begins this process by taking a 'software-first' approach to systems design, for implementing specific mission scenarios in the domains of in-space assembly and operations of the manned Mars spacecraft. The technological barriers facing implementation of robust operational systems within these two domains are discussed, and preliminary software requirements and architectures that resolve these barriers are provided.

  19. The instrumental genesis process in future primary teachers using Dynamic Geometry Software

    NASA Astrophysics Data System (ADS)

    Ruiz-López, Natalia

    2018-05-01

    This paper, which describes a study undertaken with pairs of future primary teachers using GeoGebra software to solve geometry problems, includes a brief literature review, the theoretical framework and methodology used. An analysis of the instrumental genesis process for a pair participating in the case study is also provided. This analysis addresses the techniques and types of dragging used, the obstacles to learning encountered, a description of the interaction between the pair and their interaction with the teacher, and the type of language used. Based on this analysis, possibilities and limitations of the instrumental genesis process are identified for the development of geometric competencies such as conjecture creation, property checking and problem researching. It is also suggested that the methodology used in the analysis of the problem solving process may be useful for those teachers and researchers who want to integrate Dynamic Geometry Software (DGS) in their classrooms.

  20. Recent experience with the CQE™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, C.D.; Kehoe, D.B.; O'Connor, D.C.

    1997-12-31

    CQE (the Coal Quality Expert) is a software tool that brings a new level of sophistication to fuel decisions by seamlessly integrating the system-wide effects of fuel purchase decisions on power plant performance, emissions, and power generation costs. The CQE technology, which addresses fuel quality from the coal mine to the busbar and the stack, is an integration and improvement of predecessor software tools including: EPRI's Coal Quality Information System, EPRI's Coal Cleaning Cost Model, EPRI's Coal Quality Impact Model, and EPRI and DOE models to predict slagging and fouling. CQE can be used as a stand-alone workstation or as a network application for utilities, coal producers, and equipment manufacturers to perform detailed analyses of the impacts of coal quality, capital improvements, operational changes, and/or environmental compliance alternatives on power plant emissions, performance and production costs. It can be used as a comprehensive, precise and organized methodology for systematically evaluating all such impacts or it may be used in pieces with some default data to perform more strategic or comparative studies.

  1. Automated Liquid Microjunction Surface Sampling-HPLC-MS/MS Analysis of Drugs and Metabolites in Whole-Body Thin Tissue Sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Van Berkel, Gary J

    A fully automated liquid extraction-based surface sampling system utilizing a commercially available autosampler coupled to high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) detection is reported. Discrete spots selected for droplet-based sampling and automated sample queue generation for both the autosampler and MS were enabled by using in-house developed software. In addition, co-registration of spatially resolved sampling position and HPLC-MS information to generate heatmaps of compounds monitored for subsequent data analysis was also available in the software. The system was evaluated with whole-body thin tissue sections from a propranolol-dosed rat. The hands-free operation of the system was demonstrated by creating heatmaps of the parent drug and its hydroxypropranolol glucuronide metabolites with 1 mm resolution in the areas of interest. The sample throughput was approximately 5 min/sample, defined by the time needed for chromatographic separation. The spatial distributions of both the drug and its metabolites were consistent with previous studies employing other liquid extraction-based surface sampling methodologies.

  2. Development of Data Processing Software for NBI Spectroscopic Analysis System

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong

    2015-04-01

    A set of data processing software is presented in this paper for processing NBI spectroscopic data. For better and more scientific management and querying, these data are managed uniformly by the NBI data server. The data processing software offers functions for uploading original and analyzed beam spectral data to the data server manually and automatically, querying and downloading all the NBI data, and dealing with local LZO data. The software suite is composed of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment. The client software is developed on a VC 6.0 platform, which offers convenient human interfaces for operation. The network communications between the server and the client are based on TCP. With the help of this software suite, the NBI spectroscopic analysis system realizes unattended automatic operation, and the clear interface also makes it much more convenient to offer beam intensity distribution data and beam power data to operators for operational decision-making. supported by National Natural Science Foundation of China (No. 11075183), the Chinese Academy of Sciences Knowledge Innovation

  3. A Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Software

    NASA Astrophysics Data System (ADS)

    Oh, S. H.; Kang, Y. W.; Byun, Y. I.

    2007-12-01

    We present software developed for the multi-purpose CCD camera. This software can be used with all three types of CCD made by the KODAK Co.: KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472). For efficient CCD camera control, the software is operated as two independent processes: the CCD control program and the temperature/shutter operation program. The software is designed for fully automatic as well as manual operation under LINUX, and is controlled via the LINUX user signal procedure. We plan to use this software for an all sky survey system and also for night sky monitoring and sky observation. The measured read-out times are about 15 sec, 64 sec, and 134 sec for the KAF-0401E, KAF-1602E, and KAF-3200E, respectively, because these times are limited by the data transmission speed of the parallel port. Larger-format CCDs require higher-speed data transmission, so we are considering a version of this control software that uses the USB port for high-speed data transmission.
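
    The quoted read-out times are mutually consistent with a roughly constant, parallel-port-limited pixel rate, as a quick arithmetic check shows:

    ```python
    # Read-out time should scale with pixel count at a fixed, port-limited
    # transfer rate. Formats and times are taken from the record above.
    ccds = {
        "KAF-0401E": (768 * 512, 15.0),
        "KAF-1602E": (1536 * 1024, 64.0),
        "KAF-3200E": (2184 * 1472, 134.0),
    }
    for name, (pixels, seconds) in ccds.items():
        print(f"{name}: {pixels / seconds / 1e3:.0f} kpixel/s")
    # All three come out near ~25 kpixel/s, consistent with a port-limited link.
    ```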

  4. A Methodology for Flight-Time Identification of Helicopter-Slung Load Frequency Response Characteristics Using CIFER

    NASA Technical Reports Server (NTRS)

    Sahai, Ranjana; Pierce, Larry; Cicolani, Luigi; Tischler, Mark

    1998-01-01

    Helicopter slung load operations are common in both military and civil contexts. The slung load adds load rigid body modes, sling stretching, and load aerodynamics to the system dynamics, which can degrade system stability and handling qualities, and reduce the operating envelope of the combined system below that of the helicopter alone. Further, the effects of the load on system dynamics vary significantly among the large range of loads, slings, and flight conditions that a utility helicopter will encounter in its operating life. In this context, military helicopters and loads are often qualified for slung load operations via flight tests which can be time consuming and expensive. One way to reduce the cost and time required to carry out these tests and generate quantitative data more readily is to provide an efficient method for analysis during the flight, so that numerous test points can be evaluated in a single flight test, with evaluations performed in near real time following each test point and prior to clearing the aircraft to the next point. Methodology for this was implemented at Ames and demonstrated in slung load flight tests in 1997 and was improved for additional flight tests in 1999. The parameters of interest for the slung load tests are aircraft handling qualities parameters (bandwidth and phase delay), stability margins (gain and phase margin), and load pendulum roots (damping and natural frequency). A procedure for the identification of these parameters from frequency sweep data was defined using the CIFER software package. CIFER is a comprehensive interactive package of utilities for frequency domain analysis previously developed at Ames for aeronautical flight test applications. It has been widely used in the US on a variety of aircraft, including some primitive flight time analysis applications.
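
    A minimal sketch of extracting two of the named parameters, gain margin and phase margin, from sampled frequency-response data; the transfer function below is a hypothetical stand-in for flight-test sweep data, and this is not CIFER's implementation:

    ```python
    import numpy as np

    # Hypothetical open-loop transfer function L(s) = 4 / (s (s + 1) (s + 2)),
    # evaluated on a frequency grid to stand in for identified FRF data.
    w = np.logspace(-2, 2, 2000)
    s = 1j * w
    L = 4.0 / (s * (s + 1.0) * (s + 2.0))
    gain_db = 20 * np.log10(np.abs(L))
    phase_deg = np.unwrap(np.angle(L)) * 180 / np.pi

    # Phase margin: phase above -180 deg where |L| crosses 0 dB.
    i_gc = np.argmin(np.abs(gain_db))            # gain-crossover index
    pm = 180.0 + phase_deg[i_gc]

    # Gain margin: -|L| in dB where the phase crosses -180 deg.
    i_pc = np.argmin(np.abs(phase_deg + 180.0))  # phase-crossover index
    gm = -gain_db[i_pc]

    print(f"phase margin ~ {pm:.1f} deg, gain margin ~ {gm:.1f} dB")
    ```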

  5. Computer-Aided Sensor Development Focused on Security Issues.

    PubMed

    Bialas, Andrzej

    2016-05-26

    The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research.

  6. Computer-Aided Sensor Development Focused on Security Issues

    PubMed Central

    Bialas, Andrzej

    2016-01-01

    The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research. PMID:27240360

  7. IUS/TUG orbital operations and mission support study. Volume 4: Project planning data

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Planning data are presented for the development phases of interim upper stage (IUS) and tug systems. Major project planning requirements, major event schedules, milestones, system development and operations process networks, and relevant support research and technology requirements are included. Topics discussed include: IUS flight software; tug flight software; IUS/tug ground control center facilities, personnel, data systems, software, and equipment; IUS mission events; tug mission events; tug/spacecraft rendezvous and docking; tug/orbiter operations interface, and IUS/orbiter operations interface.

  8. Implications of the Social Web Environment for User Story Education

    ERIC Educational Resources Information Center

    Fancott, Terrill; Kamthan, Pankaj; Shahmir, Nazlie

    2012-01-01

    In recent years, user stories have emerged in academia, as well as industry, as a notable approach for expressing user requirements of interactive software systems that are developed using agile methodologies. There are social aspects inherent to software development, in general, and user stories, in particular. This paper presents directions and…

  9. Digital Methodologies of Education Governance: Pearson plc and the Remediation of Methods

    ERIC Educational Resources Information Center

    Williamson, Ben

    2016-01-01

    This article analyses the rise of software systems in education governance, focusing on digital methods in the collection, calculation and circulation of educational data. It examines how software-mediated methods intervene in the ways educational institutions and actors are seen, known and acted upon through an analysis of the methodological…

  10. Methodology for Software Reliability Prediction. Volume 2.

    DTIC Science & Technology

    1987-11-01

    The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified AD...Independent Verification/Validation - Programming Team Structure - Educational Level of Team Members - Experience Level of Team Members * Methods Used...Prediction or Estimation Parameter Supported: Software Characteristics 3. Objectives: Structured programming studies and Government Ur.'.. procurement

  11. EVALUATION OF VADOSE ZONE AND SOURCE MODELS FOR MULTI-MEDIA, MULTI-PATHWAY, MULTI-RECEPTOR RISK ASSESSMENT USING LARGE SOIL COLUMN EXPERIMENT DATA

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) is developing a comprehensive environmental exposure and risk analysis software system for agency-wide application using the methodology of a Multi-media, Multi-pathway, Multi-receptor Risk Assessment (3MRA) model. This software sys...

  12. Problem Solving Frameworks for Mathematics and Software Development

    ERIC Educational Resources Information Center

    McMaster, Kirby; Sambasivam, Samuel; Blake, Ashley

    2012-01-01

    In this research, we examine how problem solving frameworks differ between Mathematics and Software Development. Our methodology is based on the assumption that the words used frequently in a book indicate the mental framework of the author. We compared word frequencies in a sample of 139 books that discuss problem solving. The books were grouped…
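
    The underlying method reduces to counting and comparing word frequencies; a minimal sketch with invented excerpts standing in for book text:

    ```python
    # Count content words in two (hypothetical) excerpts and compare top terms.
    from collections import Counter
    import re

    math_text = "proof theorem lemma proof induction theorem"
    dev_text = "requirements design test debug requirements iterate"

    def top_words(text, n=3):
        return Counter(re.findall(r"[a-z]+", text.lower())).most_common(n)

    print("math:", top_words(math_text))
    print("dev:", top_words(dev_text))
    ```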

  13. Online Tutoring and Emotional Labour in the Private Sector

    ERIC Educational Resources Information Center

    Webb, Sue

    2012-01-01

    Purpose: What happens when computer software is designed to replace the teacher and the human role is to service the relationship between the software and the learner? Specifically, this paper aims to consider whether or not emotional labour is performed in contexts mediated by technology in the private sector. Design/methodology/approach: The…

  14. The Effect of Multimedia Writing Support Software on Written Productivity

    ERIC Educational Resources Information Center

    Racicot, Rose

    2016-01-01

    The purpose of this study was to explore the effects of multimedia writing support software on the quality and quantity of writing productivity and self-perception for students who have mild to moderate developmental delays. Participants in this study included 22 special education students in grades kindergarten through 6. Methodology included a…

  15. Development of Usability Criteria for E-Learning Content Development Software

    ERIC Educational Resources Information Center

    Celik, Serkan

    2012-01-01

    Revolutionary advancements have been observed in e-learning technologies though an amalgamated evaluation methodology for new generation e-learning content development tools is not available. The evaluation of educational software for online use must consider its usability and as well as its pedagogic effectiveness. This study is a first step…

  16. PBL-SEE: An Authentic Assessment Model for PBL-Based Software Engineering Education

    ERIC Educational Resources Information Center

    dos Santos, Simone C.

    2017-01-01

    The problem-based learning (PBL) approach has been successfully applied to teaching software engineering thanks to its principles of group work, learning by solving real problems, and learning environments that match the market realities. However, the lack of well-defined methodologies and processes for implementing the PBL approach represents a…

  17. How In-Service Teachers Develop Electronic Lessons

    ERIC Educational Resources Information Center

    Zsoldos-Marchis, Iuliana

    2014-01-01

    Computer assisted teaching (CAL) is considered to be a modern teaching method, but it is not widely used by teachers because of a lack of technology and adequate educational software in schools, or a lack of teachers' knowledge of methodology and computer use. In order to select the most efficient educational software for their class, teachers should…

  18. Section 508 Electronic Information Accessibility Requirements for Software Development

    NASA Technical Reports Server (NTRS)

    Ellis, Rebecca

    2014-01-01

    Section 508 Subpart B 1194.21 outlines requirements for operating system and software development in order to create a product that is accessible to users with various disabilities. This portion of Section 508 contains a variety of standards to enable those using assistive technology and with visual, hearing, cognitive and motor difficulties to access all information provided in software. The focus on requirements was limited to the Microsoft Windows® operating system as it is the predominant operating system used at this center. Compliance with this portion of the requirements can be obtained by integrating the requirements into the software development cycle early and by remediating issues in legacy software if possible. There are certain circumstances with software that may arise necessitating an exemption from these requirements, such as design or engineering software using dynamically changing graphics or numbers to convey information. These exceptions can be discussed with the Section 508 Coordinator and another method of accommodation used.

  19. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions Using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
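
    A hedged sketch of the general technique described (not the EFT-1 implementation): a logistic-regression classifier is fit pre-flight on labeled simulated entry states, then evaluated in real time on noisy sensed values. The features, units, noise levels, and deploy box below are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical pre-flight Monte Carlo training set: planet-relative
    # velocity (ft/s) and altitude (kft); label = 1 when the true state lies
    # inside an acceptable drogue-deploy box (bounds invented for illustration).
    vel = rng.uniform(300.0, 900.0, n)
    alt_kft = rng.uniform(15.0, 45.0, n)
    label = ((vel < 600.0) & (alt_kft < 30.0)).astype(int)
    vel_meas = vel + rng.normal(0.0, 40.0, n)      # inaccurate sensed velocity
    alt_meas = alt_kft + rng.normal(0.0, 2.0, n)   # inaccurate sensed altitude

    X = np.column_stack([vel_meas, alt_meas])
    clf = LogisticRegression(max_iter=1000).fit(X, label)

    # In-flight use: map the current noisy measurement to a deploy probability.
    print(clf.predict_proba([[550.0, 24.0]])[0, 1])
    ```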

  20. An Alternative Flight Software Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly; Gay, Robert; Stachowiak, Susan

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  1. HRVanalysis: A Free Software for Analyzing Cardiac Autonomic Activity

    PubMed Central

    Pichot, Vincent; Roche, Frédéric; Celle, Sébastien; Barthélémy, Jean-Claude; Chouchou, Florian

    2016-01-01

    Since the pioneering studies of the 1960s, heart rate variability (HRV) has become an increasingly used non-invasive tool for examining cardiac autonomic functions and dysfunctions in various populations and conditions. Many calculation methods have been developed to address these issues, each with their strengths and weaknesses. Although its interpretation may remain difficult, this technique provides, through a non-invasive approach, reliable physiological information that was previously inaccessible, in many fields including death and health prediction, training and overtraining, cardiac and respiratory rehabilitation, sleep-disordered breathing, large cohort follow-ups, children's autonomic status, anesthesia, and neurophysiological studies. In this context, we developed HRVanalysis, a software package for analyzing HRV, used and improved for over 20 years and thus designed to meet laboratory requirements. The main strength of HRVanalysis is its wide application scope. In addition to standard analysis over short and long periods of RR intervals, the software allows time-frequency analysis using the wavelet transform, as well as analysis of autonomic nervous system status around scored events and on preselected labeled areas. Moreover, the interface is designed for easy study of large cohorts, including batch-mode signal processing to avoid running repetitive operations. Results are displayed as figures or saved in TXT files directly employable in statistical software. Recordings can arise from RR or EKG files of different types, such as cardiofrequencemeters, Holter EKGs, polygraphs, and data acquisition systems. HRVanalysis can be downloaded freely from the Web page at: https://anslabtools.univ-st-etienne.fr. HRVanalysis is meticulously maintained and developed for in-house laboratory use. In this article, after a brief description of the context, we present an overall view of HRV analysis and we describe the methodological approach of the different techniques provided by the software. PMID:27920726
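
    For orientation, two of the standard time-domain HRV indices such a package computes can be expressed in a few lines; this sketch is illustrative and is not HRVanalysis code:

    ```python
    import numpy as np

    rr_ms = np.array([812, 845, 790, 830, 860, 805, 795, 850])  # hypothetical RR series

    sdnn = rr_ms.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat (vagal) index
    print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
    ```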

  2. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    NASA Technical Reports Server (NTRS)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter. In order to increase overall robustness, the vehicle also has an alternate method of triggering the drogue parachute deployment based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this velocity-based trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers excellent performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  3. Analyzing and designing object-oriented missile simulations with concurrency

    NASA Astrophysics Data System (ADS)

    Randorf, Jeffrey Allen

    2000-11-01

    A software object model for the six degree-of-freedom missile modeling domain is presented. As a precursor, a domain analysis of the missile modeling domain was performed, based on the Feature-Oriented Domain Analysis (FODA) technique described by the Software Engineering Institute (SEI). It was subsequently determined that the FODA methodology is functionally equivalent to the Object Modeling Technique (OMT). The analysis used legacy software documentation and code from the ENDOSIM, KDEC, and TFrames 6-DOF modeling tools, as well as other technical literature. The SEI Object Connection Architecture (OCA) was the template for designing the object model. Three variants of the OCA were considered: a reference structure, a recursive structure, and a reference structure with augmentation for flight vehicle modeling. The reference OCA design option was chosen to maintain simplicity without compromising the expressive power of the OMT model. The missile architecture was then analyzed for potential areas of concurrent computing. It was shown how protected objects could be used for data passing between OCA object managers, allowing concurrent access without changing the OCA reference design intent or structure. The implementation language was the 1995 release of Ada, and OCA software components were shown to be expressible as Ada child packages. While acceleration of several low-level and higher-level operations is possible on proper hardware, there was a 33% degradation in the performance of a 4th-order Runge-Kutta integration of two simultaneous ordinary differential equations when using Ada tasking on a single-processor machine. The Defense Department's High Level Architecture (HLA) was introduced and explained in context with the OCA. It was shown that the HLA and OCA are not mutually exclusive but complementary architectures. HLA was shown as an interoperability solution, with the OCA as an architectural vehicle for software reuse. Further directions for implementing a 6-DOF missile modeling environment are discussed.
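
    The benchmark mentioned, fourth-order Runge-Kutta integration of two simultaneous ordinary differential equations, has this shape (a Python sketch of the numerical method itself, not the Ada implementation):

    ```python
    import numpy as np

    def rk4_step(f, t, y, h):
        # One classical 4th-order Runge-Kutta step of size h.
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Example system: an undamped oscillator as two first-order equations.
    f = lambda t, y: np.array([y[1], -y[0]])

    t, y, h = 0.0, np.array([1.0, 0.0]), 0.01
    for _ in range(628):            # integrate to t ~ 2*pi
        y = rk4_step(f, t, y, h)
        t += h
    print(y)                        # should be close to [1, 0]
    ```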

  4. The comparison of the use of holonic and agent-based methods in modelling of manufacturing systems

    NASA Astrophysics Data System (ADS)

    Foit, K.; Banaś, W.; Gwiazda, A.; Hryniewicz, P.

    2017-08-01

    The rapid evolution in the field of industrial automation and manufacturing is often called the Fourth Industrial Revolution. Worldwide availability of internet access contributes to competition between manufacturers and gives opportunities for buying materials and parts and for creating partnership networks, like cloud manufacturing, grid manufacturing (MGrid), virtual enterprises, etc. The effect of this evolution is the need to search for new solutions in the field of manufacturing systems modelling and simulation. During the last decade researchers have developed the agent-based approach to modelling. This methodology has been taken from computer science but was adapted to the philosophy of industrial automation and robotization. The operation of an agent-based system depends on the simultaneous action of different agents that may have different roles. On the other hand, there is the holon-based approach, which uses structures created by holons. It differs from the agent-based structure in some aspects, while others are quite similar in both methodologies. The aim of this paper is to present both methodologies and discuss their similarities and differences. This may help in selecting the optimal method of modelling, according to the considered problem and software resources.

  5. Execution of a self-directed risk assessment methodology to address HIPAA data security requirements

    NASA Astrophysics Data System (ADS)

    Coleman, Johnathan

    2003-05-01

    This paper analyzes the method and training of a self-directed risk assessment methodology entitled OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) at over 170 DOD medical treatment facilities. It focuses specifically on how OCTAVE built interdisciplinary, inter-hierarchical consensus and enhanced local capabilities to perform Health Information Assurance. The Risk Assessment Methodology was developed by the Software Engineering Institute at Carnegie Mellon University as part of the Defense Health Information Assurance Program (DHIAP). The basis for its success is the combination of analysis of organizational practices and technological vulnerabilities. Together, these areas address the core implications behind the HIPAA Security Rule and can be used to develop Organizational Protection Strategies and Technological Mitigation Plans. A key component of OCTAVE is the inter-disciplinary composition of the analysis team (Patient Administration, IT staff and Clinician). It is this unique composition of analysis team members, along with organizational and technical analysis of business practices, assets and threats, which enables facilities to create sound and effective security policies. The Risk Assessment is conducted in-house, and therefore the process, results and knowledge remain within the organization, helping to build consensus in an environment of differing organizational and disciplinary perspectives on Health Information Assurance.

  6. Hardware and software reliability estimation using simulations

    NASA Technical Reports Server (NTRS)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and in improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proven using some simple programs and simple hardware models.
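
    The final combination step, turning per-component failure probabilities into an overall probability of system failure, is simple arithmetic if hardware and software are treated as independent elements in series; a hedged sketch with invented numbers:

    ```python
    # Series system: either a hardware or a software failure fails the unit;
    # independence of the two failure modes is assumed for illustration.
    def system_failure(p_hw: float, p_sw: float) -> float:
        return 1.0 - (1.0 - p_hw) * (1.0 - p_sw)

    print(system_failure(p_hw=1e-4, p_sw=5e-4))   # ~6.0e-4 per demand
    ```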

  7. Quality measures and assurance for AI (Artificial Intelligence) software

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1988-01-01

    This report is concerned with the application of software quality and evaluation measures to AI software and, more broadly, with the question of quality assurance for AI software. Considered are not only the metrics that attempt to measure some aspect of software quality, but also the methodologies and techniques (such as systematic testing) that attempt to improve some dimension of quality, without necessarily quantifying the extent of the improvement. The report is divided into three parts Part 1 reviews existing software quality measures, i.e., those that have been developed for, and applied to, conventional software. Part 2 considers the characteristics of AI software, the applicability and potential utility of measures and techniques identified in the first part, and reviews those few methods developed specifically for AI software. Part 3 presents an assessment and recommendations for the further exploration of this important area.

  8. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS of the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  9. The design and evaluation of an antimicrobial resistance surveillance system for neonatal intensive care units in Iran.

    PubMed

    Rezaei-Hachesu, Peyman; Samad-Soltani, Taha; Yaghoubi, Sajad; GhaziSaeedi, Marjan; Mirnia, Kayvan; Masoumi-Asl, Hossein; Safdari, Reza

    2018-07-01

    Neonatal intensive care units (NICUs) have complex patients in terms of their diagnoses and required treatments. Antimicrobial treatment is a common therapy for patients in NICUs. To solve problems pertaining to empirical therapy, antimicrobial stewardship programs have recently been introduced. Despite the success of these programs in terms of data collection, there is still inefficiency in analyzing and reporting the data. Thus, to successfully implement these stewardship programs, the design of antimicrobial resistance (AMR) surveillance systems is recommended as a first step. As a result, this study aimed to design an AMR surveillance system for use in the NICUs of northwestern Iranian hospitals to cover these information gaps. The recommended system is compatible with the World Health Organization (WHO) guidelines. The business intelligence (BI) requirements were extracted in an interview with a product owner (PO) using a valid and reliable checklist. Following this, an AMR surveillance system was designed and evaluated in relation to user experiences via a user experience questionnaire (UEQ). Finally, an association analysis was performed on the database, and the results were reported by identifying the important multidrug resistances in the database. A customized software development methodology was proposed. The three major modules of the AMR surveillance system are the data registry, dashboard, and decision support modules. The data registry module was implemented based on a three-tier architecture, and the Clinical Decision Support System (CDSS) and dashboard modules were designed based on the BI requirements of the Scrum product owner (PO). The mean values of the UEQ measures were in a good range, showing the suitable usability of the AMR surveillance system. Applying efficient software development methodologies allows for the systems' compatibility with users' opinions and requirements. In addition, the construction of interdisciplinary communication models for research and software engineering allows for research and development concepts to be used in operational environments. Copyright © 2018 Elsevier B.V. All rights reserved.
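
    A minimal sketch of the association-analysis step, computing support and confidence for one co-resistance rule over invented isolate records (the paper's actual rule-mining procedure and data are not reproduced):

    ```python
    # Each record is the set of antibiotics an isolate is resistant to.
    isolates = [
        {"ampicillin", "gentamicin"},
        {"ampicillin", "gentamicin", "cefotaxime"},
        {"ampicillin"},
        {"gentamicin", "cefotaxime"},
        {"ampicillin", "gentamicin"},
    ]

    def support(itemset):
        # Fraction of isolates containing every item in the itemset.
        return sum(itemset <= rec for rec in isolates) / len(isolates)

    # Rule: resistance to ampicillin -> resistance to gentamicin.
    antecedent, both = {"ampicillin"}, {"ampicillin", "gentamicin"}
    print(f"support={support(both):.2f}, "
          f"confidence={support(both) / support(antecedent):.2f}")
    ```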

  10. A prototype computerized synthesis methodology for generic space access vehicle (SAV) conceptual design

    NASA Astrophysics Data System (ADS)

    Huang, Xiao

    2006-04-01

    Today's and especially tomorrow's competitive launch vehicle design environment requires the development of a dedicated generic Space Access Vehicle (SAV) design methodology. A total of 115 industrial, research, and academic aircraft, helicopter, missile, and launch vehicle design synthesis methodologies have been evaluated. As the survey indicates, each synthesis methodology tends to focus on a specific flight vehicle configuration, thus precluding the key capability to systematically compare flight vehicle design alternatives. The aim of the research investigation is to provide decision-making bodies and the practicing engineer a design process and tool box for robust modeling and simulation of flight vehicles where the ultimate performance characteristics may hinge on numerical subtleties. This will enable the designer of a SAV for the first time to consistently compare different classes of SAV configurations on an impartial basis. This dissertation presents the development steps required towards a generic (configuration independent) hands-on flight vehicle conceptual design synthesis methodology. This process is developed such that it can be applied to any flight vehicle class if desired. In the present context, the methodology has been put into operation for the conceptual design of a tourist Space Access Vehicle. The case study illustrates elements of the design methodology & algorithm for the class of Horizontal Takeoff and Horizontal Landing (HTHL) SAVs. The HTHL SAV design application clearly outlines how the conceptual design process can be centrally organized, executed and documented with focus on design transparency, physical understanding and the capability to reproduce results. This approach offers the project lead and creative design team a management process and tool which iteratively refines the individual design logic chosen, leading to mature design methods and algorithms. As illustrated, the HTHL SAV hands-on design methodology offers growth potential in that the same methodology can be continually updated and extended to other SAV configuration concepts, such as the Vertical Takeoff and Vertical Landing (VTVL) SAV class. Having developed, validated and calibrated the methodology for HTHL designs in the 'hands-on' mode, the report provides an outlook how the methodology will be integrated into a prototype computerized design synthesis software AVDS-PrADOSAV in a follow-on step.

  11. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    As every software artifact, also software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches for detecting also composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is the reuse of specifications available for executing composite operations also for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366
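
    A toy sketch of the a posteriori idea: scan a list of detected atomic operations for a delete/add pair that plausibly forms one composite rename. The operation encoding and matching heuristic are invented for illustration:

    ```python
    # Atomic operations as (kind, qualified feature name, feature type).
    atomic_ops = [
        ("delete", "Customer.surname", "str"),
        ("add", "Customer.lastName", "str"),
        ("add", "Customer.email", "str"),
    ]

    def owner_of(qualified_name):
        return qualified_name.rsplit(".", 1)[0]

    def detect_renames(ops):
        # Index deleted features by (owning class, type), then pair each added
        # feature with a matching delete -- a crude stand-in for real matching.
        deletes = {(owner_of(n), t): n for kind, n, t in ops if kind == "delete"}
        renames = []
        for kind, name, typ in ops:
            key = (owner_of(name), typ)
            if kind == "add" and key in deletes:
                renames.append(("rename", deletes.pop(key), name))
        return renames

    print(detect_renames(atomic_ops))
    # -> [('rename', 'Customer.surname', 'Customer.lastName')]
    ```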

  12. Search and retrieval of office files using dBASE 3

    NASA Technical Reports Server (NTRS)

    Breazeale, W. L.; Talley, C. R.

    1986-01-01

    Described is a method of automating the office files retrieval process using a commercially available software package (dBASE III). The resulting product is a menu-driven computer program which requires no computer skills to operate. One part of the document is written for the potential user who has minimal computer experience and uses sample menu screens to explain the program, while a second part is oriented towards the computer-literate individual and includes rather detailed descriptions of the methodology and search routines. Although many of the programming techniques are explained, this document is not intended to be a tutorial on dBASE III. It is hoped that the document will serve as a stimulus for other applications of dBASE III.

  13. Integrated Main Propulsion System Performance Reconstruction Process/Models

    NASA Technical Reports Server (NTRS)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  14. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.

  15. Modeling contamination migration on the Chandra X-Ray Observatory

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Swartz, Douglas A.; Anderson, Scot K.; Chen, Kenny C.; Giordano, Rino J.; Knollenberg, Perry J.; Morris, Peter A.; Plucinsky, Paul P.; Tice, Neil W.; Tran, Hien

    2005-01-01

    During its first 5 years of operation, the cold (-60 C) optical blocking filter of the Advanced CCD Imaging Spectrometer (ACIS), on board the Chandra X-ray Observatory, has accumulated a contaminating layer that attenuates the low-energy x rays. To assist in assessing the likelihood of successfully baking off the contaminant, members of the Chandra Team developed contamination-migration simulation software. The simulation follows deposition onto and (temperature-dependent) vaporization from surfaces comprising a geometrical model of the Observatory. A separate thermal analysis, augmented by on-board temperature monitoring, provides temperatures for each surface of the same geometrical model. This paper describes the physical basis for the simulations, the methodologies, and the predicted migration of the contaminant for various bake-out scenarios and assumptions.

  16. Prediction of Regulation Reserve Requirements in California ISO Control Area based on BAAL Standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etingov, Pavel V.; Makarov, Yuri V.; Samaan, Nader A.

    This paper presents new methodologies developed at Pacific Northwest National Laboratory (PNNL) to estimate regulation capacity requirements in the California ISO control area. Two approaches have been developed: (1) an approach based on statistical analysis of actual historical area control error (ACE) and regulation data, and (2) an approach based on the balancing authority ACE limit (BAAL) control performance standard. The approaches predict regulation reserve requirements on a day-ahead basis, including upward and downward requirements, for each operating hour of a day. California ISO data has been used to test the performance of the proposed algorithms. Results show that the software tool allows saving up to 30% on regulation procurement costs.
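
    A hedged sketch of the statistical approach described: hour-of-day regulation requirements taken as high and low percentiles of historical regulation deployment, with synthetic data standing in for California ISO records:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    hours = rng.integers(0, 24, n)     # operating hour of each historical sample
    # Synthetic regulation deployment (MW), more volatile at some hours.
    reg_mw = rng.normal(0, 50 + 20 * np.sin(hours / 24 * 2 * np.pi), n)

    for hour in (6, 18):               # e.g., morning vs evening
        h = reg_mw[hours == hour]
        up = np.percentile(h, 97.5)    # upward requirement, MW
        down = np.percentile(h, 2.5)   # downward requirement, MW
        print(f"hour {hour:02d}: up {up:6.1f} MW, down {down:6.1f} MW")
    ```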

  17. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 1: Project summary

    NASA Technical Reports Server (NTRS)

    Mckee, James W.

    1990-01-01

    This volume (1 of 4) summarizes the original AMPS software system configuration, points out some of the problem areas in the original software design that this project addresses, and collects all the bimonthly status reports in the appendix. The purpose of AMPS is to provide a self-reliant system to control the generation and distribution of power in the space station. The software in the AMPS breadboard can be divided into three levels: the operating environment software, the protocol software, and the station-specific software. This project deals only with the operating environment software and the protocol software. The existing station-specific software will not change except as necessary to conform to new data formats.

  18. C-Based Design Methodology and Topological Change for an Indian Agricultural Tractor Component

    NASA Astrophysics Data System (ADS)

    Matta, Anil Kumar; Raju, D. Ranga; Suman, K. N. S.; Kranthi, A. S.

    2018-06-01

    The failure of tractor components and their replacement has become very common in India because of recycling, resale, and duplication. To overcome the problem of failure, we propose a design methodology for topological change based on co-simulation with software tools. In the proposed design methodology, the designer checks P_axial, P_cr, P_failure, and τ by hand calculations, from which refined topological changes of the R.S. Arm are derived. Several techniques employed in the component are explained for reducing and removing rib material to shift the center of gravity and centroid, using SystemC for mixed-level simulation and faster topological changes. The SystemC design process can be compiled and executed with the TURBO C7 software. The modified component is developed in Pro/E and analyzed in ANSYS. The topologically changed component, with a 120 × 4.75 × 32.5 mm slot at the center, showed greater effectiveness than the original component.
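
    As an illustration of the kind of hand calculation the abstract mentions, a minimal Euler buckling check for P_cr is sketched below; the material properties, dimensions, and end-condition factor are assumed, not the actual R.S. Arm values.

      import math

      # Euler critical buckling load for a rectangular cross-section.
      # All inputs are hypothetical, chosen only to illustrate the check.

      E = 200e9              # Young's modulus of steel [Pa], assumed
      b, h = 0.032, 0.00475  # cross-section width and thickness [m], assumed
      L = 0.30               # effective column length [m], assumed
      K = 1.0                # end-condition factor (pinned-pinned), assumed

      I = b * h**3 / 12.0                     # second moment of area
      P_cr = math.pi**2 * E * I / (K * L)**2  # Euler critical load

      print(f"I = {I:.3e} m^4, P_cr = {P_cr / 1000:.1f} kN")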

  19. Usability: Human Research Program - Space Human Factors and Habitability

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Holden, Kritina L.

    2009-01-01

    The Usability project addresses the need for research on the metrics and methodologies used in hardware and software usability testing, in order to define quantifiable and verifiable usability requirements. A usability test is a human-in-the-loop evaluation in which a participant works through a realistic set of representative tasks using the hardware or software under investigation. The purpose of this research is to define metrics and methodologies for measuring and verifying usability in the aerospace domain, in accordance with the FY09 focus on errors, consistency, and mobility/maneuverability. Usability metrics must be predictive of success with the interfaces, must be easy to obtain and/or calculate, and must meet the intent of the current Human Systems Integration Requirements (HSIR). Methodologies must work within the constraints of the aerospace domain, be cost- and time-efficient, and be applicable without extensive specialized training.
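
    A minimal sketch of quantifiable, verifiable usability metrics of the kind described (the data, field names, and thresholds are hypothetical): task success rate, mean errors per task, and mean time on task, checked against an assumed requirement.

      from dataclasses import dataclass

      # Summarize usability-test sessions into verifiable metrics.

      @dataclass
      class TaskResult:
          completed: bool
          errors: int
          seconds: float

      def summarize(results):
          n = len(results)
          success_rate = sum(r.completed for r in results) / n
          mean_errors = sum(r.errors for r in results) / n
          mean_time = sum(r.seconds for r in results) / n
          return success_rate, mean_errors, mean_time

      results = [TaskResult(True, 0, 41.0), TaskResult(True, 2, 73.5),
                 TaskResult(False, 3, 120.0), TaskResult(True, 1, 55.2)]
      rate, errs, secs = summarize(results)
      # Hypothetical verifiable requirement: >= 90% success, <= 1 error/task.
      print(f"success {rate:.0%}, errors/task {errs:.2f}, time {secs:.1f} s")
      print("requirement met" if rate >= 0.9 and errs <= 1.0
            else "requirement not met")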

  20. Specializing architectures for the type 2 diabetes mellitus care use cases with a focus on process management.

    PubMed

    Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Ruiz, Alonso A

    2015-01-01

    The development of software supporting interdisciplinary systems such as type 2 diabetes mellitus care requires methodologies designed for this type of interoperability. The GCM framework allows the architectural description of such systems and the development of software solutions based on it. The first step of the GCM methodology is the definition of a generic architecture, followed by its specialization for specific use cases. This paper describes the specialization of the generic architecture of a system supporting type 2 diabetes mellitus glycemic control for a pharmacotherapy use case. It focuses on the behavioral aspect of the system, i.e., the policy domain and the definition of the rules governing the system. The design of this architecture reflects the interdisciplinary character of the methodology. Finally, the resulting architecture allows building adaptive, intelligent, and complete systems.
