Sample records for reliable software technologies

  1. Technical Concept Document. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-02-28

    February 1994 INFORMAL TECHNICAL REPORT For The SOFTWARE TECHNOLOGY FOR ADAPTABLE, RELIABLE SYSTEMS (STARS) Technical Concept Document Central Archive for...accordance with the DFARS Special Works Clause Developed by: This document, developed under the Software Technology for Adaptable, Reliable Systems

  2. Application of Artificial Intelligence technology to the analysis and synthesis of reliable software systems

    NASA Technical Reports Server (NTRS)

    Wild, Christian; Eckhardt, Dave

    1987-01-01

    The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtably involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.

  3. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
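
    To make the parametric-versus-distribution-free contrast concrete, here is a minimal sketch (ours, not from the report; all failure times are invented) comparing an exponential-model reliability estimate with a purely empirical one:

    ```python
    # Hedged sketch (not from the report): parametric vs. distribution-free
    # reliability estimates from invented inter-failure times.
    import numpy as np

    failure_times = np.array([12.0, 30.0, 55.0, 71.0, 103.0, 140.0])  # cumulative hours
    interfailure = np.diff(np.insert(failure_times, 0, 0.0))

    # Parametric route: assume exponential inter-failure times, MLE of the rate.
    lam = 1.0 / interfailure.mean()
    t = 50.0
    r_parametric = np.exp(-lam * t)        # P(no failure in the next 50 h)

    # Distribution-free route: empirical survivor function, no model assumed.
    r_empirical = (interfailure > t).mean()

    print(f"parametric R({t:.0f} h) = {r_parametric:.3f}")
    print(f"empirical  R({t:.0f} h) = {r_empirical:.3f}")
    ```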

  4. Data systems and computer science: Software Engineering Program

    NASA Technical Reports Server (NTRS)

    Zygielbaum, Arthur I.

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. This review is specifically concerned with the Software Engineering Program. The goals of the Software Engineering Program are as follows: (1) improve NASA's ability to manage development, operation, and maintenance of complex software systems; (2) decrease NASA's cost and risk in engineering complex software systems; and (3) provide technology to assure safety and reliability of software in mission critical applications.

  5. Software IV and V Research Priorities and Applied Program Accomplishments Within NASA

    NASA Technical Reports Server (NTRS)

    Blazy, Louis J.

    2000-01-01

    The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance, early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and that requirements are met or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: ensuring the safety of NASA missions, ensuring requirements are met, minimizing programmatic and technological risks of software development and operations, improving software quality, reducing costs and time to delivery, and improving the science of software engineering.

  6. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., object-oriented) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. There is a great need to bring the new software-component models of the last decade into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would add great value in assessing GSFC projects.
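
    SMERFS'3 itself is not reproduced here, but one model family such tools implement is the Goel-Okumoto NHPP with mean value function m(t) = a(1 - e^(-bt)). A hedged sketch, with invented failure times, fitting it by maximum likelihood:

    ```python
    # Hedged sketch (not SMERFS'3 itself): maximum-likelihood fit of the
    # Goel-Okumoto NHPP, m(t) = a * (1 - exp(-b t)), to invented failure times.
    import numpy as np
    from scipy.optimize import minimize

    times = np.array([8.0, 21.0, 40.0, 66.0, 95.0, 130.0, 178.0])  # cumulative hours
    T = 200.0                                                      # observation window

    def neg_log_lik(params):
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf
        # NHPP log-likelihood: sum(log lambda(t_i)) - m(T), lambda(t) = a*b*exp(-b*t)
        return -(np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T)))

    fit = minimize(neg_log_lik, x0=[10.0, 0.01], method="Nelder-Mead")
    a_hat, b_hat = fit.x
    print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.4f}")
    print(f"expected remaining faults = {a_hat * np.exp(-b_hat * T):.2f}")
    ```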

  7. Improving the Effectiveness of Program Managers

    DTIC Science & Technology

    2006-05-03

    Improving the Effectiveness of Program Managers Systems and Software Technology Conference Salt Lake City, Utah May 3, 2006 Presented by GAO’s...Companies’ best practices Motorola Caterpillar Toyota FedEx NCR Teradata Boeing Hughes Space and Communications Disciplined software and management...and total ownership costs Collection of metrics data to improve software reliability Technology readiness levels and design maturity Statistical

  8. Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)

    NASA Technical Reports Server (NTRS)

    Benson, Markland

    2008-01-01

    The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.

  9. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principle goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall scheme to engine controller development is further improved and vehicle safety is further insured. The final product that this paper proposes is an approach to development of an alternative low cost engine controller that would be capable of performing in unique vision spacecraft vehicles requiring low cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.

  10. Assessing Survivability Using Software Fault Injection

    DTIC Science & Technology

    2001-04-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010875 TITLE: Assessing Survivability Using Software Fault Injection...Assessing Survivability Using Software Fault Injection...Jeffrey Voas, Reliable Software Technologies, 21351 Ridgetop Circle, #400, Dulles, VA 20166, jmvoas@rstcorp.com...Abstract approved sources have the

  11. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to the software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrology evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing, and statistics with the support of the powerful computing capability of the PC. Another concern is evaluation of software features such as the correctness, reliability, stability, security, and real-time behavior of VIs. Technologies from the software engineering, software testing, and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing, and modeling approaches can be used to evaluate the reliability of modules, components, applications, and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing, and system benchmarking, a framework for such an automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing, and security assessment demonstrates the feasibility of the proposed automatic framework.
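
    As a hedged illustration of the simulation-based uncertainty evaluation the abstract describes, the sketch below propagates invented input uncertainties through a simple measurement model (P = V^2/R) by Monte Carlo; the model and all numbers are our assumptions, not the paper's:

    ```python
    # Hedged sketch: Monte Carlo evaluation of measurement uncertainty.
    # The measurement model P = V**2 / R and all uncertainties are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    V = rng.normal(10.0, 0.02, N)   # volts, standard uncertainty 0.02 V
    R = rng.normal(50.0, 0.10, N)   # ohms, standard uncertainty 0.10 ohm

    P = V**2 / R                    # propagate samples through the model
    lo, hi = np.percentile(P, [2.5, 97.5])
    print(f"P = {P.mean():.4f} W, u(P) = {P.std(ddof=1):.4f} W")
    print(f"95% coverage interval: [{lo:.4f}, {hi:.4f}] W")
    ```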

  12. NoSQL Data Store Technologies

    DTIC Science & Technology

    2014-09-01

    NoSQL Data Store Technologies John Klein, Software Engineering Institute; Patrick Donohoe, Software Engineering Institute; Neil Ernst...distribute data 4. Data Replication – determines how a NoSQL database facilitates reliable, high-performance data replication to build

  13. Seamless transitions from early prototypes to mature operational software - A technology that enables the process for planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda S.; Wunderlich, Dana A.; Willoughby, John K.

    1992-01-01

    New and innovative software technology is presented that provides a cost effective bridge for smoothly transitioning prototype software, in the field of planning and scheduling, into an operational environment. Specifically, this technology mixes the flexibility and human design efficiency of dynamic data typing with the rigor and run-time efficiencies of static data typing. This new technology provides a very valuable tool for conducting the extensive, up-front system prototyping that leads to specifying the correct system and producing a reliable, efficient version that will be operationally effective and will be accepted by the intended users.

  14. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  15. Software Technology for Adaptable, Reliable Systems (STARS)

    DTIC Science & Technology

    1994-03-25

    Timeline(3), SECOMO(3), SEER(3), GSFC Software Engineering Lab Model(1), SLIM(4), SEER-SEM(1), SPQR(2), PRICE-S(2), internally-developed models(3), APMSS(1)...Timeline - 3; SASET (Software Architecture Sizing Estimating Tool) - 2; MicroMan II - 2; LCM (Logistics Cost Model) - 2; SPQR - 2; PRICE-S - 2

  16. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented Restful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
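
    A minimal sketch of the discovery-then-query pattern the abstract combines (zeroconf discovery followed by a RESTful status call) is given below. The service type "_testset._tcp.local." and the /status endpoint are hypothetical, and the third-party python-zeroconf and requests packages are assumed:

    ```python
    # Hedged sketch of zeroconf discovery followed by a RESTful query.
    # Service type and endpoint path are hypothetical; assumes the
    # third-party python-zeroconf and requests packages.
    import socket
    import time

    import requests
    from zeroconf import ServiceBrowser, Zeroconf

    found = []

    class Listener:
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info and info.addresses:
                found.append((socket.inet_ntoa(info.addresses[0]), info.port))

        def remove_service(self, zc, type_, name):
            pass

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    ServiceBrowser(zc, "_testset._tcp.local.", Listener())
    time.sleep(3)     # allow mDNS responses to arrive
    zc.close()

    for host, port in found:
        reply = requests.get(f"http://{host}:{port}/status", timeout=5)
        print(host, port, reply.status_code)   # body would carry ATML-style XML
    ```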

  17. Overview of Probabilistic Methods for SAE G-11 Meeting for Reliability and Uncertainty Quantification for DoD TACOM Initiative with SAE G-11 Division

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting during October 6-8 at the Best Western Sterling Inn, Sterling Heights (Detroit), Michigan is co-sponsored by US Army Tank-automotive & Armaments Command (TACOM). The meeting will provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members including members with national/international standing, the mission of the G-11's Probabilistic Methods Committee is to "enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development."

  18. Assuring Software Reliability

    DTIC Science & Technology

    2014-08-01

    technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner. 1.3 Security Example...that took three high-voltage lines out of service and a software failure (a race condition) that disabled the computing service that notified the...service had failed. Instead of analyzing the details of the alarm server failure, the reviewers asked why the following software assurance claim had

  19. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.

  20. Error-Free Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultrareliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  1. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
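
    As a hedged illustration of condition-based test generation in the spirit of BRO testing (the constraint choices below are ours, not Tai's exact minimized sets), each relational outcome {<, =, >} is combined with each Boolean value of the second operand:

    ```python
    # Hedged illustration of condition-based test generation in the spirit of
    # BRO testing; constraints are illustrative, not the published minimized sets.
    from itertools import product

    def condition(a, b, c):
        return a < b and c

    cases = []
    for rel, flag in product(("<", "=", ">"), (True, False)):
        a = 1
        b = {"<": 2, "=": 1, ">": 0}[rel]   # force each relational outcome
        cases.append(((a, b, flag), condition(a, b, flag)))

    # A mutation of "<" to "<=" or of "and" to "or" flips at least one verdict.
    for inputs, verdict in cases:
        print(inputs, "->", verdict)
    ```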

  2. Software Cuts Homebuilding Costs, Increases Energy Efficiency

    NASA Technical Reports Server (NTRS)

    2015-01-01

    To sort out the best combinations of technologies for a crewed mission to Mars, NASA Headquarters awarded grants to MIT's Department of Aeronautics and Astronautics to develop an algorithm-based software tool that highlights the most reliable and cost-effective options. Utilizing the software, Professor Edward Crawley founded Cambridge, Massachusetts-based Ekotrope, which helps homebuilders choose cost- and energy-efficient floor plans and materials.

  3. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.

  4. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.

  5. Inter- and Intrarater Reliability Using Different Software Versions of E4D Compare in Dental Education.

    PubMed

    Callan, Richard S; Cooper, Jeril R; Young, Nancy B; Mollica, Anthony G; Furness, Alan R; Looney, Stephen W

    2015-06-01

    The problems associated with intra- and interexaminer reliability when assessing preclinical performance continue to hinder dental educators' ability to provide accurate and meaningful feedback to students. Many studies have been conducted to evaluate the validity of utilizing various technologies to assist educators in achieving that goal. The purpose of this study was to compare two different versions of E4D Compare software to determine if either could be expected to deliver consistent and reliable comparative results, independent of the individual utilizing the technology. Five faculty members obtained E4D digital images of students' attempts (sample model) at ideal gold crown preparations for tooth #30 performed on typodont teeth. These images were compared to an ideal (master model) preparation utilizing two versions of E4D Compare software. The percent correlations between and within these faculty members were recorded and averaged. The intraclass correlation coefficient was used to measure both inter- and intrarater agreement among the examiners. The study found that using the older version of E4D Compare did not result in acceptable intra- or interrater agreement among the examiners. However, the newer version of E4D Compare, when combined with the Nevo scanner, resulted in a remarkable degree of agreement both between and within the examiners. These results suggest that consistent and reliable results can be expected when utilizing this technology under the protocol described in this study.
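
    The intraclass correlation coefficient used in the study can be computed directly from a score matrix. A minimal sketch with invented scores, using ICC(2,1) in the Shrout-Fleiss two-way random, single-rater form (our assumption about the variant intended):

    ```python
    # Hedged sketch: ICC(2,1), two-way random effects, single rater
    # (Shrout & Fleiss), computed from an invented 6x5 score matrix.
    import numpy as np

    def icc_2_1(x):
        n, k = x.shape
        grand = x.mean()
        row = x.mean(axis=1)                 # per-preparation means
        col = x.mean(axis=0)                 # per-rater means
        msr = k * np.sum((row - grand) ** 2) / (n - 1)          # targets MS
        msc = n * np.sum((col - grand) ** 2) / (k - 1)          # raters MS
        resid = x - row[:, None] - col[None, :] + grand
        mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # error MS
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    scores = np.array([[78, 75, 80, 77, 79],
                       [62, 60, 65, 63, 61],
                       [90, 88, 91, 89, 92],
                       [55, 58, 54, 56, 57],
                       [83, 80, 85, 82, 84],
                       [70, 72, 69, 71, 70]], dtype=float)
    print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
    ```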

  6. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
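
    A minimal sketch of the UGF idea underlying the model: each multi-state component is a set of (probability, performance) pairs, and a composition operator combines them. Here a series composition (system performance = minimum of component performances), with all numbers invented:

    ```python
    # Hedged sketch of the UGF idea: components as (probability, performance)
    # state lists, composed for a series structure (performance = minimum).
    # All probabilities and performance levels are invented.
    from itertools import product

    def compose_series(*components):
        states = {}
        for combo in product(*components):
            p = 1.0
            for prob, _ in combo:
                p *= prob
            g = min(perf for _, perf in combo)
            states[g] = states.get(g, 0.0) + p
        return sorted(states.items())

    c1 = [(0.9, 100), (0.1, 0)]              # sensor node: up or down
    c2 = [(0.7, 100), (0.2, 60), (0.1, 0)]   # relay node: full, degraded, down

    system = compose_series(c1, c2)
    for perf, prob in system:
        print(f"performance {perf:3d}: probability {prob:.3f}")
    print("R(demand=60) =", sum(p for g, p in system if g >= 60))
    ```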

  7. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes a new Web-based failure database software for orthopaedic implants. The software is based on the browser/server (B/S) model; ASP dynamic web technology is used as its main development language to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend and upgrade. In this article, the design and development ideas behind the software, the software working process and functions, as well as relevant technical features, are presented. With this software, many different types of fault events of orthopaedic implants can be stored and the failure data can be statistically analyzed; at the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and it can ultimately guide doctors in improving the level of clinical treatment.

  8. Overview of Future of Probabilistic Methods and RMSL Technology and the Probabilistic Methods Education Initiative for the US Army at the SAE G-11 Meeting

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL Division and Probabilistic Methods Committee meeting sponsored by the Picatinny Arsenal during March 1-3, 2004 at the Westin Morristown will report progress on projects for probabilistic assessment of Army systems and launch an initiative for probabilistic education. The meeting features several Army and industry senior executives and an Ivy League professor to provide an industry/government/academia forum to review RMSL technology; reliability and probabilistic technology; reliability-based design methods; software reliability; and maintainability standards. With over 100 members, including members of national/international standing, the mission of the G-11's Probabilistic Methods Committee is to enable/facilitate rapid deployment of probabilistic technology to enhance the competitiveness of our industries by better, faster, greener, smarter, affordable and reliable product development.

  9. Use of Soft Computing Technologies For Rocket Engine Control

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Olcmen, Semih; Polites, Michael

    2003-01-01

    The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to further improve overall engine system reliability and performance. Specifically, this will be presented by enhancing rocket engine control and engine health management (EHM) using SCT coupled with conventional control technologies, and sound software engineering practices used in Marshall's Flight Software Group. The principal goals are to improve software management, software development time and maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control and EHM methodologies, but to provide alternative design choices for control, EHM, implementation, performance, and sustaining engineering. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion, software engineering for embedded systems, and soft computing technologies (i.e., neural networks, fuzzy logic, and Bayesian belief networks), much of which is presented in this paper. The first targeted demonstration rocket engine platform is the MC-1 (formerly FASTRAC) engine, which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory at NASA's Marshall Space Flight Center.

  10. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer-Aided Software Engineering (CASE) technology for environments such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, a framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and for methodologies such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.

  11. A Statistical Testing Approach for Quantifying Software Reliability; Application to an Example System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok

    The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
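
    The arithmetic behind zero-failure statistical testing of this general kind is compact: if n tests drawn from the operational profile all succeed, then requiring (1 - p)^n <= alpha gives a (1 - alpha) upper confidence bound p_u = 1 - alpha^(1/n) on the per-demand failure probability. A sketch of that bound (ours, not the BNL implementation):

    ```python
    # Hedged sketch of zero-failure statistical testing arithmetic (ours, not
    # the BNL implementation): n failure-free tests drawn from the operational
    # profile give upper bound p_u = 1 - alpha**(1/n) at confidence 1 - alpha.
    import math

    def upper_bound(n_tests: int, confidence: float = 0.95) -> float:
        alpha = 1.0 - confidence
        return 1.0 - alpha ** (1.0 / n_tests)

    def tests_needed(p_target: float, confidence: float = 0.95) -> int:
        # smallest n with (1 - p_target)**n <= alpha
        alpha = 1.0 - confidence
        return math.ceil(math.log(alpha) / math.log(1.0 - p_target))

    print(f"{upper_bound(3000):.2e}")    # ~1e-3 per demand at 95% confidence
    print(tests_needed(1e-4))            # ~30,000 failure-free tests required
    ```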

  12. Component Verification and Certification in NASA Missions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Penix, John; Norvig, Peter (Technical Monitor)

    2001-01-01

    Software development for NASA missions is a particularly challenging task. Missions are extremely ambitious scientifically, have very strict time frames, and must be accomplished with a maximum degree of reliability. Verification technologies must therefore be pushed far beyond their current capabilities. Moreover, reuse and adaptation of software architectures and components must be incorporated in software development within and across missions. This paper discusses NASA applications that we are currently investigating from these perspectives.

  13. Software Technology for Adaptable Reliable Systems (STARS) Workshop Held at the Naval Research Laboratory, Washington, DC on April 9-12 1985

    DTIC Science & Technology

    1985-01-01

    [Table-of-contents fragment] PREFACE...REUSE, Dr. Bruce A. Burton and Mr. Michael D. Broido...REUSABLE COMPONENT DEFINITION (A TUTORIAL)...Michael R. Miller, Hans L. Hiabereder, and L.O. Keeler...REUSABLE SOFTWARE IN SIMULATION APPLICATIONS

  14. Final Report to the National Energy Technology Laboratory on FY09-FY13 Cooperative Research with the Consortium for Electric Reliability Technology Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vittal, Vijay

    2015-11-04

    The Consortium for Electric Reliability Technology Solutions (CERTS) was formed in 1999 in response to a call from U.S. Congress to restart a federal transmission reliability R&D program to address concerns about the reliability of the U.S. electric power grid. CERTS is a partnership between industry, universities, national laboratories, and government agencies. It researches, develops, and disseminates new methods, tools, and technologies to protect and enhance the reliability of the U.S. electric power system and the efficiency of competitive electricity markets. It is funded by the U.S. Department of Energy's Office of Electricity Delivery and Energy Reliability (OE). This report provides an overview of PSERC and CERTS, the overall objectives and scope of the research, a summary of the major research accomplishments, highlights of the work done under the various elements of the NETL cooperative agreement, and brief reports written by the PSERC researchers on their accomplishments, including research results, publications, and software tools.

  15. Microgrid Design Analysis Using Technology Management Optimization and the Performance Reliability Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.

    Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: * Mike Hightower, who has been the key driving force for Energy Surety Microgrids * Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations * Merrill Smith, U.S. Department of Energy SPIDERS Program Manager * Ross Roley and Rich Trundy from U.S. Pacific Command * Bill Waugaman and Bill Beary from U.S. Northern Command * Tarek Abdallah, Melanie Johnson, and Harold Sanborn of the U.S. Army Corps of Engineers Construction Engineering Research Laboratory * Colleagues from Sandia National Laboratories (SNL) for their reviews, suggestions, and participation in the work.

  16. Architecture for Survivable System Processing (ASSP)

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.

    1991-11-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and on applying new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  17. Architecture for Survivable System Processing (ASSP)

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1991-01-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and on applying new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  18. Enhancing E-Health Information Systems with Agent Technology

    PubMed Central

    Nguyen, Minh Tuan; Fuhrer, Patrik; Pasquier-Rocha, Jacques

    2009-01-01

    Agent technology is an emerging and promising research area in software technology, which increasingly contributes to the development of value-added information systems for large healthcare organizations. Through the MediMAS prototype, resulting from a case study conducted at a local Swiss hospital, this paper aims at presenting the advantages of reinforcing such a complex e-health man-machine information organization with software agents. The latter work on behalf of human agents, taking care of routine tasks, and thus increasing the speed, consistency, and ultimately the reliability of the information exchanges. We further claim that the modeling of the software agent layer can be methodically derived from the actual “classical” laboratory organization and practices, as well as seamlessly integrated with the existing information system. PMID:19096509

  19. Rail-CR : railroad cognitive radio.

    DOT National Transportation Integrated Search

    2012-12-01

    Robust, reliable, and interoperable wireless communication devices or technologies are vital to the success of positive train control (PTC) systems. Accordingly, the railway industry has started adopting software-defined radios (SDRs) for packet-data...

  20. A Benefit Analysis of Infusing Wireless into Aircraft and Fleet Operations - Report to Seedling Project Efficient Reconfigurable Cockpit Design and Fleet Operations Using Software Intensive, Network Enabled, Wireless Architecture (ECON)

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Holmes, Bruce J.; Hahn, Andrew S.

    2016-01-01

    We report on an examination of potential benefits of infusing wireless technologies into various areas of aircraft and airspace operations. The analysis is done in support of a NASA seedling project, Efficient Reconfigurable Cockpit Design and Fleet Operations Using Software Intensive, Network Enabled Wireless Architecture (ECON). The study has two objectives. First, we investigate one of the main benefit hypotheses of the ECON proposal: that the replacement of wired technologies with wireless would lead to significant weight reductions on an aircraft, among other benefits. Second, we advance a list of wireless technology applications and discuss their system benefits. With regard to the primary hypothesis, we conclude that the promise of weight reduction is premature. Specificity of the system domain and aircraft, criticality of components, reliability of wireless technologies, the weight of replacement or augmentation equipment, and the cost of infusion must all be taken into account, among other considerations, to produce a reliable estimate of weight savings or increase.

  1. Computer-assisted design of flux-cored wires

    NASA Astrophysics Data System (ADS)

    Dubtsov, Yu N.; Zorin, I. V.; Sokolov, G. N.; Antonov, A. A.; Artem'ev, A. A.; Lysak, V. I.

    2017-02-01

    The algorithm and a description of the AlMe-WireLaB software for the computer-assisted design of flux-cored wires are introduced. The software functionality is illustrated with the selection of the components for a flux-cored wire, ensuring that the deposited metal belongs to the Fe-Cr-C-Mo-Ni-Ti-B system. It is demonstrated that the developed software enables a technologically reliable flux-cored wire to be designed for surfacing, resulting in a deposited metal of the specified composition.

  2. Software Technology for Adaptable, Reliable Systems (STARS). Software Architecture Seminar Report: Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-01-29

    other processes, but that he arrived at his results in a different manner. Batory didn’t start with idioms; he performed a domain analysis and...abstracted idioms. Through domain analysis and domain modeling, new idioms can be found and the form of architecture can be the same. It was also questioned...Programming 5. Consensus Definition of Architecture 6. Inductive Analysis of Current Exemplars 7. VHDL (Bailor) 8. Ontological Structuring

  3. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  4. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  5. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
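
    The test-bed program itself is not reproduced here. As a hedged stand-in, the sketch below shows the weakest-link arithmetic such reliability codes perform over a finite element stress field, using a simple two-parameter Weibull model in place of the interactive William-Warnke formulation; all stresses, volumes, and parameters are invented:

    ```python
    # Hedged stand-in: weakest-link component reliability from finite element
    # results, with a simple Weibull model replacing the interactive criterion.
    import numpy as np

    sigma = np.array([180.0, 220.0, 150.0, 240.0, 200.0])   # element stresses, MPa
    volume = np.array([1.2, 0.8, 1.5, 0.6, 1.0])            # element volumes, cm^3

    sigma0, m = 300.0, 10.0   # Weibull scale and modulus (assumed)
    ps_elements = np.exp(-volume * (sigma / sigma0) ** m)   # per-element survival
    ps_component = ps_elements.prod()                       # series (weakest link)
    print(f"component probability of failure = {1.0 - ps_component:.4f}")
    ```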

  6. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
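
    A worked example (ours, not CARE 3 output) of the kind of stochastic comparison such tools automate: simplex versus triple-modular-redundancy reliability for exponentially failing modules with an assumed perfect voter, where R_TMR = 3R^2 - 2R^3:

    ```python
    # Worked example (ours, not CARE 3 output): simplex vs. triple-modular
    # redundancy with a perfect voter and exponentially failing modules.
    import numpy as np

    lam = 1e-4                                # module failure rate per hour (assumed)
    t = np.array([100.0, 1000.0, 10000.0])    # mission times, hours

    r = np.exp(-lam * t)                      # single-module reliability
    r_tmr = 3 * r**2 - 2 * r**3               # 2-of-3 majority voting

    for ti, rs, rt in zip(t, r, r_tmr):
        print(f"t = {ti:7.0f} h   simplex = {rs:.6f}   TMR = {rt:.6f}")
    ```

    Note that TMR only helps while single-module reliability stays above 0.5; at the longest mission time in this sketch it is worse than simplex, which is exactly the kind of trade such assessment tools expose.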

  7. A Review of Software Maintenance Technology.

    DTIC Science & Technology

    1980-02-01

    [Table fragment: vendors and tools including Honeywell, Burroughs, IBM, and Boole & Babbage]...f. Maintenance Experience (1) Multiple Implementation: Charles Holmes (Source 2) described two attempts at McDonnell...a proprietary software monitor package distributed by Boole and Babbage, Inc., Sunnyvale, California. It has been implemented on IBM computers and is language

  8. A Computer in Your Lap.

    ERIC Educational Resources Information Center

    Byers, Joseph W.

    1991-01-01

    The most useful feature of laptop computers is portability, as one elementary school principal notes. IBM and Apple are not leaders in laptop technology. Tandy and Toshiba market relatively inexpensive models offering durability, reliable software, and sufficient memory space. (MLH)

  9. Highly Survivable Avionics Systems for Long-Term Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Alkalai, L.; Chau, S.; Tai, A. T.

    2001-01-01

    The design of highly survivable avionics systems for long-term (>10 years) exploration of space is an essential technology for all current and future missions in the Outer Planets roadmap. Long-term exposure to extreme environmental conditions such as high radiation and low temperatures makes survivability in space a major challenge. Moreover, current and future missions are increasingly using commercial technology such as deep sub-micron (0.25 micron) fabrication processes with specialized circuit designs, commercial interfaces, processors, memory, and other commercial off-the-shelf components that were not designed for long-term survivability in space. Therefore, the design of highly reliable and available systems for the exploration of Europa, Pluto, and other destinations in deep space requires a comprehensive and fresh approach to this problem. This paper summarizes work in progress in three different areas: a framework for the design of highly reliable and highly available space avionics systems, a distributed reliable computing architecture, and Guarded Software Upgrading (GSU) techniques for software upgrading during long-term missions. Additional information is contained in the original extended abstract.

  10. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
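
    As a hedged toy example of the failed-sensor detection software the abstract mentions, the sketch below flags samples whose residual against an independently derived estimate exceeds a fixed multiple of the sensor noise; the signal, injected fault, and threshold are all invented:

    ```python
    # Hedged toy example of a failed-sensor screen: flag samples whose residual
    # against an independent estimate exceeds 4 sigma. Everything is invented.
    import numpy as np

    rng = np.random.default_rng(1)
    truth = np.sin(np.linspace(0.0, 6.0, 200))       # e.g., model-based estimate
    sensor = truth + rng.normal(0.0, 0.02, 200)      # measured signal
    sensor[120:] += 0.5                              # injected bias fault

    residual = sensor - truth
    fault = np.abs(residual) > 4 * 0.02              # 4-sigma threshold
    print("first flagged sample:", int(np.argmax(fault)))
    ```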

  11. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  12. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  13. Overview of the SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division Activities and Technical Projects

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identifying and fulfilling joint industry, government, and academia needs for the development and implementation of RMSL technologies. Four projects in the probabilistic methods area and two in the RMSL area have been identified (a sketch of the reliability computation in item 3 follows this record): (1) Evaluation of Probabilistic Technology: progress has been made toward the selection of probabilistic application cases; future effort will focus on assessing multiple probabilistic software packages by solving the selected engineering problems with probabilistic methods. Relevance to industry and government: case studies of typical problems involving uncertainty, results from running these problems through different codes, and recommendations on which code is applicable to which problems. (2) Probabilistic Input Preparation: progress has been made in identifying problem cases with no data, little data, and sufficient data; future effort will focus on developing guidelines for preparing probabilistic-analysis input, especially with little or no data. Relevance: analysts too often assume that a large amount of data is needed before uncertainties can be quantified; this is not true, and there are ways to do credible probabilistic analysis with little data. (3) Probabilistic Reliability: a literature search on probabilistic reliability, and on what differentiates it from statistical reliability, has been completed; work on computing reliability from quantified uncertainties in primitive variables is in progress. Relevance: correct reliability computations at both the component and system level are needed so that an item can be designed for its expected usage and life span. (4) Real-World Applications of Probabilistic Methods (PM): a draft of Volume 1, comprising aerospace applications, has been released; Volume 2, a compilation of real-world applications with essential information demonstrating application type and time/cost savings from probabilistic methods for generic applications, is in progress. Relevance: with help from many contributors the division hopes to produce a document that lets the results speak for themselves; because many applications are proprietary, contributors are asked to document only minimal information, including the problem description, the method used, and whether and how much it saved. (5) Software Reliability: the software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance: software reliability is a complex issue that must be understood and addressed in all facets of industry, government, and other institutions; the project addresses issues, concepts, implementation approaches, and guidelines for maximizing software reliability. (6) Maintainability Standards: maintainability/serviceability industry standards and guidelines, together with the industry best practices and methodologies used in performing maintainability/serviceability tasks, are being documented. Relevance: any industry or government process, project, or tool must be maintained and serviced to realize the life and performance it was designed for; the project addresses issues and develops guidelines for optimum performance and life.
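
    As a hedged sketch of item (3), computing reliability from quantified uncertainties in primitive variables can be illustrated with a stress-strength Monte Carlo estimate; the distributions and values below are invented for illustration:

        import random

        random.seed(1)

        def reliability_stress_strength(n=200_000):
            """Monte Carlo estimate of reliability R = P(strength > stress),
            with assumed distributions on the primitive variables."""
            survivals = 0
            for _ in range(n):
                stress = random.gauss(300.0, 30.0)    # applied load, MPa
                strength = random.gauss(400.0, 40.0)  # material strength, MPa
                survivals += strength > stress
            return survivals / n

        print(f"estimated reliability: {reliability_stress_strength():.4f}")

    For these assumed normal distributions the estimate can be checked in closed form: R = Phi(100 / sqrt(30^2 + 40^2)) = Phi(2.0), approximately 0.977.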

  14. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2005-01-01

    A mission-systems architecture based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  15. Sustainable, Reliable Mission-Systems Architecture

    NASA Technical Reports Server (NTRS)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  16. Reliability improvement methods for sapphire fiber temperature sensors

    NASA Astrophysics Data System (ADS)

    Schietinger, C.; Adams, B.

    1991-08-01

    Mechanical, optical, electrical, and software design improvements can be brought to bear to enhance the reliability of fiber-optic sapphire-fiber temperature measurement tools in harsh environments. The optical fiber thermometry (OFT) equipment discussed is used in numerous process industries and generally involves a sapphire sensor, an optical transmission cable, and a microprocessor-based signal analyzer. OFT technology incorporating sensors for corrosive environments, hybrid sensors, and two-wavelength measurements is discussed.

  17. Advanced telemetry systems for payloads. Technology needs, objectives and issues

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The current trends in advanced payload telemetry are the new developments in advanced modulation/coding, the applications of intelligent techniques, data distribution processing, and advanced signal processing methodologies. Concerted efforts will be required to design ultra-reliable man-rated software to cope with these applications. The intelligence embedded and distributed throughout various segments of the telemetry system will need to be overridden by an operator in life-threatening situations, making this a real-time integration issue. Suitable MIL standards on physical interfaces and protocols will be adopted to suit the payload telemetry system. New technologies and techniques will be developed for fast retrieval of mass data. These technology issues are currently being addressed to provide more efficient, reliable, and reconfigurable systems. There is a need, however, to change the operational culture: NASA's current role as the leader in developing all new innovative hardware should be altered to save both time and money, using hardware and software already developed by industry and existing standards rather than inventing our own.

  18. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  19. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Robinson, William H.; Rech, Paolo

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. Finally, we describe the development process and report neutron test data for the hardware and software benchmarks.

  20. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. Finally, we describe the development process and report neutron test data for the hardware and software benchmarks.

  1. Software Technology for Adaptable, Reliable Systems (STARS). Repository Integration AdaKNET Software User’s Manual

    DTIC Science & Technology

    1990-10-03

    Fragments recovered from the report's table of contents and body: 4.1. Mapping the Conceptual Model to the Implementation; 4.2. Overview of ...; browser-editor application. Finally, appendix A provides a detailed description of the AdaKNET conceptual model; users of AdaKNET should fami... provide a brief summary of the semantics of the underlying conceptual model implemented by AdaKNET; use of the AdaKNET ADT will require a more thorough ...

  2. Integrating High-Reliability Principles to Transform Access and Throughput by Creating a Centralized Operations Center.

    PubMed

    Davenport, Paul B; Carter, Kimberly F; Echternach, Jeffrey M; Tuck, Christopher R

    2018-02-01

    High-reliability organizations (HROs) demonstrate unique and consistent characteristics, including operational sensitivity and control, situational awareness, hyperacute use of technology and data, and actionable process transformation. System complexity and reliance on information-based processes challenge healthcare organizations to replicate HRO processes. This article describes a healthcare organization's 3-year journey to achieve key HRO features to deliver high-quality, patient-centric care via an operations center powered by the principles of high-reliability data and software to impact patient throughput and flow.

  3. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.
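
    The paper's verification procedures are not reproduced here; as one hedged illustration of on-line software assurance, a simple runtime monitor can check that an adaptive law's tracking error is actually converging (a toy check, not a certification method):

        from collections import deque

        class ConvergenceMonitor:
            """Flags an adaptive controller whose tracking error stops
            decreasing; illustrative only."""
            def __init__(self, window=50, min_decay=0.99):
                self.errors = deque(maxlen=window)
                self.min_decay = min_decay

            def update(self, tracking_error):
                self.errors.append(abs(tracking_error))
                if len(self.errors) < self.errors.maxlen:
                    return "collecting"
                half = self.errors.maxlen // 2
                older = sum(list(self.errors)[:half]) / half
                newer = sum(list(self.errors)[half:]) / half
                return "converging" if newer <= self.min_decay * older else "alert"

        monitor = ConvergenceMonitor()
        for k in range(200):
            status = monitor.update(1.0 * 0.97 ** k)  # simulated decaying error
        print(f"final status: {status}")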

  4. Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Gil, Joon-Min

    2015-03-01

    The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has focused a great deal of attention on efficient remote control of manufacturing processes. SDN is a well-known paradigm for network softwarization that facilitates remote manufacturing with high network performance: it controls network paths and traffic flows, improving quality of service by obtaining network requests from end applications on demand through a separate SDN controller, or control plane. However, current SDN approaches generally focus on network control and automation, leaving a gap in management-plane development for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantages of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault tolerance of SDN, in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for SDN failover processes, based on four principal SDN breakdown scenarios derived from failures of the controller, SDN nodes, and connected links (a sketch of the controller-failure case follows this record). These failures significantly reduce network reachability to remote devices (e.g., 3D printers and super-high-definition cameras) and the reliability of the associated control processes. Our performance analysis shows that the proposed scheme reduces the operations and management overheads of SDN, enhancing its responsiveness and reliability for remote 3D printing and control processes.
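
    The SD-NOC orchestration is described only at the architecture level; the sketch below illustrates just the controller-failure scenario, with hypothetical names, using a heartbeat timeout to trigger failover to a standby controller:

        import time

        class ControllerFailover:
            """Heartbeat-based failover between a primary and a standby SDN
            controller; a hypothetical sketch of one breakdown scenario."""
            def __init__(self, controllers, timeout=3.0):
                self.controllers = controllers  # ordered by priority
                self.active = controllers[0]
                self.timeout = timeout
                self.last_heartbeat = time.monotonic()

            def heartbeat(self, name):
                if name == self.active:
                    self.last_heartbeat = time.monotonic()

            def check(self):
                """Promote the next standby if the active controller is silent."""
                if time.monotonic() - self.last_heartbeat > self.timeout:
                    failed = self.active
                    self.active = next(c for c in self.controllers if c != failed)
                    self.last_heartbeat = time.monotonic()
                    print(f"failover: {failed} -> {self.active}")
                return self.active

        fo = ControllerFailover(["ctrl-primary", "ctrl-standby"], timeout=0.1)
        time.sleep(0.2)  # primary misses its heartbeat window
        fo.check()       # promotes ctrl-standby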

  5. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
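
    The GCS simulator itself is not shown in this summary; as a stand-in, the classic Jelinski-Moranda assumption (failure rate proportional to the number of faults remaining) generates the kind of inter-failure data such experiments analyze:

        import random

        random.seed(42)

        def simulate_failure_times(n_faults=30, phi=0.05):
            """Jelinski-Moranda-style simulation: the failure rate is
            proportional to remaining faults, and each failure removes one."""
            t, times = 0.0, []
            for remaining in range(n_faults, 0, -1):
                t += random.expovariate(phi * remaining)  # exponential gap
                times.append(t)
            return times

        times = simulate_failure_times()
        print(f"first failure at t={times[0]:.1f}, last at t={times[-1]:.1f}")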

  6. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    Prouty, Dale A.; Klahr, Philip

    1988-01-01

    A workstation is being developed that provides a computational environment for all NASA engineers across application boundaries, automates reuse of existing NASA software and designs, and efficiently and effectively allows new programs and/or designs to be developed, catalogued, and reused. The generic workstation is made domain-specific by specializing the user interface, capturing engineering design expertise for the domain, and constructing and using a library of pertinent information. The incorporation of software reusability principles and expert system technology into this workstation provides the obvious benefits of increased productivity, improved software use and design reliability, and enhanced engineering quality by bringing engineering to higher levels of abstraction based on a well-tested and classified library.

  7. The growing need for microservices in bioinformatics.

    PubMed

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework which can adapt to changing requirements; the aim here is to present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and implementation of reliable and innovative software, made possible in a highly collaborative setting.

  8. The growing need for microservices in bioinformatics

    PubMed Central

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective methodology for the fabrication and implementation of reliable and innovative software, made possible in a highly collaborative setting. PMID:27994937
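
    As a minimal illustration of the microservice idea (a hypothetical service built only on Python's standard library rather than a production framework), a single narrowly scoped service might look like this:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import json

        class VariantCountService(BaseHTTPRequestHandler):
            """Single-purpose service: count lines in a posted variant list.
            Hypothetical example; a real pipeline would wrap an actual tool."""
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                body = self.rfile.read(length).decode()
                result = {"n_variants":
                          sum(1 for line in body.splitlines() if line.strip())}
                payload = json.dumps(result).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(payload)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), VariantCountService).serve_forever()

    Each such service can be versioned, deployed, and replaced independently, which is the maintenance benefit the authors describe.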

  9. Application of Nexus copy number software for CNV detection and analysis.

    PubMed

    Darvishi, Katayoon

    2010-04-01

    Among human structural genomic variation, copy number variants (CNVs) are the most frequently known component, consisting of gains or losses of DNA segments that are generally 1 kb in length or longer. Array-based comparative genomic hybridization (aCGH) has emerged as a powerful tool for detecting CNVs. With the rapid increase in array density and the adoption of new high-throughput technology, a reliable and computationally scalable method for accurate mapping of recurring DNA copy number aberrations has become a main focus in research. Here we introduce Nexus Copy Number software, a platform-independent tool, to analyze the output files of all types of commercial and custom-made comparative genomic hybridization (CGH) and single-nucleotide polymorphism (SNP) arrays, such as those manufactured by Affymetrix, Agilent Technologies, Illumina, and Roche NimbleGen. It also supports data generated by various array image-analysis software tools such as GenePix, ImaGene, and BlueFuse. (c) 2010 by John Wiley & Sons, Inc.

  10. Identification of New Potential Scientific and Technology Areas for DoD Application. Summary of Activities

    DTIC Science & Technology

    1986-07-31

    Fragments recovered from the report: the designer will be able to more rapidly assemble a total software package from perfected modules that can be easily debugged or replaced with more ... antinuclear interactions; e. gravitational effects of antimatter; 2. possible machine parameters and lattice design; 3. electron and stochastic cooling needs; 4 ... implementation, reliability requirements; development of design environments and of experimental methodology; technology transfer methods from ...

  11. Getting Past the "Digital Divide"

    ERIC Educational Resources Information Center

    McCollum, Sean

    2011-01-01

    As most educators know, there is a lot more to addressing the so-called "digital divide" than having enough working machines in classrooms. Effective information technology (IT) in schools requires useful software, reliable and speedy Internet access, effective teacher training, and well-considered goals with transformative outcomes. Educators who…

  12. Advanced information processing system: Hosting of advanced guidance, navigation and control algorithms on AIPS using ASTER

    NASA Technical Reports Server (NTRS)

    Brenner, Richard; Lala, Jaynarayan H.; Nagle, Gail A.; Schor, Andrei; Turkovich, John

    1994-01-01

    This program demonstrated the integration of a number of technologies that can increase the availability and reliability of launch vehicles while lowering costs. Availability is increased with an advanced guidance algorithm that adapts trajectories in real-time. Reliability is increased with fault-tolerant computers and communication protocols. Costs are reduced by automatically generating code and documentation. This program was realized through the cooperative efforts of academia, industry, and government. The NASA-LaRC coordinated the effort, while Draper performed the integration. Georgia Institute of Technology supplied a weak Hamiltonian finite element method for optimal control problems. Martin Marietta used MATLAB to apply this method to a launch vehicle (FENOC). Draper supplied the fault-tolerant computing and software automation technology. The fault-tolerant technology includes sequential and parallel fault-tolerant processors (FTP & FTPP) and authentication protocols (AP) for communication. Fault-tolerant technology was incrementally incorporated. Development culminated with a heterogeneous network of workstations and fault-tolerant computers using AP. Draper's software automation system, ASTER, was used to specify a static guidance system based on FENOC, navigation, flight control (GN&C), models, and the interface to a user interface for mission control. ASTER generated Ada code for GN&C and C code for models. An algebraic transform engine (ATE) was developed to automatically translate MATLAB scripts into ASTER.

  13. Computer sciences

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  14. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  15. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements while also providing the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though numerous computationally demanding programs would benefit significantly from it. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use, giving significant advantages over less powerful non-parallel entries in the market. A sketch of this serial-interface design point follows.
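
    DSSLIB itself is proprietary; its design point, parallel execution hidden behind a serial call signature, can be sketched with a hypothetical routine that parallelizes internally while callers see an ordinary function:

        from concurrent.futures import ProcessPoolExecutor

        def _partial_dot(pair):
            xs, ys = pair
            return sum(a * b for a, b in zip(xs, ys))

        def dot(x, y, _workers=4):
            """Serial-looking library call: no threads, locks, or messages
            leak into the caller's code; the work is chunked internally."""
            chunk = (len(x) + _workers - 1) // _workers
            slices = [(x[i:i + chunk], y[i:i + chunk])
                      for i in range(0, len(x), chunk)]
            with ProcessPoolExecutor(max_workers=_workers) as pool:
                return sum(pool.map(_partial_dot, slices))

        if __name__ == "__main__":
            v = list(range(10_000))
            print(dot(v, v))  # same answer as a serial loop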

  16. Analysis of linear measurements on 3D surface models using CBCT data segmentation obtained by automatic standard pre-set thresholds in two segmentation software programs: an in vitro study.

    PubMed

    Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer

    2016-01-01

    The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at a 0.3-mm voxel size in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability was found for the markers, physical measurements, and 3D surface models (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plans, and prognosis in a more realistic way.
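
    The paper reports intra-class correlation coefficients; as a rough illustration of how such a reliability statistic is computed (using the one-way ICC(1,1) form, which may differ from the variant the authors used, and made-up measurements):

        def icc_one_way(ratings):
            """ICC(1,1): one-way random-effects intra-class correlation.
            `ratings` holds one list of k repeated measurements per subject."""
            n, k = len(ratings), len(ratings[0])
            grand = sum(sum(r) for r in ratings) / (n * k)
            means = [sum(r) / k for r in ratings]
            msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
            msw = sum((x - m) ** 2
                      for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        # Two repeated measurements on five made-up mandible distances (mm)
        data = [[21.4, 21.5], [18.2, 18.3], [25.0, 24.8],
                [30.1, 30.1], [16.7, 16.9]]
        print(f"ICC(1,1) = {icc_one_way(data):.3f}")  # close to 1: reliable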

  17. Developing Individualized IEP Goals in the Age of Technology: Quality Challenges and Solutions

    ERIC Educational Resources Information Center

    More, Cori M.; Hart Barnett, Juliet E.

    2014-01-01

    Many school districts have adopted commercially available software or templates for electronic Individualized Education Program (IEP) development. These programs have useful features that allow Individualized Education Programs to be electronically developed and reliably stored for each student. Although the program features are designed to…

  18. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    (Figure residue: software system categories, including manned and unmanned spacecraft, batch systems, airborne avionics, and real-time closed-loop operations.) ... software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the ... specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were ...

  19. Demonstrating a Realistic IP Mission Prototype

    NASA Technical Reports Server (NTRS)

    Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith

    2003-01-01

    Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.

  20. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) is a computer program developed for measuring the reliability of other software. It is easier for non-specialists in reliability to use than many other currently available programs developed for the same purpose. CASRE incorporates the mathematical modeling capabilities of the public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) program and runs in the Windows environment. It provides a menu-driven command interface; the enabling and disabling of menu options guides the user through (1) selection of a set of failure data, (2) execution of a mathematical model, and (3) analysis of the model's results. CASRE is written in the C language.
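
    CASRE's specific model set is not listed in this summary; as one example of the kind of model such tools execute, the Goel-Okumoto NHPP model m(t) = a(1 - exp(-b t)) can be fitted to cumulative failure times by maximum likelihood:

        import math

        def fit_goel_okumoto(times):
            """Maximum-likelihood fit of m(t) = a*(1 - exp(-b*t)) to a list
            of cumulative failure times observed up to T = times[-1]."""
            n, T, s = len(times), times[-1], sum(times)

            def g(b):  # the ML equation for b is g(b) = 0
                return n / b - s - n * T * math.exp(-b * T) / (1 - math.exp(-b * T))

            lo, hi = 1e-8, 10.0
            while g(hi) > 0:  # widen the bracket if needed
                hi *= 2
            for _ in range(200):  # bisection
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
            b = (lo + hi) / 2
            a = n / (1 - math.exp(-b * T))
            return a, b

        # Illustrative failure times (hours); real tools read actual logs.
        times = [8, 21, 33, 58, 79, 113, 150, 194, 255, 331, 430, 560]
        a, b = fit_goel_okumoto(times)
        print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.4f}/hr")

    The fitted curve then yields the usual outputs of such tools, for example the expected number of remaining faults (a - n) and the current failure intensity a*b*exp(-b*T).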

  1. Gearbox Reliability Collaborative Phase 3 Gearbox 3 Test

    DOE Data Explorer

    Keller, Jonathan (ORCID:0000000177243885)

    2016-12-28

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This data set describes the tests of GRC gearbox 3 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpreting the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and SolidWorks software are required to view the data files.

  2. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test

    DOE Data Explorer

    Keller, Jonathan; Robb, Wallen

    2016-05-12

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This data set describes the tests of GRC gearbox 2 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpreting the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and SolidWorks software are required to view the data files.

  3. Wireless Sensor Networks for Developmental and Flight Instrumentation

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Figueroa, Fernando; Becker, Jeffrey; Foster, Mark; Wang, Ray; Gamudevelli, Suman; Studor, George

    2011-01-01

    Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network and ZigBee Pro 2007 standards are finding increasing use in home automation and smart energy markets providing a framework for interoperable software. The Wireless Connections in Space Project, funded by the NASA Engineering and Safety Center, is developing technology, metrics and requirements for next-generation spacecraft avionics incorporating wireless data transport. The team from Stennis Space Center and Mobitrum Corporation, working under a NASA SBIR grant, has developed techniques for embedding plug-and-play software into ZigBee WSN prototypes implementing the IEEE 1451 Transducer Electronic Datasheet (TEDS) standard. The TEDS provides meta-information regarding sensors such as serial number, calibration curve and operational status. Incorporation of TEDS into wireless sensors leads directly to building application level software that can recognize sensors at run-time, dynamically instantiating sensors as they are added or removed. The Ames Research Center team has been experimenting with this technology building demonstration prototypes for on-board health monitoring. Innovations in technology, software and process can lead to dramatic improvements for managing sensor systems applied to Developmental and Flight Instrumentation (DFI) aboard aerospace vehicles. A brief overview of the plug-and-play ZigBee WSN technology is presented along with specific targets for application within the aerospace DFI market. The software architecture for the sensor nodes incorporating the TEDS information is described along with the functions of the Network Capable Gateway processor which bridges 802.15.4 PAN to the TCP/IP network. Client application software connects to the Gateway and is used to display TEDS information and real-time sensor data values updated every few seconds, incorporating error detection and logging to help measure performance and reliability in relevant target environments. Test results from our prototype WSN running the Mobitrum software system are summarized and the implications to the scalability and reliability for DFI applications are discussed. Our demonstration system, incorporating sensors for life support system and structural health monitoring is described along with test results obtained by running the demonstration prototype in relevant environments such as the Wireless Habitat Testbed at Johnson Space Center in Houston. An operations concept for improved sensor process flow from design to flight test is outlined specific to the areas of Environmental Control and Life Support System performance characterization and structural health monitoring of human-rated spacecraft. This operations concept will be used to highlight the areas where WSN technology, particularly plug-and-play software based on IEEE 1451, can improve the current process, resulting in significant reductions in the technical effort, overall cost and schedule for providing DFI capability for future spacecraft.
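
    The full IEEE 1451 TEDS content is only summarized above; a simplified sketch (fields and calibration form are illustrative, not the standard's layout) shows how TEDS metadata enables run-time instantiation of sensors:

        from dataclasses import dataclass

        @dataclass
        class Teds:
            """Simplified Transducer Electronic Data Sheet record."""
            serial: str
            kind: str
            units: str
            cal_gain: float
            cal_offset: float
            status: str = "operational"

            def to_engineering_units(self, raw):
                return self.cal_gain * raw + self.cal_offset

        class SensorRegistry:
            """Plug-and-play: sensors are instantiated as their TEDS arrive."""
            def __init__(self):
                self.sensors = {}

            def announce(self, teds):  # called when a node joins the network
                self.sensors[teds.serial] = teds
                print(f"registered {teds.kind} sensor {teds.serial} ({teds.units})")

            def reading(self, serial, raw):
                return self.sensors[serial].to_engineering_units(raw)

        registry = SensorRegistry()
        registry.announce(Teds("SN-0042", "temperature", "degC", 0.125, -40.0))
        print(registry.reading("SN-0042", 512))  # -> 24.0 degC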

  4. The development of data acquisition and processing application system for RF ion source

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodan; Wang, Xiaoying; Hu, Chundong; Jiang, Caichao; Xie, Yahong; Zhao, Yuanzhe

    2017-07-01

    As the key ion source component of nuclear fusion auxiliary heating devices, the radio frequency (RF) ion source has been developed and gradually applied to provide a source plasma that is easy to control and highly reliable; it also readily achieves long-pulse steady-state operation. The development and testing of the RF ion source generate a large volume of raw experimental data, so a stable and reliable computer application is needed for data acquisition, storage, access, and real-time monitoring. This paper presents the development of such a data acquisition and processing application system for the RF ion source. The hardware platform is based on the PXI system, and the software is programmed in the LabVIEW development environment. The key technologies used in the software implementation include long-pulse data acquisition, multi-threaded processing, the transmission control communication protocol, and the Lempel-Ziv-Oberhumer data compression algorithm. The design has been tested and applied on the RF ion source, and the test results show that it works reliably and steadily. With its help, stable plasma discharge data from the RF ion source are collected, stored, accessed, and monitored in real time. The system has practical significance for RF ion source experiments.
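
    The LabVIEW implementation is not reproducible here; the sketch below mimics the shape of the acquisition pipeline in Python, with zlib standing in for the LZO compressor named in the paper and an in-process queue standing in for the TCP link:

        import queue
        import threading
        import zlib

        buffer = queue.Queue(maxsize=100)

        def acquire(n_blocks=5, samples_per_block=1000):
            """Producer: emit raw sample blocks during a (simulated) pulse."""
            for i in range(n_blocks):
                buffer.put(bytes((i + j) % 256 for j in range(samples_per_block)))
            buffer.put(None)  # end-of-pulse marker

        def store():
            """Consumer: compress each block before storage/transmission."""
            total_raw = total_comp = 0
            while (block := buffer.get()) is not None:
                total_raw += len(block)
                total_comp += len(zlib.compress(block))
            print(f"compressed {total_raw} bytes to {total_comp}")

        t1 = threading.Thread(target=acquire)
        t2 = threading.Thread(target=store)
        t1.start(); t2.start(); t1.join(); t2.join()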

  5. An application of machine learning to the organization of institutional software repositories

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney; Henderson, Scott; Truszkowski, Walt

    1993-01-01

    Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that have motivated this work, present the design of the system, and evaluate the results of its application.

  6. Lessons learned applying CASE methods/tools to Ada software development projects

    NASA Technical Reports Server (NTRS)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  7. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
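
    The recovery block structure compared in the experiment has a standard form: execute the primary version, apply an acceptance test, and fall through to alternates on failure. A minimal sketch with toy routines (not the experiment's twenty programs):

        def recovery_block(inputs, primary, alternates, acceptance_test):
            """Try the primary routine, then each alternate, returning the
            first result that passes the acceptance test."""
            for routine in [primary, *alternates]:
                try:
                    result = routine(inputs)
                    if acceptance_test(inputs, result):
                        return result
                except Exception:
                    continue  # treat a crash like a failed acceptance check
            raise RuntimeError("all versions failed the acceptance test")

        # Toy example: a square root with a faulty primary version.
        def primary(x): return x / 2                        # wrong algorithm
        def alternate(x): return x ** 0.5
        def acceptable(x, y): return abs(y * y - x) < 1e-6  # independent check

        print(recovery_block(9.0, primary, [alternate], acceptable))  # -> 3.0

    The abstract's conclusion maps directly onto this structure: the reliability gain hinges on an acceptance test that fails independently of the software versions themselves.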

  8. Computer science: Key to a space program renaissance. The 1981 NASA/ASEE summer study on the use of computer science and technology in NASA. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)

    1983-01-01

    Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.

  9. Crew Exercise Fact Sheet

    NASA Technical Reports Server (NTRS)

    Rafalik, Kerrie

    2017-01-01

    Johnson Space Center (JSC) provides research, engineering, development, integration, and testing of hardware and software technologies for exercise systems applications in support of human spaceflight. This includes sustaining the current suite of on-orbit exercise devices by reducing maintenance, addressing obsolescence, and increasing reliability through creative engineering solutions. Advanced exercise systems technology development efforts focus on the sustainment of crew's physical condition beyond Low Earth Orbit for extended mission durations with significantly reduced mass, volume, and power consumption when compared to the ISS.

  10. Crew Exercise

    NASA Technical Reports Server (NTRS)

    Rafalik, Kerrie K.

    2017-01-01

    Johnson Space Center (JSC) provides research, engineering, development, integration, and testing of hardware and software technologies for exercise systems applications in support of human spaceflight. This includes sustaining the current suite of on-orbit exercise devices by reducing maintenance, addressing obsolescence, and increasing reliability through creative engineering solutions. Advanced exercise systems technology development efforts focus on the sustainment of crew's physical condition beyond Low Earth Orbit for extended mission durations with significantly reduced mass, volume, and power consumption when compared to the ISS.

  11. An Introduction to Flight Software Development: FSW Today, FSW 2010

    NASA Technical Reports Server (NTRS)

    Gouvela, John

    2004-01-01

    Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects, including the Cockpit Avionics Upgrade, are applied to projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high-quality software. It will propose what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, and modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off-the-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design, iterative development using independent components, and rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels, guided by a Mission-Systems Architecture framework with focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software, and processes will itself be subject to technological change and will need managed change and improvement.

  12. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the Space Shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality': it is the probability of failure-free operation of a computer program for a specified time and environment.

  13. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
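
    The handbook's own tools are not public here; as a small illustration of one standard complexity metric, a McCabe-style cyclomatic count can be approximated from a parse tree (a simplification of what commercial complexity analyzers measure):

        import ast
        import textwrap

        BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                        ast.ExceptHandler, ast.IfExp)

        def cyclomatic_complexity(source):
            """McCabe-style approximation: 1 + number of decision points."""
            tree = ast.parse(source)
            return 1 + sum(isinstance(node, BRANCH_NODES)
                           for node in ast.walk(tree))

        code = textwrap.dedent("""
            def classify(x):
                if x < 0:
                    return "negative"
                for _ in range(3):
                    if x % 2 == 0 and x > 10:
                        return "big even"
                return "other"
        """)
        print(cyclomatic_complexity(code))  # 1 + if + for + if + and = 5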

  14. The Legacy of Space Shuttle Flight Software

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  15. A methodology for producing reliable software, volume 1

    NASA Technical Reports Server (NTRS)

    Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.

    1976-01-01

    An investigation into the areas having an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of this investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

  16. Patients' experiences with technology during inpatient rehabilitation: opportunities to support independence and therapeutic engagement.

    PubMed

    Fager, Susan Koch; Burnfield, Judith M

    2014-03-01

    To understand individuals' perceptions of technology use during inpatient rehabilitation. A qualitative phenomenological study using semi-structured interviews of 10 individuals with diverse underlying diagnoses and/or a close family member who participated in inpatient rehabilitation. Core themes focused on assistive technology usage (equipment set-up, reliability and fragility of equipment, expertise required to use assistive technology and use of mainstream technologies) and opportunities for using technology to increase therapeutic engagement (opportunities for practice outside of therapy, goals for therapeutic exercises and technology for therapeutic exercises: motivation and social interaction). Interviews revealed the need for durable, reliable and intuitive technology without requiring a high level of expertise to install and implement. A strong desire for the continued use of mainstream devices (e.g. cell phones, tablet computers) reinforces the need for a wider range of access options for those with limited physical function. Finally, opportunities to engage in therapeutically meaningful activities beyond the traditional treatment hours were identified as valuable for patients to not only improve function but to also promote social interaction. Assistive technology increases functional independence of severely disabled individuals. End-users (patients and families) identified a need for designs that are durable, reliable, intuitive, easy to consistently install and use. Technology use (adaptive or commercially available) provides a mechanism to extend therapeutic practice beyond the traditional therapy day. Adapting skeletal tracking technology used in gaming software could automate exercise tracking, documentation and feedback for patient motivation and clinical treatment planning and interventions.
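
    As a rough sketch of the closing suggestion (hypothetical joint data; a real system would read a depth-camera skeleton stream), repetitions can be counted from a single joint trajectory by threshold crossings with hysteresis:

        def count_reps(heights, low=0.30, high=0.45):
            """Count repetitions from a wrist-height series (meters) using
            hysteresis: a rep is a rise above `high` after a dip below `low`."""
            reps, armed = 0, False
            for h in heights:
                if h < low:
                    armed = True      # bottom of the movement reached
                elif h > high and armed:
                    reps += 1         # top reached after a full descent
                    armed = False
            return reps

        # Synthetic trace covering three exercise cycles
        trace = [0.5, 0.4, 0.28, 0.35, 0.5, 0.29, 0.4, 0.5, 0.25, 0.33, 0.48]
        print(count_reps(trace))  # -> 3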

  17. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
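
    As a rough illustration of the probabilistic idea behind tools of this kind, the sketch below evaluates a two-parameter Weibull weakest-link failure probability for a uniformly stressed component; sigma_0, m, and v_0 are invented parameters, and this is not CARES/Life's actual routine set.

```python
# Rough illustration of the weakest-link Weibull calculation that probabilistic
# brittle-material tools build on; sigma_0, m, and v_0 are invented parameters.
import math

def failure_probability(stress_mpa, volume_mm3, sigma_0=350.0, m=10.0, v_0=1.0):
    """Two-parameter Weibull weakest-link failure probability for a uniformly
    stressed component: larger volumes and higher stresses raise the risk."""
    return 1.0 - math.exp(-(volume_mm3 / v_0) * (stress_mpa / sigma_0) ** m)

print(failure_probability(200.0, 50.0))  # larger component
print(failure_probability(200.0, 5.0))   # smaller component: lower risk
```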

  18. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  19. Seismology software: state of the practice

    NASA Astrophysics Data System (ADS)

    Smith, W. Spencer; Zeng, Zheng; Carette, Jacques

    2018-05-01

    We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.

  20. Seismology software: state of the practice

    NASA Astrophysics Data System (ADS)

    Smith, W. Spencer; Zeng, Zheng; Carette, Jacques

    2018-02-01

    We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.

  1. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  2. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. These releases exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions. PMID:29278255
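
    To make the model-fitting step concrete, here is a hedged sketch that fits the delayed S-shaped NHPP mean value function, one of the SRGM families named above, to hypothetical cumulative defect counts; the data and the use of scipy's curve_fit are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: fitting the delayed S-shaped NHPP mean value function, one of
# the SRGM families named above, to hypothetical cumulative defect counts.
import numpy as np
from scipy.optimize import curve_fit

def s_shaped_mean(t, a, b):
    # Expected cumulative defects by time t: a * (1 - (1 + b*t) * exp(-b*t)).
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

weeks = np.arange(1, 13, dtype=float)
cum_defects = np.array([2, 6, 13, 22, 31, 39, 45, 50, 53, 55, 56, 57], float)

(a_hat, b_hat), _ = curve_fit(s_shaped_mean, weeks, cum_defects, p0=(60.0, 0.3))
print(f"estimated total defects: {a_hat:.1f}, detection rate: {b_hat:.2f}/week")
print(f"predicted residual defects: {a_hat - cum_defects[-1]:.1f}")
```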

  3. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. These releases exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions.

  4. Verification Tools Secure Online Shopping, Banking

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Just like rover or rocket technology sent into space, the software that controls these technologies must be extensively tested to ensure reliability and effectiveness. Ames Research Center invented the open-source Java Pathfinder (JPF) toolset for the deep testing of Java-based programs. Fujitsu Labs of America Inc., based in Sunnyvale, California, improved the capabilities of the JPF Symbolic Pathfinder tool, establishing the tool as a means of thoroughly testing the functionality and security of Web-based Java applications such as those used for Internet shopping and banking.

  5. Software Technology for Adaptable, Reliable Systems (STARS): UUS40 - Risk-Reduction Reasoning-Based Development Paradigm Tailored to Navy C2 Systems

    DTIC Science & Technology

    1991-07-30

    Management reviews, engineering and WBS (Spirals 0-5); risk management planning (Spirals 0-5); proper initial planning (Spirals 0-1)... Reusability issues for trusted systems are associated closely with maintenance issues. Reuse theory and practice for highly trusted systems will require

  6. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
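
    As a sketch of the reliable, recoverable management of interdependent jobs that the report emphasizes, the fragment below runs jobs in dependency order with retries; the job names and the run() stub are hypothetical, and this is not DAGMan itself.

```python
# Sketch of dependency-ordered, retryable job management of the kind DAGMan
# provides; the job names and the run() stub below are hypothetical.
from graphlib import TopologicalSorter

dag = {"transfer": set(), "calibrate": {"transfer"}, "analyze": {"calibrate"}}

def run(job, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            print(f"running {job} (attempt {attempt})")
            return  # a real runner would submit the job and wait for it here
        except RuntimeError:
            continue  # recoverable failure: retry the job
    raise RuntimeError(f"{job} failed after {attempts} attempts")

for job in TopologicalSorter(dag).static_order():  # parents before children
    run(job)
```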

  7. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multi-version software experiments support this affirmation.

  8. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
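
    The scale of the testing problem can be made concrete with a standard zero-failure bound: assuming exponentially distributed failures, claiming a failure-rate bound of lambda_0 at confidence 1-alpha after failure-free testing requires at least ln(1/alpha)/lambda_0 test hours. A small worked calculation follows; the target is the usual ultra-reliability figure, not a number from the paper.

```python
# The arithmetic behind the infeasibility argument: assuming exponentially
# distributed failures, claiming a rate bound lambda_0 with confidence 1-alpha
# after failure-free testing requires T >= ln(1/alpha) / lambda_0 test hours.
import math

lambda_0 = 1e-9   # at most one failure per 10^9 hours (ultra-reliability)
alpha = 0.10      # i.e. 90% confidence

hours = math.log(1.0 / alpha) / lambda_0
print(f"required failure-free test time: {hours:.2e} hours")
print(f"that is roughly {hours / 8760:.0f} years on a single system")
```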

  9. Software Reliability, Measurement, and Testing. Volume 2. Guidebook for Software Reliability Measurement and Testing

    DTIC Science & Technology

    1992-04-01

    ...contractor's existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the... either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a... to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more

  10. An overview of platforms for cloud based development.

    PubMed

    Fylaktopoulos, G; Goumas, G; Skolarikis, M; Sotiropoulos, A; Maglogiannis, I

    2016-01-01

    This paper provides an overview of state-of-the-art technologies for software development in cloud environments. The surveyed systems cover the whole spectrum of cloud-based development, including integrated programming environments, code repositories, software modeling, composition and documentation tools, and application management and orchestration. In this work we evaluate the existing cloud development ecosystem based on a wide number of characteristics such as applicability (e.g. programming and database technologies supported), productivity enhancement (e.g. editor capabilities, debugging tools), support for collaboration (e.g. repository functionality, version control) and post-development application hosting, and we compare the surveyed systems. The survey shows that software engineering in the cloud era has taken its initial steps, showing potential to provide concrete implementation and execution environments for cloud-based applications. However, a number of important challenges need to be addressed for this approach to be viable. These challenges are discussed in the article, and we conclude that although several steps have been made, a compact and reliable solution does not yet exist.

  11. Commercialization of NESSUS: Status

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Millwater, Harry R.

    1991-01-01

    A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the ongoing commercialization effort is to begin the transfer of technology developed under the Probabilistic Structural Analysis Method (PSAM) effort into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general-purpose probabilistic finite element computer program using state-of-the-art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost-effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next-generation Space Shuttle Main Engine.
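
    As a toy illustration of the stress-strength idea underlying probabilistic structural analysis, the sketch below estimates a probability of failure by Monte Carlo; the normal distributions and parameters are invented and do not represent NESSUS's actual methods.

```python
# Toy stress-strength illustration of probabilistic structural analysis:
# estimate P(load exceeds strength) by Monte Carlo. The distributions and
# parameters are invented, not NESSUS's methods.
import random

random.seed(0)
N = 100_000
failures = sum(
    random.gauss(400.0, 40.0) < random.gauss(300.0, 30.0)  # strength < load?
    for _ in range(N)
)
print(f"estimated probability of failure: {failures / N:.4f}")  # ~0.023
```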

  12. Hadoop distributed batch processing for Gaia: a success story

    NASA Astrophysics Data System (ADS)

    Riello, Marco

    2015-12-01

    The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for the scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI has been the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started following in our footsteps. In this talk we present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI, and the excellent performance of the system.
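
    For readers unfamiliar with the paradigm credited here, a minimal Map/Reduce-style aggregation is sketched below; the per-transit records and band keys are made up and bear no relation to DPCI's actual pipeline.

```python
# Toy illustration of the Map/Reduce pattern credited for DPCI's throughput;
# the per-transit records and band keys are invented, not Gaia data.
from collections import defaultdict

transits = [("band_G", 1.2), ("band_BP", 0.8), ("band_G", 1.1), ("band_RP", 0.9)]

groups = defaultdict(list)
for band, flux in transits:   # "map" emits (key, value); grouping is "shuffle"
    groups[band].append(flux)

mean_flux = {band: sum(v) / len(v) for band, v in groups.items()}  # "reduce"
print(mean_flux)
```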

  13. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  14. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
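
    A minimal simulation of the model as described, failure times as order statistics of independent but non-identically distributed exponentials, with made-up rates:

```python
# Minimal simulation of the model described: failure times as order statistics
# of independent, non-identically distributed exponentials (rates invented).
import random

random.seed(1)
rates = [0.9, 0.7, 0.5, 0.3, 0.1]  # one hypothetical rate per residual fault
failure_times = sorted(random.expovariate(r) for r in rates)
print("ordered failure times:", [round(t, 2) for t in failure_times])
```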

  15. Production of Reliable Flight Crucial Software: Validation Methods Research for Fault Tolerant Avionics and Control Systems Sub-Working Group Meeting

    NASA Technical Reports Server (NTRS)

    Dunham, J. R. (Editor); Knight, J. C. (Editor)

    1982-01-01

    The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.

  16. JPRS Report, Science & Technology, USSR: Computers, Control Systems and Machines

    DTIC Science & Technology

    1989-03-14

    ...optimizatsii slozhnykh sistem (Coding Theory and Complex System Optimization). Alma-Ata, Nauka Press, 1977, pp. 8-16. 11. Author's certificate number... Interpreter Specifics [O. I. Amvrosova]... Creation of Modern Computer Systems for Complex Ecological... processor can be designed to decrease degradation upon failure and assure more reliable processor operation, without requiring more complex software or

  17. Loose, Falling Characters and Sentences: The Persistence of the OCR Problem in Digital Repository E-Books

    ERIC Educational Resources Information Center

    Kichuk, Diana

    2015-01-01

    The electronic conversion of scanned image files to readable text using optical character recognition (OCR) software and the subsequent migration of raw OCR text to e-book text file formats are key remediation or media conversion technologies used in digital repository e-book production. Despite real progress, the OCR problem of reliability and…

  18. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
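
    A sketch of the underlying Moranda geometric de-eutrophication model, in which the failure rate shrinks by a constant ratio after each bug is found and fixed; the initial rate D and ratio k below are illustrative only.

```python
# Sketch of the Moranda geometric de-eutrophication model behind the technique:
# after each bug is found and fixed, the failure rate shrinks by a constant
# ratio k. The initial rate D and ratio k here are illustrative only.
import random

random.seed(2)
D, k = 1.0, 0.8  # hypothetical initial failure rate and improvement ratio

for i in range(10):
    rate = D * k ** i                    # rate while bug i+1 is still latent
    gap = random.expovariate(rate)       # simulated time to the next failure
    print(f"bugs fixed: {i:2d}  rate: {rate:.3f}  next failure in {gap:6.2f} h")
```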

  19. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  20. An experiment in software reliability: Additional analyses using data from automated replications

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Lauterbach, Linda A.

    1988-01-01

    A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. Additional analysis of the data raises new questions concerning the phenomenon of interacting faults.

  1. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifested by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the processing of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus, in order for a transformation system to be practical, it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that the sophisticated multi-lookahead backtracking parsing technology is central to demonstrating the existence of HIS.

  2. Survey of Software Assurance Techniques for Highly Reliable Systems

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy

    2004-01-01

    This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.

  3. Does cone beam CT actually ameliorate stab wound analysis in bone?

    PubMed

    Gaudio, D; Di Giancamillo, M; Gibelli, D; Galassi, A; Cerutti, E; Cattaneo, C

    2014-01-01

    This study aims at verifying the potential of a recent radiological technology, cone beam CT (CBCT), for the reproduction of digital 3D models which may allow the user to verify the inner morphology of sharp force wounds within the bone tissue. Several sharp force wounds were produced by both single and double cutting edge weapons on cancellous and cortical bone, and then acquired by cone beam CT scan. The lesions were analysed by different software (a DICOM file viewer and reverse engineering software). Results verified the limited performance of such technology for lesions made on cortical bone, whereas on cancellous bone reliable models were obtained, and the precise morphology within the bone tissues was visible. On the basis of such results, a method for differential diagnosis between cutmarks by sharp tools with one and two cutting edges can be proposed. On the other hand, the metrical computerised analysis of lesions highlights a clear increase of error range for measurements under 3 mm. Metric data taken by different operators show a strong dispersion (% relative standard deviation). This pilot study shows that the use of CBCT technology can improve the morphological investigation of stab wounds on cancellous bone. Conversely, metric analysis of the lesions, as well as morphological analysis of wound dimensions under 3 mm, does not seem to be reliable.

  4. The Preliminary Results of GMSTech: A Software Development for Microseismic Characterization

    NASA Astrophysics Data System (ADS)

    Rohaman, Maman; Suhendi, Cahli; Verdhora Ry, Rexha; Sugiartono Prabowo, Billy; Widiyantoro, Sri; Nugraha, Andri Dian; Yudistira, Tedi; Mujihardi, Bambang

    2017-04-01

    The processing of microseismic data requires reliable software for imaging subsurface conditions related to the occurring microseismicity. In general, currently available software is specific to a certain processing module and developed by different developers. Software with integrated processing modules, however, offers better value because users can work more easily and quickly. We developed GMSTech (Ganesha Microseismic Technology), a stand-alone software package written in C# consisting of several modules for the processing of microseismic data. Its function is to solve non-linear inverse problems and image the subsurface. The C# library is supported by ILNumerics to reduce time consumption and give good visualization. In this preliminary result, we present three developed modules: (1) hypocenter determination, (2) moment magnitude calculation, and (3) 3D seismic tomography. In the first module, we provide four methods for locating microseismic events that can be chosen by the user independently: the simulated annealing method, the guided grid-search method, Geiger's method, and joint hypocenter determination (JHD). The second module can be used for calculating moment magnitude using the Brune method and for estimating the released energy of an event. Finally, we also provide a module for 3-D seismic tomography, imaging velocity structures based on delay-time tomography. We demonstrated the software using both synthetic data and real data from a geothermal field in Indonesia. The results for all modules are reliable and remarkable, as reviewed statistically by RMS error. We will keep examining the software using other data sets and developing further processing modules.
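
    As a concrete (and heavily simplified) illustration of the grid-search option among the hypocenter methods, the sketch below picks the grid node minimizing RMS travel-time residuals under a uniform-velocity assumption; the stations, velocity, and picks are synthetic, not GMSTech's data.

```python
# Simplified grid-search hypocenter (epicentre) location: choose the grid node
# minimizing RMS travel-time residuals. Stations, uniform velocity, and picks
# are synthetic; real modules are far more elaborate.
import itertools, math

V = 3.0  # assumed uniform P-wave velocity, km/s
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # x, y in km
picks = [1.67, 2.69, 2.24, 3.07]  # arrival times minus origin time, seconds

def rms(src):
    x, y = src
    res = [math.hypot(x - sx, y - sy) / V - t
           for (sx, sy), t in zip(stations, picks)]
    return math.sqrt(sum(r * r for r in res) / len(res))

grid = itertools.product([0.5 * i for i in range(21)], repeat=2)
best = min(grid, key=rms)
print(f"best-fit epicentre: {best}, RMS residual: {rms(best):.3f} s")
```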

  5. A Sustainable, Reliable Mission-Systems Architecture that Supports a System of Systems Approach to Space Exploration

    NASA Technical Reports Server (NTRS)

    Watson, Steve; Orr, Jim; O'Neil, Graham

    2004-01-01

    A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.

  6. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Data

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the Space Shuttle, with particular emphasis on quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of Space Shuttle flights over time, PASS's development history, and other indicators of the reliability of the system's development. The achieved reliability of the system is also compared to predicted reliability.

  7. Views of Health Information Management Staff on the Medical Coding Software in Mashhad, Iran.

    PubMed

    Kimiafar, Khalil; Hemmati, Fatemeh; Banaye Yazdipour, Alireza; Sarbaz, Masoumeh

    2018-01-01

    Systematic evaluation of Health Information Technology (HIT) and users' views leads to the modification and development of these technologies in accordance with users' needs. The purpose of this study was to investigate the views of Health Information Management (HIM) staff on the quality of medical coding software. A descriptive cross-sectional study was conducted between May and July 2016 in 26 hospitals (academic and non-academic) in Mashhad, north-eastern Iran. The study population consisted of the chairs of HIM departments and medical coders (58 staff). Data were collected through a valid and reliable questionnaire and analyzed using SPSS version 16.0. In the staff's view, among the advantages of coding software, reducing coding time had the highest average score (mean = 3.82) while cost reduction had the lowest (mean = 3.20). Meanwhile, concern about losing job opportunities was the least important perceived disadvantage (15.5%) of using coding software. In general, the results of this study showed that coding software has deficiencies in some cases. Designers and developers of health information coding software should pay more attention to technical aspects, in-work reminders, help in selecting proper codes through access to coding rules, maintenance services, links to other relevant databases, and the possibility of providing brief and detailed reports in different formats.

  8. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  9. Development of a Remote Accessibility Assessment System through three-dimensional reconstruction technology.

    PubMed

    Kim, Jong Bae; Brienza, David M

    2006-01-01

    A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, analyzing the accuracy of dimensional measurements in a virtual environment and comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.

  10. An experimental investigation of fault tolerant software structures in an avionics application

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Eckhardt, Dave E., Jr.

    1989-01-01

    The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array were developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block, and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. The results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible, since higher reliability can be obtained using an acceptance check of even modest reliability.
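
    For reference, the exact-consensus majority voter at the heart of N-version structures can be sketched in a few lines; the three toy "versions" below stand in for diverse implementations and are not the experiment's redundancy management software.

```python
# Minimal exact-consensus majority voter of the kind used in N-version
# structures; the three toy "versions" stand in for diverse implementations.
from collections import Counter

def majority_vote(outputs):
    value, count = Counter(outputs).most_common(1)[0]
    if 2 * count <= len(outputs):
        raise RuntimeError("no majority: versions disagree")
    return value

versions = [lambda x: x > 3.0, lambda x: x > 3.0, lambda x: x >= 3.0]
print(majority_vote([v(3.0) for v in versions]))  # two of three agree: False
```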

  11. Measuring the impact of computer resource quality on the software development process and product

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process were speculated to have a measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data were extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product, as exemplified by the subject NASA data, was examined. Based upon the results, a number of computer resource-related implications are provided.

  12. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software is characterized by no wearout and costly redundancy, making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to represent the number and types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixes required, are also presented.

  13. Indoor Navigation by People with Visual Impairment Using a Digital Sign System

    PubMed Central

    Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156

  14. Implementation of Autonomous Control Technology for Plant Growth Chambers

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.; Sager, John C.; Krumins, Valdis; Wheeler, Raymond M.

    2002-01-01

    The Kennedy Space Center has significant infrastructure for research using controlled environment plant growth chambers. Such research supports development of bioregenerative life support technology for long-term space missions. Most of the existing chambers in Hangar L and Little L will be moved to the new Space Experiment Research and Processing Laboratory (SERPL) in the summer of 2003. The impending move has created an opportunity to update the control system technologies to allow for greater flexibility, less labor for set-up and maintenance, better diagnostics, better reliability and easier data retrieval. Part of these improvements can be realized using hardware which communicates through an ethernet connection to a central computer for supervisory control but can be operated independently of the computer during routine run-time. Both the hardware and software functionality of an envisioned system were tested on a prototype plant growth chamber (CEC-4) in Hangar L. Based upon these tests, recommendations for hardware and software selection and system design for implementation in SERPL are included.

  15. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  16. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers, using the GAITRite walkway and a single video camera, between student physical therapists and an experienced physical therapist. The study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) while the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). The reliability of a single-camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), the participants (e.g. loose-fitting clothing), and the camera system (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  17. Object-Oriented Technology-Based Software Library for Operations of Water Reclamation Centers

    NASA Astrophysics Data System (ADS)

    Otani, Tetsuo; Shimada, Takehiro; Yoshida, Norio; Abe, Wataru

    SCADA systems in water reclamation centers have been constructed from hardware and software that each manufacturer produced according to their own design. Even though this approach was effective in realizing real-time, reliable execution, it is an obstacle to reducing the cost of system construction and maintenance. A promising solution is to set specifications that can be used in common. In terms of software, the information model approach has been adopted in SCADA systems in other fields, such as telecommunications and power systems. An information model is a piece of software specification that describes a physical or logical object to be monitored. In this paper, we propose information models for the operations of water reclamation centers, which have not previously existed. In addition, we show the feasibility of the information model in terms of common use and processing performance.
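
    To suggest what an information model might look like in code, here is a hypothetical monitored-object class; the class name, fields, and alarm limit are invented for illustration and are not the paper's actual model definitions.

```python
# Hypothetical sketch of an information model in code; the class, fields, and
# alarm limit are invented for illustration, not the paper's definitions.
from dataclasses import dataclass

@dataclass
class BlowerInformationModel:
    """Logical description of a monitored aeration blower."""
    tag: str                        # plant-wide identifier
    running: bool                   # current on/off state
    discharge_pressure_kpa: float   # latest measured value
    alarm_high_pressure: bool = False

    def check_alarms(self, high_limit_kpa: float = 80.0) -> None:
        self.alarm_high_pressure = self.discharge_pressure_kpa > high_limit_kpa

blower = BlowerInformationModel("BLW-101", True, 84.5)
blower.check_alarms()
print(blower)
```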

  18. The component-based architecture of the HELIOS medical software engineering environment.

    PubMed

    Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C

    1994-12-01

    The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment, devoted to the development and maintenance of multimedia distributed medical applications. HELIOS is made of a set of software components federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis-Design Environment, the Object Information System and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers. They include Image Related services, a Natural Language Processor, a Decision Support System and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications.

  19. Proceedings of the American power conference: Volume 59-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, A.E.

    1997-07-01

    This is Volume 59-1 of the proceedings of the American Power Conference, 1997. The contents include environmental protection; regulatory compliance and permitting; convergence of electric and gas industries; renewable/wind energy; improving operations and maintenance; globalization of renewable, generation, and distribution technologies; diagnostics; battery reliability; access to power transmission facilities; software for competitive decision making and operation; transmission and distribution; and nuclear operations and options.

  20. Fly-by-light technology development plan

    NASA Technical Reports Server (NTRS)

    Todd, J. R.; Williams, T.; Goldthorpe, S.; Hay, J.; Brennan, M.; Sherman, B.; Chen, J.; Yount, Larry J.; Hess, Richard F.; Kravetz, J.

    1990-01-01

    The driving factors and developments that make a fly-by-light (FBL) system viable are discussed. Documentation, analyses, and recommendations are provided on the major issues pertinent to facilitating the U.S. implementation of commercial FBL aircraft before the turn of the century. Areas of particular concern include ultra-reliable computing (hardware/software); electromagnetic environment (EME); verification and validation; optical techniques; life-cycle maintenance; and the basis and procedures for certification.

  1. Developing Decision-Making Skills Using Immersive VR

    DTIC Science & Technology

    2013-06-14

    Institution: Department of Otolaryngology, Level 2, Royal Victorian Eye and Ear Hospital, 32 Gisborne St, East Melbourne... reliability of this measure. We also intend to integrate other data models into the feedback system, such as pattern-based models [8], and... 'Pattern-Based Real-Time Feedback for a Temporal Bone Simulator', Proc. of the 19th ACM Symposium on Virtual Reality Software and Technology, 2013

  2. Transfer of infrared thermography predictive maintenance technologies to Soviet-designed nuclear power plants: experience at Chernobyl

    NASA Astrophysics Data System (ADS)

    Pugh, Ray; Huff, Roy

    1999-03-01

    The importance of infrared (IR) technology and analysis in today's world of predictive maintenance and reliability-centered maintenance cannot be overstated. The use of infrared is especially important in facilities that are required to maintain a high degree of equipment reliability because of plant or public safety concerns. As with all maintenance tools, particularly those used in predictive maintenance approaches, training plays a key role in their effectiveness and the benefit gained from their use. This paper details an effort to transfer IR technology to Soviet-designed nuclear power plants in Russia, Ukraine, and Lithuania. Delivery of this technology and post-delivery training activities have been completed recently at the Chornobyl nuclear power plant in Ukraine. Many interesting challenges were encountered during this effort. Hardware procurement and delivery of IR technology to a sensitive country were complicated by United States regulations. Freight and shipping infrastructure and host-country customs policies complicated hardware transport. Training activities were complicated by special hardware, software and training material translation needs, limited communication opportunities, and site logistical concerns. These challenges and others encountered while supplying the Chornobyl plant with state-of-the-art IR technology are described in this paper.

  3. FAA center for aviation systems reliability: an overview

    NASA Astrophysics Data System (ADS)

    Brasche, Lisa J. H.

    1996-11-01

    The FAA Center for Aviation Systems Reliability has as its objectives: to develop quantitative nondestructive evaluation (NDE) methods for aircraft structures and materials, including prototype instrumentation, software, techniques and procedures; and to develop and maintain comprehensive education and training programs specific to the inspection of aviation structures. The program, which includes contributions from Iowa State University, Northwestern University, Wayne State University, Tuskegee University, AlliedSignal Propulsion Engines, General Electric Aircraft Engines and Pratt and Whitney, has been in existence since 1990. Efforts underway include: development of inspections for adhesively bonded structures; detection of corrosion; development of advanced NDE concepts that form the basis for an inspection simulator; improvement of titanium inspection as part of the Engine Titanium Consortium; and development of education and training programs. An overview of the efforts underway is provided, with a focus on those technologies closest to technology transfer.

  4. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Wilson, Larry W.

    1989-01-01

    The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data-creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they correctly simulate and were asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
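
    A brief sketch of the Jelinski-Moranda model investigated in the appended paper: while i faults remain, the failure rate is i times a per-fault rate phi. A crude grid-search maximum-likelihood fit over made-up inter-failure gaps shows how the model produces predictions, and why they can be fragile; the data and grid are illustrative assumptions.

```python
# Sketch of the Jelinski-Moranda model examined in the appended paper: while
# i faults remain, the failure rate is i * phi. A crude grid-search MLE over
# made-up inter-failure gaps shows how the model produces (fragile) predictions.
import math

gaps = [0.5, 0.8, 1.1, 1.9, 2.4, 4.0, 5.5]  # hypothetical hours between failures
n = len(gaps)

def log_lik(N, phi):
    # Gap i (0-based) is exponential with rate (N - i) * phi under the model.
    return sum(math.log((N - i) * phi) - (N - i) * phi * t
               for i, t in enumerate(gaps))

candidates = ((N, j / 1000.0) for N in range(n, 50) for j in range(1, 200))
N_hat, phi_hat = max(candidates, key=lambda p: log_lik(*p))

print(f"estimated initial faults N = {N_hat}, per-fault rate phi = {phi_hat:.3f}")
if N_hat > n:
    print(f"predicted time to next failure: {1.0 / ((N_hat - n) * phi_hat):.1f} h")
else:
    print("MLE says no faults remain -- a known fragility of the model")
```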

  5. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
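
    The recovery block structure reviewed here is easy to sketch: run the primary version, check its output with an acceptance test, and fall back to an alternate on failure. The square-root routines below are toy stand-ins, not the tutorial's examples.

```python
# Sketch of the recovery block structure: run the primary, check its output
# with an acceptance test, fall back to an alternate on failure.
def acceptance_test(x, result):
    return result >= 0 and abs(result * result - x) < 1e-6

def primary(x):
    return x ** 0.5 - 1e-3  # deliberately faulty version

def alternate(x):
    return x ** 0.5         # simpler, trusted fallback

def recovery_block(x):
    for version in (primary, alternate):
        result = version(x)  # a real implementation would also restore state
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all versions failed the acceptance test")

print(recovery_block(2.0))  # primary fails the check; alternate succeeds
```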

  6. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help the users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  7. Behavioral biometrics for verification and recognition of malicious software agents

    NASA Astrophysics Data System (ADS)

    Yampolskiy, Roman V.; Govindaraju, Venu

    2008-04-01

    Homeland security requires technologies capable of positive and reliable identification of humans for law enforcement, government, and commercial applications. As artificially intelligent agents improve in their abilities and become a part of our everyday life, the possibility of using such programs for undermining homeland security increases. Virtual assistants, shopping bots, and game playing programs are used daily by millions of people. We propose applying statistical behavior modeling techniques developed by us for recognition of humans to the identification and verification of intelligent and potentially malicious software agents. Our experimental results demonstrate the feasibility of such methods for both artificial agent verification and even for recognition purposes.

  8. Evaluating software development characteristics: Assessment of software measures in the Software Engineering Laboratory. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.

    1981-01-01

    Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievement, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.

  9. An empirical study of flight control software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands, with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the planned error burst analyses. The pitch axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.

  10. Trends in software reliability for digital flight control

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Hecht, M.

    1983-01-01

    Software error data from major recent digital flight control system development programs are presented. The report summarizes the data, compares them with similar data from previous surveys, and identifies trends and disciplines for improving software reliability.

  11. Cost Estimation of Software Development and the Implications for the Program Manager

    DTIC Science & Technology

    1992-06-01

    Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  12. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
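
    For context, using the conventional NHPP notation (not necessarily the paper's): an NHPP-based SRM with expected total fault content \omega and fault-detection time distribution F(t) has mean value function m(t) = \omega F(t). The equilibrium-distribution idea replaces F by its equilibrium transform, which exists whenever F has finite mean \mu:

      \[ F_e(t) = \frac{1}{\mu}\int_0^t \bigl(1 - F(x)\bigr)\,dx, \qquad \mu = \int_0^\infty \bigl(1 - F(x)\bigr)\,dx, \qquad m_e(t) = \omega F_e(t). \]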

  13. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  14. Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines.

    PubMed

    Zhou, Lu-Jie; Cao, Qing-Gui; Yu, Kai; Wang, Lin-Lin; Wang, Hai-Bin

    2018-04-26

    This paper studies occupational safety and health management methods and risk control technology for the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, and early warning and dynamic monitoring of risks. A browser/server-mode software package, the Coal Mine Occupational Safety and Health Management and Risk Control System (Geting Coal Mine, Jining, Shandong, China), is developed to attain these objectives, namely promoting coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. The practical effectiveness of applying this software package to coal mining, and the associated application pattern, are analyzed. The study indicates that the developed management and risk control technology and the associated software can support occupational safety and health management efforts in coal mines in a standardized and effective manner, and can control accident risks scientifically and effectively. Effective implementation can further improve the coal mine occupational safety and health management mechanism and further enhance risk management approaches. Implementation also indicates that the technology rests on a benign cycle of dynamic feedback and scientific development, which provides reliable assurance for the safe operation of coal mines.

  15. Research on Occupational Safety, Health Management and Risk Control Technology in Coal Mines

    PubMed Central

    Zhou, Lu-jie; Cao, Qing-gui; Yu, Kai; Wang, Lin-lin; Wang, Hai-bin

    2018-01-01

    This paper studies occupational safety and health management methods and risk control technology for the coal mining industry, including daily management of occupational safety and health, identification and assessment of risks, and early warning and dynamic monitoring of risks. A browser/server-mode software package, the Coal Mine Occupational Safety and Health Management and Risk Control System (Geting Coal Mine, Jining, Shandong, China), is developed to attain these objectives, namely promoting coal mine occupational safety and health management based on early warning and dynamic monitoring of risks. The practical effectiveness of applying this software package to coal mining, and the associated application pattern, are analyzed. The study indicates that the developed management and risk control technology and the associated software can support occupational safety and health management efforts in coal mines in a standardized and effective manner, and can control accident risks scientifically and effectively. Effective implementation can further improve the coal mine occupational safety and health management mechanism and further enhance risk management approaches. Implementation also indicates that the technology rests on a benign cycle of dynamic feedback and scientific development, which provides reliable assurance for the safe operation of coal mines. PMID:29701715

  16. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Technical Reports Server (NTRS)

    Erickson, J.; Goode, R.; Grimm, K.; Hess, C.; Norsworthy, R.; Anderson, G.; Merkel, L.; Phinney, D.

    1992-01-01

    The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive real-time behavior requirements, are discussed.

  17. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Astrophysics Data System (ADS)

    Erickson, Jon D.; Goode, R.; Grimm, K. A.; Hess, Clifford W.; Norsworthy, Robert S.; Anderson, Greg D.; Merkel, L.; Phinney, Dale E.

    1992-03-01

    The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the space station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive real-time behavior requirements, are discussed.

  18. Estimation and enhancement of real-time software reliability through mutation analysis

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.

    1992-01-01

    A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
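
    The Petri-net machinery of the paper is not reproduced here; as a hedged sketch of the underlying mutation idea, the example below seeds a single hand-written mutant into a toy function and checks whether a test suite "kills" it (real mutation systems generate many mutants automatically).

      def clamp(x, lo, hi):
          """Original: limit x to the closed interval [lo, hi]."""
          return max(lo, min(x, hi))

      def clamp_mutant(x, lo, hi):
          """Mutant: operator change (min -> max) seeded by hand."""
          return max(lo, max(x, hi))

      def suite_passes(f):
          """The suite is adequate for this mutant if some case fails on it."""
          cases = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]
          return all(f(*args) == want for args, want in cases)

      assert suite_passes(clamp)            # original passes all cases
      assert not suite_passes(clamp_mutant) # mutant is killed: the suite detects this fault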

  19. Information Technology and the Autonomous Control of a Mars In-Situ Propellant Production System

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Sridhar, K. R.; Larson, William E.; Clancy, Daniel J.; Peschur, Charles; Briggs, Geoffrey A.; Zornetzer, Steven F. (Technical Monitor)

    1999-01-01

    With the rapidly increasing performance of information technology, i.e., computer hardware and software systems, as well as networks and communication systems, a new capability is being developed that holds the clear promise of greatly increased exploration capability, along with dramatically reduced design, development, and operating costs. These new intelligent systems technologies, utilizing knowledge-based software and very high performance computer systems, will provide new design and development tools, scheduling mechanisms, and vehicle and system health monitoring capabilities. In addition, specific technologies such as neural nets will provide a degree of machine intelligence and associated autonomy which has previously been unavailable to the mission and spacecraft designer and to the system operator. One of the most promising applications of these new information technologies is to the area of in situ resource utilization. Useful resources such as oxygen, compressed carbon dioxide, water, methane, and buffer gases can be extracted and/or generated from planetary atmospheres, such as the Martian atmosphere. These products, when used for propulsion and life-support needs, can provide significant savings in the launch mass and costs for both robotic and crewed missions. In the longer term, the utilization of indigenous resources is an enabling technology that is vital to sustaining long duration human presence on Mars. This paper will present the concepts that are currently under investigation and development for mining the Martian atmosphere, such as temperature-swing adsorption and zirconia electrolysis, to create propellants and life-support materials. This description will be followed by an analysis of the information technology and control needs for the reliable and autonomous operation of such processing plants in a fault-tolerant manner, as well as the approach being taken for the development of the controlling software. Finally, there will be a brief discussion of the verification and validation process so crucial to the implementation of mission-critical software.

  20. Virtually Out of This World!

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Ames Research Center granted Reality Capture Technologies (RCT), Inc., a license to further develop NASA's Mars Map software platform. The company incorporated NASA's innovation into software that uses the Virtual Plant Model (VPM)(TM) to structure, modify, and implement the construction sites of industrial facilities, as well as to develop, validate, and train operators on procedures. The VPM orchestrates the exchange of information between engineering, production, and business transaction systems. This enables users to simulate, control, and optimize work processes while increasing the reliability of critical business decisions. Engineers can complete the construction process and test various aspects of it in virtual reality before building the actual structure. With virtual access to and simulation of the construction site, project personnel can manage, control, and respond to changes on complex construction projects more effectively. Engineers can also create operating procedures, training, and documentation. Virtual Plant Model(TM) is a trademark of Reality Capture Technologies, Inc.

  1. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and demonstrate its effectiveness using the publicly available benchmark data sets.
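
    To make the count-regression component concrete: a plain maximum-likelihood Poisson regression (the baseline such models build on, not the paper's Bayesian max-margin model) maximizes sum_i [y_i x_i^T w - exp(x_i^T w)]. A minimal numpy sketch with invented toy data:

      import numpy as np

      def fit_poisson_regression(X, y, lr=0.05, steps=2000):
          """Gradient ascent on the (averaged) Poisson log-likelihood."""
          w = np.zeros(X.shape[1])
          for _ in range(steps):
              mu = np.exp(X @ w)                  # predicted defect rates
              w += lr * X.T @ (y - mu) / len(y)   # grad of mean(y*Xw - exp(Xw))
          return w

      # Toy data: one software metric (plus bias column) vs. defect counts.
      rng = np.random.default_rng(0)
      z = rng.normal(size=200)
      X = np.column_stack([np.ones(200), z])
      y = rng.poisson(np.exp(0.5 + 0.8 * z))
      print(fit_poisson_regression(X, y))         # roughly [0.5, 0.8]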

  2. Look-ahead Dynamic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    The look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.

  3. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  4. The probability estimation of the electronic lesson implementation taking into account software reliability

    NASA Astrophysics Data System (ADS)

    Gurov, V. V.

    2017-01-01

    Software tools for educational purposes, such as e-lessons and computer-based testing systems, have a number of reliability-related features. Chief among them are the need to ensure a sufficiently high probability of faultless operation for a specified time, and the impossibility of rapid recovery during class by replacing a failed program with an equivalent running one. The article considers the peculiarities of evaluating the reliability of programs, in contrast to assessments of hardware reliability, and gives the essential requirements for the reliability of software used to conduct practical and laboratory classes in the form of computer-based training programs. A mathematical tool based on Markov chains is presented which allows one to determine, by applying the graph of software module interactions, the degree of debugging the training program requires before use in the educational process.
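
    A minimal sketch of the Markov-chain idea (the absorbing-chain structure is standard; the module graph and transition probabilities below are invented for illustration): treat the modules of the training program as transient states, add absorbing "success" and "failure" states, and compute the probability of a faultless session from the module-interaction graph.

      import numpy as np

      # States 0..2 are modules of the training program; 3 = success, 4 = failure.
      # P[i, j] = probability control passes from state i to state j (illustrative).
      P = np.array([
          [0.0, 0.7, 0.2, 0.0, 0.1],    # module 0
          [0.0, 0.0, 0.8, 0.1, 0.1],    # module 1
          [0.0, 0.0, 0.0, 0.95, 0.05],  # module 2
          [0.0, 0.0, 0.0, 1.0, 0.0],    # success (absorbing)
          [0.0, 0.0, 0.0, 0.0, 1.0],    # failure (absorbing)
      ])

      Q, R = P[:3, :3], P[:3, 3:]        # transient and absorbing blocks
      N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
      B = N @ R                          # absorption probabilities
      print("P(faultless session from module 0):", B[0, 0])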

  5. Scientific Research Program for Power, Energy, and Thermal Technologies. Task Order 0002: Power, Thermal and Control Technologies and Processes Experimental Research. Subtask: Laboratory Test Set-up to Evaluate Electromechanical Actuation Systems for Aircraft Flight Control

    DTIC Science & Technology

    2015-08-01

    faults are incorporated into the system in order to better understand the EMA reliability, and to aid in designing fault detection software for real...to a fixed angle repeatedly and accurately [16]. The motor in the EHA is used to drive a reversible pump tied to a hydraulic cylinder which moves...24] [25] [26]. These test stands are used for the prognostic testing of EMAS that have had mechanical or electrical faults injected into them. The

  6. Applying New Network Security Technologies to SCADA Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, Steven A; Stamp, Jason Edwin; Duggan, David P

    2006-11-01

    Supervisory Control and Data Acquisition (SCADA) systems for automation are very important for critical infrastructure and manufacturing operations. They have been implemented to work in a number of physical environments using a variety of hardware, software, networking protocols, and communications technologies, often before security issues became of paramount concern. To offer solutions to security shortcomings in the short/medium term, this project was to identify technologies used to secure "traditional" IT networks and systems, and then assess their efficacy with respect to SCADA systems. These proposed solutions must be relatively simple to implement, reliable, and acceptable to SCADA owners and operators.

  7. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failure can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

  8. A Model for Assessing the Liability of Seemingly Correct Software

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.

    1991-01-01

    Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
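
    For reference, the probable-correctness component can be stated in its standard form (the paper's risk model combines it with testing results and liability considerations beyond this): if the software survives $N$ independent tests drawn from its operational profile, the confidence that its true failure probability is at most $\theta$ satisfies

      \[ C = 1 - (1 - \theta)^N, \]

    so, for example, about N = 4600 failure-free tests are needed for 99% confidence that the failure probability is below 10^{-3}.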

  9. Interactive data collection: benefits of integrating new media into pediatric research.

    PubMed

    Kennedy, Christine; Charlesworth, Annemarie; Chen, Jyu-Lin

    2003-01-01

    Despite the prevalence of children's computerized games for recreational and educational purposes, the use of interactive technology to obtain pediatric research data remains underexplored. This article describes the development of laptop interactive data collection (IDC) software for a children's health intervention study. The IDC integrates computer technology, children's developmental needs, and quantitative research methods that are engaging for school-age children as well as reliable and efficient for the pediatric health researcher. Using this methodology, researchers can address common problems such as maintaining a child's attention throughout an assessment session while potentially increasing their response rate and reducing missing data rates. The IDC also promises to produce more reliable data by eliminating the need for manual double entry of data and reducing much of the time and costs associated with data cleaning and management. Development and design considerations and recommendations for further use are discussed.

  10. Evaluating a technical university's placement test using the Rasch measurement model

    NASA Astrophysics Data System (ADS)

    Salleh, Tuan Salwani; Bakri, Norhayati; Zin, Zalhan Mohd

    2016-10-01

    This study discusses the process of validating a mathematics placement test at a technical university. The main objective is to produce a valid and reliable test to measure students' prerequisite knowledge for learning engineering technology mathematics. It is crucial to have a valid and reliable test, as the results are used in a critical decision: assigning students to different groups of Technical Mathematics 1. The placement test, which consists of 50 mathematics questions, was administered to 82 new diploma students in engineering technology at a technical university. The study employed the Rasch measurement model to analyze the data through the Winsteps software. The results revealed ten test questions whose difficulty lies below the ability of the less able students. Nevertheless, all ten questions satisfied the standard infit and outfit values, so all of them can be reused in future placement tests at the technical university.
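
    For readers outside psychometrics, the dichotomous Rasch model underlying the Winsteps analysis gives the probability that a student of ability \theta answers an item of difficulty b correctly as

      \[ \Pr(X = 1 \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}, \]

    so a question lies "below" a student's ability when b < \theta, making the success probability exceed one half; the infit and outfit statistics quoted above are mean-square residuals computed against these expected probabilities.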

  11. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  12. Logical optical line terminal technologies towards flexible and highly reliable metro- and access-integrated networks

    NASA Astrophysics Data System (ADS)

    Okamoto, Satoru; Sato, Takehiro; Yamanaka, Naoaki

    2017-01-01

    In this paper, flexible and highly reliable metro- and access-integrated networks based on network virtualization and software-defined networking technologies are presented. Logical optical line terminal (L-OLT) technologies and active optical distribution networks (ODNs) are the key to introducing flexibility and high reliability into metro- and access-integrated networks. In the Elastic Lambda Aggregation Network (EλAN) project, started in 2012, the concept of a programmable optical line terminal (P-OLT) was proposed. The role of the P-OLT is to provide, from a single OLT box, multiple network services that have different protocols and quality-of-service requirements. Accommodated services include Internet access, mobile front-haul/back-haul, data-center access, and leased lines. L-OLTs are configured within the P-OLT box to support the functions required for each network service. Multiple P-OLTs and programmable optical network units (P-ONUs) are connected by the active ODN. Optical access paths with flexible capacity are set up on the ODN to provide network services from an L-OLT to logical ONUs (L-ONUs). Because the L-OLT-to-L-ONU path on the active ODN provides a logical connection, virtualization technologies become applicable. One example is moving an L-OLT from one P-OLT to another like a virtual machine; this movement is called L-OLT migration. L-OLT migration provides flexible and reliable network functions, such as energy saving by aggregating L-OLTs onto a limited number of P-OLTs, and network-wide optical access path restoration. Other L-OLT virtualization technologies and experimental results are also discussed in the paper.

  13. Evaluation of software sensors for on-line estimation of culture conditions in an Escherichia coli cultivation expressing a recombinant protein.

    PubMed

    Warth, Benedikt; Rajkai, György; Mandenius, Carl-Fredrik

    2010-05-03

    Software sensors for monitoring and on-line estimation of critical bioprocess variables have mainly been used with standard bioreactor sensors, such as electrodes and gas analyzers, where algorithms in the software model have generated the desired state variables. In this article we propose that other on-line instruments, such as NIR probes and on-line HPLC, should be used to make more reliable and flexible software sensors. Five software sensor architectures were compared and evaluated: (1) biomass concentration from an on-line NIR probe, (2) biomass concentration from titrant addition, (3) specific growth rate from titrant addition, (4) specific growth rate from the NIR probe, and (5) specific substrate uptake rate and by-product rate from on-line HPLC and NIR probe signals. The software sensors were demonstrated on an Escherichia coli cultivation expressing a recombinant protein, green fluorescent protein (GFP), but the results could be extrapolated to other production organisms and product proteins. We conclude that well-maintained on-line instrumentation (hardware sensors) can increase the potential of software sensors. This would also strongly support the intentions with process analytical technology and quality-by-design concepts. 2010 Elsevier B.V. All rights reserved.

  14. Reliable data storage system design and implementation for acoustic logging while drilling

    NASA Astrophysics Data System (ADS)

    Hao, Xiaolong; Ju, Xiaodong; Wu, Xiling; Lu, Junqiang; Men, Baiyong; Yao, Yongchao; Liu, Dong

    2016-12-01

    Owing to the limitations of real-time transmission, reliable downhole data storage and fast ground reading have become key technologies in developing tools for acoustic logging while drilling (LWD). In order to improve the reliability of the downhole storage system in conditions of high temperature, intensive shake and periodic power supply, improvements were made in terms of hardware and software. In hardware, we integrated the storage system and data acquisition control module into one circuit board, to reduce the complexity of the storage process, by adopting the controller combination of digital signal processor and field programmable gate array. In software, we developed a systematic management strategy for reliable storage. Multiple-backup independent storage was employed to increase the data redundancy. A traditional error checking and correction (ECC) algorithm was improved, and we embedded the calculated ECC code into all management data and waveform data. A real-time storage algorithm for arbitrary-length data was designed to actively preserve the storage scene and ensure the independence of the stored data. The recovery procedure for management data was optimized to realize reliable self-recovery. A new bad block management scheme of static block replacement and dynamic page marking was proposed to make the period of data acquisition and storage more balanced. In addition, we developed a portable ground data reading module based on a new reliable high-speed bus-to-Ethernet interface to achieve fast reading of the logging data. Experiments have shown that this system can work stably below 155 °C with a periodic power supply. The effective ground data reading rate reaches 1.375 Mbps with a 99.7% one-time success rate at room temperature. This work is of significant practical value in improving the reliability and field efficiency of acoustic LWD tools.
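
    The paper's improved ECC algorithm is not reproduced here; as a hedged illustration of the general principle of embedding check bits alongside stored data, a textbook Hamming(7,4) single-error-correcting code in Python:

      def hamming74_encode(d):
          """Encode 4 data bits (list of 0/1) into 7 bits with 3 parity bits."""
          d1, d2, d3, d4 = d
          p1 = d1 ^ d2 ^ d4
          p2 = d1 ^ d3 ^ d4
          p3 = d2 ^ d3 ^ d4
          return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

      def hamming74_correct(c):
          """Locate and flip a single corrupted bit via the parity syndrome."""
          s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
          s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
          s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
          pos = s1 + 2 * s2 + 4 * s3      # 0 means no error detected
          if pos:
              c[pos - 1] ^= 1
          return c

      code = hamming74_encode([1, 0, 1, 1])
      code[4] ^= 1                        # simulate a single bit flip in storage
      assert hamming74_correct(code) == hamming74_encode([1, 0, 1, 1])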

  15. Digital Sensor Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Ken D.; Quinn, Edward L.; Mauck, Jerry L.

    The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy and reliability. This paper, which refers to a final report issued in 2013, demonstrates these benefits in direct comparisons of digital and analog sensor applications. Improved accuracy results from the superior operating characteristics of digital sensors. These include improvements in sensor accuracy and drift and other related parameters which reduce total loop uncertainty and thereby increase safety and operating margins. An example instrument loop uncertainty calculation for a pressure sensor application is presented to illustrate these improvements. This is a side-by-side comparison of the instrument loop uncertainty for both an analog and a digital sensor in the same pressure measurement application. Similarly, improved sensor reliability is illustrated with a sample calculation for determining the probability of failure on demand, an industry standard reliability measure. This looks at equivalent analog and digital temperature sensors to draw the comparison. The results confirm substantial reliability improvement with the digital sensor, due in large part to ability to continuously monitor the health of a digital sensor such that problems can be immediately identified and corrected. This greatly reduces the likelihood of a latent failure condition of the sensor at the time of a design basis event. Notwithstanding the benefits of digital sensors, there are certain qualification issues that are inherent with digital technology and these are described in the report. One major qualification impediment for digital sensor implementation is software common cause failure (SCCF).
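
    The loop calculation referred to above conventionally combines independent random error terms by square-root-sum-of-squares (SRSS); schematically, with a generic term list rather than the report's exact one,

      \[ U_{\mathrm{loop}} = \pm\sqrt{A^2 + D^2 + M^2 + \cdots} + |B|, \]

    where A is the sensor reference accuracy, D its drift allowance, M measurement-and-test-equipment effects, and B any bias combined algebraically. A digital sensor reduces A and D and eliminates analog signal-conditioning terms, so U_loop shrinks and the available safety and operating margin grows.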

  16. Infusing Reliability Techniques into Software Safety Analysis

    NASA Technical Reports Server (NTRS)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  17. Reliability Analysis and Optimal Release Problem Considering Maintenance Time of Software Components for an Embedded OSS Porting Phase

    NASA Astrophysics Data System (ADS)

    Tamura, Yoshinobu; Yamada, Shigeru

    OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS, and we compare the proposed hazard rate model with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.

  18. Software Technology for Adaptable, Reliable Systems (STARS) Technical Program Plan,

    DTIC Science & Technology

    1986-08-06

    Shadow projects that are being considered by the SEI (from Figure 4-2, Candidate FY 1986 Shadow Projects), listed by service, project, and company: Army: FATDS (Magnavox), CSS (to be determined), PLRS (Hughes); Navy: BSY-I (IBM), ISCS (Rockwell), ACDS (Hughes); Air Force: Simulators (Boeing), CSSR (GTE), ATF (Grumman), MILSTAR (Lockheed). ... Journal of the Armed Forces Communications and Electronic Association, September 1985. 11. Schill, J., Smeaton, R., and Jackman, R., The Conversion of

  19. Software Technology for Adaptable Reliable Systems (STARS) Workshop March 24-27 1986.

    DTIC Science & Technology

    1986-03-01

    syntax is augmented to accept design notes in arbitrary ... Trace and single-step facilities will provide the capability to monitor program behavior ... capabilities of these workstations make them a logical choice for hosting a visual development environment. The final component of Vise is the ... the following capabilities: graphical program display and ... When the user picks the desired action, linguistic analysis is used to extract informa

  20. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, Herb

    2017-01-01

    Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data throughput capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/Radio Frequency (RF) components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion, and demodulation. These innovations have reduced the cost of transceivers, decreased power requirements, and brought a commensurate reduction in volume. An additional payoff is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic, with no change of analog hardware required, all of which can ultimately be accomplished in orbit.

  1. Inter- and intrarater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings.

    PubMed

    Singendonk, M M J; Smits, M J; Heijting, I E; van Wijk, M P; Nurko, S; Rosen, R; Weijenborg, P W; Abu-Assi, R; Hoekman, D R; Kuizenga-Wessel, S; Seiboth, G; Benninga, M A; Omari, T I; Kritas, S

    2015-02-01

    The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. Application of this adult-based algorithm to the pediatric population is unknown. We therefore assessed intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort. Thirty pediatric solid state HRM recordings (13M; mean age 12.1 ± 5.1 years) assessing 10 liquid swallows per patient were analyzed twice by 11 raters (six experts, five non-experts). Software-placed anatomical landmarks required manual adjustment or removal. Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), contractile front velocity (CFV), distal latency (DL) and break size (BS), and an overall CC diagnosis were software-generated. In addition, raters provided their subjective CC diagnosis. Reliability was calculated with Cohen's and Fleiss' kappa (κ) and intraclass correlation coefficient (ICC). Intra- and interrater reliability of software-generated CC diagnosis after manual adjustment of landmarks was substantial (mean κ = 0.69 and 0.77 respectively) and moderate-substantial for subjective CC diagnosis (mean κ = 0.70 and 0.58 respectively). Reliability of both software-generated and subjective diagnosis of normal motility was high (κ = 0.81 and κ = 0.79). Intra- and interrater reliability were excellent for IRP4s, DCI, and BS. Experts had higher interrater reliability than non-experts for DL (ICC = 0.65 vs ICC = 0.36 respectively) and the software-generated diagnosis diffuse esophageal spasm (DES, κ = 0.64 vs κ = 0.30). Among experts, the reliability for the subjective diagnosis of achalasia and esophageal gastric junction outflow obstruction was moderate-substantial (κ = 0.45-0.82). Inter- and intrarater reliability of software-based CC diagnosis of pediatric HRM recordings was high overall. However, experience was a factor influencing the diagnosis of some motility disorders, particularly DES and achalasia. © 2014 John Wiley & Sons Ltd.
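
    The agreement statistics reported here are standard; for two raters, Cohen's kappa corrects the observed agreement for agreement expected by chance, as in this small sketch (the labels are toy values, not study data):

      from collections import Counter

      def cohens_kappa(a, b):
          """kappa = (p_o - p_e) / (1 - p_e) for two raters' categorical labels."""
          n = len(a)
          p_o = sum(x == y for x, y in zip(a, b)) / n    # observed agreement
          ca, cb = Counter(a), Counter(b)
          p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2  # chance agreement
          return (p_o - p_e) / (1 - p_e)

      rater1 = ["normal", "DES", "normal", "achalasia", "normal", "DES"]
      rater2 = ["normal", "DES", "normal", "DES",       "normal", "normal"]
      print(round(cohens_kappa(rater1, rater2), 2))      # -> 0.4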

  2. 76 FR 28819 - NUREG/CR-XXXX, Development of Quantitative Software Reliability Models for Digital Protection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., ``Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...

  3. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Z. J.; Wells, D.; Green, J.

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time, manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software also could be operated in browser/server mode, which gives the possibility to use it anywhere the internet is accessible. By switching the nuclide library and the related formula behind, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.

  4. The Software Correlator of the Chinese VLBI Network

    NASA Technical Reports Server (NTRS)

    Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli

    2010-01-01

    The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For geodetic or astronomical observations, a wide-band 10-station correlator is needed. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-parallel technology and runs on SMP (Symmetric Multiple Processor) servers. Both correlators are characterized by flexible structure and scalability.
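
    At its core, an FX-style software correlator channelizes each station's data stream with an FFT and accumulates the product of one spectrum with the conjugate of the other; the geometric delay then appears as a peak in the lag domain. A minimal two-station numpy sketch (illustrative only, far simpler than the CVN correlator):

      import numpy as np

      def fx_correlate(x, y, nchan=256):
          """FX correlation: FFT each segment, average X * conj(Y) cross-spectra."""
          nseg = min(len(x), len(y)) // nchan
          acc = np.zeros(nchan, dtype=complex)
          for k in range(nseg):
              X = np.fft.fft(x[k * nchan:(k + 1) * nchan])
              Y = np.fft.fft(y[k * nchan:(k + 1) * nchan])
              acc += X * np.conj(Y)
          return acc / nseg

      rng = np.random.default_rng(0)
      n = 65536
      s = rng.normal(size=n + 3)                 # common wavefront
      a = s[3:] + 0.5 * rng.normal(size=n)       # station A: signal arrives 3 samples early
      b = s[:n] + 0.5 * rng.normal(size=n)       # station B
      lags = np.fft.ifft(fx_correlate(b, a))     # peak index = A's lead over B
      print("estimated delay (samples):", int(np.argmax(np.abs(lags))))  # -> 3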

  5. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.

  6. Spinoff 2011

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.

  7. Performance and reliability enhancement of linear coolers

    NASA Astrophysics Data System (ADS)

    Mai, M.; Rühlich, I.; Schreiter, A.; Zehner, S.

    2010-04-01

    Highest efficiency is a crucial requirement for modern tactical IR cryocooling systems. To enhance overall efficiency, AIM cryocooler designs were reassessed considering all relevant loss mechanisms and associated components. The investigation was based on state-of-the-art simulation software featuring magnet circuitry analysis as well as computational fluid dynamics (CFD) to realistically replicate thermodynamic interactions. As a result, an improved design for AIM linear coolers was derived. This paper gives an overview of the performance enhancement activities and major results. An additional key requirement for cryocoolers is reliability. AIM has recently introduced linear coolers with full flexure bearing suspension on both ends of the driving mechanism, incorporating a moving-magnet piston drive. In conjunction with a pulse-tube coldfinger, these coolers are capable of meeting MTTFs (Mean Time To Failure) in excess of 50,000 hours, offering superior reliability for space applications. Ongoing development also focuses on reliability enhancement, deriving tactical solutions from space technology to combine excellent specific performance with space-grade reliability. This publication summarizes the progress of the reliability program and gives further prospects.

  8. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    NASA Astrophysics Data System (ADS)

    Sun, Z. J.; Wells, D.; Segebade, C.; Green, J.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time, manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software also could be operated in browser/server mode, which gives the possibility to use it anywhere the internet is accessible. By switching the nuclide library and the related formula behind, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.

  9. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help the users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of the software development is increased, the reliability of the software and the operation safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  10. Preoperative Planning of Orthopedic Procedures using Digitalized Software Systems.

    PubMed

    Steinberg, Ely L; Segev, Eitan; Drexler, Michael; Ben-Tov, Tomer; Nimrod, Snir

    2016-06-01

    The progression from standard celluloid films to digitalized technology led to the development of new software programs to fulfill the needs of preoperative planning. We describe here preoperative digitalized programs and the variety of conditions for which those programs can be used to facilitate preparation for surgery. A PubMed search using the keywords "digitalized software programs," "preoperative planning" and "total joint arthroplasty" was performed for all studies regarding preoperative planning of orthopedic procedures that were published from 1989 to 2014 in English. Digitalized software programs can import and export all picture archiving and communication system (PACS) files (i.e., X-rays, computerized tomograms, magnetic resonance images) from either the local workstation or from any remote PACS. Two-dimensional (2D) and 3D CT scans were found to be reliable tools with high preoperative accuracy in predicting implant size. The short learning curve, user-friendly features, accurate prediction of implant size, decreased implant stock, and low-cost maintenance make digitalized software programs an attractive tool in preoperative planning of total joint replacement, fracture fixation, limb deformity repair and pediatric skeletal disorders.

  11. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software, and a software and hardware system depends in turn on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can be measured accordingly. This paper therefore initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use issued by the relevant agencies. The enumeration model is obtained from a software reliability calculation.
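
    The enumeration model itself is not reproduced in the abstract. A common graph-based reading, sketched here in Python under that assumption, composes per-step reliabilities serially (all steps must succeed) and in parallel (any redundant path suffices):

        from functools import reduce

        def serial(*reliabilities):
            """All steps must succeed: reliabilities multiply."""
            return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

        def parallel(*reliabilities):
            """Any redundant path may succeed: complement of all paths failing."""
            return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

        # Hypothetical interface procedure: login -> (primary OR backup form) -> submit.
        print(serial(0.999, parallel(0.95, 0.90), 0.99))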

  12. Intelligent pump test system based on virtual instrument

    NASA Astrophysics Data System (ADS)

    Ma, Jungong; Wang, Shifu; Wang, Zhanlin

    2003-09-01

    The intelligent pump system is a key component of the aircraft hydraulic system and can address problems such as sharply increasing temperature. Because the performance of the intelligent pump directly determines that of the aircraft hydraulic system and strongly affects flight safety and reliability, it is important to test all of the pump's performance parameters during design and development, and advanced, reliable and complete test equipment is necessary to achieve that goal. In this paper, the application of virtual instrument and computer network technology to aircraft intelligent pump testing is presented. The hardware, software and hydraulic circuit of this system are designed and implemented.

  13. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…

  14. Hardware and software reliability estimation using simulations

    NASA Technical Reports Server (NTRS)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
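
    As a minimal illustration of associating a reliability number with simulated runs (not the paper's simulator), the failure probability of a model can be estimated by Monte Carlo with a normal-approximation confidence interval:

        import math, random

        def estimate_failure_probability(run_once, trials=10_000, z=1.96):
            """run_once() simulates one execution and returns True on failure.
            Returns the point estimate and a normal-approximation 95% interval."""
            failures = sum(run_once() for _ in range(trials))
            p = failures / trials
            half = z * math.sqrt(p * (1 - p) / trials)
            return p, (max(0.0, p - half), min(1.0, p + half))

        # Toy stand-in for a simulated embedded-processor run: fails 0.3% of the time.
        print(estimate_failure_probability(lambda: random.random() < 0.003))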

  15. Assessment of a spectral domain OCT segmentation software in a retrospective cohort study of exudative AMD patients.

    PubMed

    Tilleul, Julien; Querques, Giuseppe; Canoui-Poitrine, Florence; Leveziel, Nicolas; Souied, Eric H

    2013-01-01

    To assess the ability of the Spectralis optical coherence tomography (OCT) segmentation software to identify the inner limiting membrane and Bruch's membrane in exudative age-related macular degeneration (AMD) patients. Thirty-eight eyes of 38 treatment-naive exudative AMD patients were retrospectively included. All had a complete ophthalmologic examination, including Spectralis OCT, at baseline and at months 1 and 2. Reliability of the segmentation software was assessed by 2 ophthalmologists and was defined as good if both the inner limiting membrane and Bruch's membrane were correctly drawn. A total of 38 patient charts were reviewed (114 scans). The inner limiting membrane was correctly drawn by the segmentation software in 114/114 spectral domain OCT scans (100%). Conversely, Bruch's membrane was correctly drawn in 59/114 scans (51.8%). The software was less reliable in locating Bruch's membrane in eyes with pigment epithelium detachment (PED) than in those without PED (42.5 vs. 73.5%, respectively; p = 0.049), but its reliability was not associated with SRF or CME (p = 0.55 and p = 0.10, respectively). Segmentation of the inner limiting membrane was consistently trustworthy, but Bruch's membrane segmentation was poorly reliable using the automatic Spectralis segmentation software. Based on this software, evaluation of retinal thickness may be incorrect, particularly in the presence of PED, an important parameter that is not captured when measuring retinal thickness. Copyright © 2012 S. Karger AG, Basel.

  16. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
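
    Of the voting schemes evaluated, plain consensus (majority) voting is the simplest to sketch; a toy Python voter over N version outputs, illustrative only:

        from collections import Counter

        def consensus_vote(outputs):
            """Return the most common output among N version outputs.
            With no majority agreement, flag the result as untrusted (None)."""
            value, count = Counter(outputs).most_common(1)[0]
            return value if count > len(outputs) // 2 else None

        print(consensus_vote([42, 42, 41]))  # -> 42 (2-of-3 agree)
        print(consensus_vote([40, 41, 42]))  # -> None (no majority)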

  17. A novel Laser Ion Mobility Spectrometer

    NASA Astrophysics Data System (ADS)

    Göbel, J.; Kessler, M.; Langmeier, A.

    2009-05-01

    IMS is a well-known technology in the range of security-based applications. Its main advantages lie in the simplicity of measurement, along with a fast and sensitive detection method. Contemporary technology often fails due to interfering substances, in conjunction with saturation effects and a low dynamic detection range. High-throughput facilities, such as airports, require the analysis of many samples at low detection limits within a very short timeframe. High detection reliability is a requirement for safe and secure operation. In our present work we developed a laser-based ion-mobility sensor which shows several advantages over known IMS sensor technology. The goal of our research was to increase the sensitivity beyond the range of 63Ni-based instruments. This was achieved with an optimised geometric drift tube design and a pulsed UV laser system at an efficient intensity. In this intensity range multi-photon ionisation is possible, which leads to higher selectivity in the ion-formation process itself. After high-speed capture of detection samples, a custom-designed pattern recognition software toolbox provides reliable auto-detection capability with a learning algorithm and a graphical user interface.

  18. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are: interface standards for commercial off-the-shelf products, to aid the development of integrated facilities; an enhanced automated code generation system tightly coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low-cost avionics; cost estimation and benefits; computer-aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.

  19. Design of Measure and Control System for Precision Pesticide Deploying Dynamic Simulating Device

    NASA Astrophysics Data System (ADS)

    Liang, Yong; Liu, Pingzeng; Wang, Lu; Liu, Jiping; Wang, Lang; Han, Lei; Yang, Xinxin

    A measure and control system for a precision pesticide-deployment dynamic simulating device is designed in order to study pesticide deployment technology. The system can simulate every state of practical pesticide deployment and can precisely and simultaneously measure every factor affecting pesticide deployment effects. The hardware and software incorporate a modular structural design: the system is divided into distinct hardware and software function modules, and corresponding modules are developed for each. The modules' interfaces are uniformly defined, which simplifies module connection, enhances the system's universality, development efficiency and reliability, and makes the program easy to extend and maintain. Several of the hardware and software modules can readily be adapted to other measure and control systems. The paper introduces the design of the special numerical control system, the main module of the information acquisition system, and the speed acquisition module in order to explain the module design process.

  20. GLOBECOM '84 - Global Telecommunications Conference, Atlanta, GA, November 26-29, 1984, Conference Record. Volume 1

    NASA Astrophysics Data System (ADS)

    The subjects discussed are related to LSI/VLSI based subscriber transmission and customer access for the Integrated Services Digital Network (ISDN), special applications of fiber optics, ISDN and competitive telecommunication services, technical preparations for the Geostationary-Satellite Orbit Conference, high-capacity statistical switching fabrics, networking and distributed systems software, adaptive arrays and cancelers, synchronization and tracking, speech processing, advances in communication terminals, full-color videotex, and a performance analysis of protocols. Advances in data communications are considered along with transmission network plans and progress, direct broadcast satellite systems, packet radio system aspects, radio-new and developing technologies and applications, the management of software quality, and Open Systems Interconnection (OSI) aspects of telematic services. Attention is given to personal computers and OSI, the role of software reliability measurement in information systems, and an active array antenna for the next-generation direct broadcast satellite.

  1. AST Critical Propulsion and Noise Reduction Technologies for Future Commercial Subsonic Engines Area of Interest 1.0: Reliable and Affordable Control Systems

    NASA Technical Reports Server (NTRS)

    Myers, William; Winter, Steve

    2006-01-01

    The General Electric Reliable and Affordable Controls effort under the NASA Advanced Subsonic Technology (AST) Program has designed, fabricated, and tested advanced controls hardware and software to reduce emissions and improve engine safety and reliability. The original effort consisted of four elements: 1) a Hydraulic Multiplexer; 2) Active Combustor Control; 3) a Variable Displacement Vane Pump (VDVP); and 4) Intelligent Engine Control. The VDVP and Intelligent Engine Control elements were cancelled due to funding constraints and are reported here only to the extent that they progressed. The Hydraulic Multiplexing element developed and tested a prototype which improves reliability by combining the functionality of up to 16 solenoids and servo-valves into one component with a single electrically powered force motor. The Active Combustor Control element developed intelligent staging and control strategies for low emission combustors. This included development and tests of a Controlled Pressure Fuel Nozzle for fuel sequencing, a Fuel Multiplexer for individual fuel cup metering, and model-based control logic. Both the Hydraulic Multiplexer and Controlled Pressure Fuel Nozzle system were cleared for engine test. The Fuel Multiplexer was cleared for combustor rig test, which must be followed by an engine test to achieve full maturation.

  2. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  3. Rotorcraft digital advanced avionics system (RODAAS) functional description

    NASA Technical Reports Server (NTRS)

    Peterson, E. M.; Bailey, J.; Mcmanus, T. J.

    1985-01-01

    A functional design of a rotorcraft digital advanced avionics system (RODAAS) to transfer the technology developed for general aviation in the Demonstration Advanced Avionics System (DAAS) program to rotorcraft operation was undertaken. The objective was to develop an integrated avionics system design that enhances rotorcraft single pilot IFR operations without increasing the required pilot training/experience by exploiting advanced technology in computers, busing, displays and integrated systems design. A key element of the avionics system is the functionally distributed architecture that has the potential for high reliability with low weight, power and cost. A functional description of the RODAAS hardware and software functions is presented.

  4. Real-time software failure characterization

    NASA Technical Reports Server (NTRS)

    Dunham, Janet R.; Finelli, George B.

    1990-01-01

    A series of studies aimed at characterizing the fundamentals of the software failure process has been undertaken as part of a NASA project on modeling the reliability of real-time aerospace vehicle software. An overview of these studies is provided, and the current study, an investigation of the reliability of aerospace vehicle guidance and control software, is examined. The study approach provides for the collection of life-cycle process data, and for the retention and evaluation of interim software life-cycle products.

  5. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves particular attention, since the estimated reliability of a system may increase or decrease depending on the parameters chosen. It is therefore necessary to identify the factors that most heavily affect system reliability. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); cost, time and human effort can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness, and many possibilities exist for applying soft computing techniques to problems in medicine: clinical medicine makes significant use of fuzzy logic and neural network methodologies, while basic medical science most frequently uses combined neural-network and genetic-algorithm approaches, and medical scientists have shown strong interest in applying soft computing in genetics, physiology, radiology, cardiology and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these techniques, assesses them for reliability prediction, and discusses the parameters considered when estimating and predicting reliability. The study can be applied to estimating and predicting the reliability of instruments used in medical systems, as well as in software, computer and mechanical engineering; the concepts apply to both software and hardware reliability prediction using CBSE.
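
    The paper surveys these techniques rather than giving code. As one concrete instance, a bare-bones particle swarm (PSO) can fit the parameters of an exponential reliability-growth curve m(t) = a(1 - e^(-bt)) to cumulative failure counts; the model choice and the data below are illustrative assumptions, not taken from the paper.

        import math
        import random

        # Illustrative cumulative failure counts observed at times 1..8 (made-up data).
        ts = list(range(1, 9))
        ys = [12, 21, 28, 33, 37, 40, 42, 43]

        def sse(params):
            """Sum of squared errors of m(t) = a*(1 - exp(-b*t)) against the data."""
            a, b = params
            if a <= 0 or b <= 0:
                return float("inf")
            return sum((y - a * (1 - math.exp(-b * t))) ** 2 for t, y in zip(ts, ys))

        def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm: inertia w, cognitive c1, social c2."""
            dim = len(bounds)
            xs = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
            vs = [[0.0] * dim for _ in range(n)]
            pbest = [x[:] for x in xs]
            gbest = min(pbest, key=f)
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        vs[i][d] = (w * vs[i][d]
                                    + c1 * random.random() * (pbest[i][d] - xs[i][d])
                                    + c2 * random.random() * (gbest[d] - xs[i][d]))
                        xs[i][d] += vs[i][d]
                    if f(xs[i]) < f(pbest[i]):
                        pbest[i] = xs[i][:]
                gbest = min(pbest + [gbest], key=f)
            return gbest

        a, b = pso(sse, [(1.0, 100.0), (0.01, 2.0)])
        print(f"a={a:.1f}, b={b:.3f}, SSE={sse((a, b)):.2f}")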

  6. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
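
    The dependent-failure effect can be shown with a toy simulation in the spirit of the model described: conditional on an input, versions fail independently with an input-dependent probability, and variation of that probability across inputs produces coincident failures. The numbers below are invented for illustration only.

        import random

        def majority_fails(theta, n_versions=3):
            """Conditional on input 'difficulty' theta, versions fail independently."""
            fails = sum(random.random() < theta for _ in range(n_versions))
            return fails > n_versions // 2

        def simulate(trials=200_000):
            sys_fail = 0
            marginal = 0.0
            for _ in range(trials):
                # Mostly easy inputs, a few hard ones (illustrative mixture).
                theta = 0.001 if random.random() < 0.99 else 0.5
                marginal += theta
                sys_fail += majority_fails(theta)
            p = marginal / trials                 # average single-version failure prob.
            p_indep = 3 * p**2 * (1 - p) + p**3   # 2-of-3 system failure if independent
            return sys_fail / trials, p_indep

        dep, indep = simulate()
        print(f"dependent model: ~{dep:.2e}; independence assumption: ~{indep:.2e}")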

  7. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and on the cost of the fault tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole, because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  8. Evaluation and Validation (E&V) Team Public Report. Volume 2.

    DTIC Science & Technology

    1985-11-30

    Byron, Countess of Lovelace. The Countess was an associate of Charles Babbage and is presumed to be the world’s first programmer (Barnes, 1982:2...by DDT&E, STARS, and AJPO personnel, as appropriate. 4.20.9 Focal Point The DDT&E and STARS focal points, respectively, are indicated below Charles K...Technology For Adaptable, Reliable Systems Major Charles W. Lillie from Headquarters Air Force Systems Command gave a presentation concerning Software For

  9. Education and the Asian Surge: A Comparison of the Education Systems in India and China

    DTIC Science & Technology

    2008-01-01

    countries similar to those that other researchers have faced. For instance, Bardhan (2003) notes that fewer reliability checks and internal consistency tests...with a critical mass to take advantage of the software outsourcing boom 2 According to UNESCO, although the definition of literacy may vary from one...need to be targeted. For instance, too much emphasis on the study of information technology to take advantage of the current outsourcing trends could

  10. DTD Creation for the Software Technology for Adaptable, Reliable Systems (STARS) Program

    DTIC Science & Technology

    1990-06-23

    developed to store documents in a format peculiar to the program's design. Editing the document became easy since word processors adjust all spacing and...descriptive markup may be output to a variety of devices ranging from high quality typography printers through laser printers...provision for non-SGML material, such as graphics, to be inserted in a document. For these reasons the Computer-Aided Acquisition and Logistics Support

  11. Strengthening National, Homeland, and Economic Security. Networking and Information Technology Research and Development Supplement to the President’s FY 2003 Budget

    DTIC Science & Technology

    2002-07-01

    Knowledge From Data .................................................. 25 HIGH-CONFIDENCE SOFTWARE AND SYSTEMS Reliability, Security, and Safety for...NOAA’s Cessna Citation flew over the 16-acre World Trade Center site, scanning with an Optech ALSM unit. The system recorded data points from 33,000...provide the data storage and compute power for intelligence analysis, high-performance national defense systems , and critical scientific research • Large

  12. Technical Basis for Evaluating Software-Related Common-Cause Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muhlheim, Michael David; Wood, Richard

    2016-04-01

    The instrumentation and control (I&C) system architecture at a nuclear power plant (NPP) incorporates protections against common-cause failures (CCFs) through the use of diversity and defense-in-depth. Even for well-established analog-based I&C system designs, the potential for CCFs of multiple systems (or redundancies within a system) constitutes a credible threat to defeating the defense-in-depth provisions within the I&C system architectures. The integration of digital technologies into the I&C systems provides many advantages compared to the aging analog systems with respect to reliability, maintenance, operability, and cost effectiveness. However, maintaining the diversity and defense-in-depth for both the hardware and software within the digital system is challenging. In fact, the introduction of digital technologies may actually increase the potential for CCF vulnerabilities because of the introduction of undetected systematic faults. These systematic faults are defined as a "design fault located in a software component" and, at a high level, are predominantly the result of (1) errors in the requirement specification, (2) inadequate provisions to account for design limits (e.g., environmental stress), or (3) technical faults incorporated in the internal system (or architectural) design or implementation. Other technology-neutral CCF concerns include hardware design errors, equipment qualification deficiencies, installation or maintenance errors, and instrument loop scaling and setpoint mistakes.

  13. Minimal support technology and in situ resource utilization for risk management of planetary spaceflight missions

    NASA Astrophysics Data System (ADS)

    Murphy, K. L.; Rygalov, V. Ye.; Johnson, S. B.

    2009-04-01

    All artificial systems and components in space degrade at higher rates than on Earth, depending in part on environmental conditions, design approach, assembly technologies, and the materials used. This degradation involves not only the hardware and software systems but the humans that interact with those systems. All technological functions and systems can be expressed through the functional dependence

    [Function] ~ [ERU] * [RUIS] * [ISR] / [DR], where

    [ERU] = efficiency (rate) of environmental resource utilization;
    [RUIS] = resource utilization infrastructure;
    [ISR] = in situ resources;
    [DR] = degradation rate.

    The limited resources of spaceflight and open space for autonomous missions require the highest possible reliability (approaching 100%) for system functioning and operation, and the rate of any system degradation must be minimized. To date, only a continuous human presence with a system in the spaceflight environment can absolutely mitigate those degradations. This mitigation is based on environmental amelioration for both the technology systems, such as repair data and spare parts, and the humans, such as exercise and psychological support. Such maintenance now requires huge infrastructures, including research and development complexes and management agencies, which currently cannot move beyond the Earth. When considering what is required to move manned spaceflight from near-Earth stations to remote locations such as Mars, what are the minimal technologies and infrastructures necessary for autonomous restoration of a degrading system in space? Among all the known system factors of a mission to Mars that reduce the mass load, increase the reliability, and reduce the mission's overall risk, the common denominator is the use of undeveloped or untested technologies: none of the technologies required to significantly reduce the risk for critical systems are currently available at acceptable readiness levels. Long-term interplanetary missions require that space programs produce a craft with all systems integrated so that they are of the highest reliability. Right now, with current technologies, we cannot guarantee this reliability for a crew of six for 1000 days to Mars and back. Investigation of the technologies to answer this need, and a focus of resources and research on their advancement, would significantly improve the chances for a safe and successful mission.

  14. Reliability Engineering for Service Oriented Architectures

    DTIC Science & Technology

    2013-02-01

    Common Object Request Broker Architecture Ecosystem In software, an ecosystem is a set of applications and/or services that gradually build up over time...Enterprise Service Bus Foreign In an SOA context: Any SOA, service or software which the owners of the calling software do not have control of, either...SOA Service Oriented Architecture SRE Software Reliability Engineering System Mode Many systems exhibit different modes of operation. E.g. the cockpit

  15. Reliability of simulated robustness testing in fast liquid chromatography, using state-of-the-art column technology, instrumentation and modelling software.

    PubMed

    Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs

    2014-02-01

    The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow-bore columns (50 × 2.1 mm, packed with sub-2 μm particles) providing short analysis times. The performance of the commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE-based predictions. We demonstrated that the reliability of the predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the relative errors in retention time were <1.0%, while the errors in predicted critical resolution ranged between 6.9 and 17.2%. Because simulated robustness testing requires significantly less experimental work than DoE-based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2 μm particles. Again, thanks to the modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min), by proper adjustment of variables. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. GROVER: An autonomous vehicle for ice sheet research

    NASA Astrophysics Data System (ADS)

    Trisca, G. O.; Robertson, M. E.; Marshall, H.; Koenig, L.; Comberiate, M. A.

    2013-12-01

    The Goddard Remotely Operated Vehicle for Exploration and Research or Greenland Rover (GROVER) is a science enabling autonomous robot specifically designed to carry a low-power, large bandwidth radar for snow accumulation mapping over the Greenland Ice Sheet. This new and evolving technology enables reduced cost and increased safety for polar research. GROVER was field tested at Summit, Greenland in May 2013. The robot traveled over 30 km and was controlled both by line of sight wireless and completely autonomously with commands and telemetry via the Iridium Satellite Network, from Summit as well as remotely from Boise, Idaho. Here we describe GROVER's unique abilities and design. The software stack features a modular design that can be adapted for any application that requires autonomous behavior, reliable communications using different technologies and low level control of peripherals. The modules are built to communicate using the publisher-subscriber design pattern to maximize data-reuse and allow for graceful failures at the software level, along with the ability to be loaded or unloaded on-the-fly, enabling the software to adopt different behaviors based on power constraints or specific processing needs. These modules can also be loaded or unloaded remotely for servicing and telemetry can be configured to contain any kind of information being generated by the sensors or scientific instruments. The hardware design protects the electronic components and the control system can change functional parameters based on sensor input. Power failure modes built into the hardware prevent the vehicle from running out of energy permanently by monitoring voltage levels and triggering software reboots when the levels match pre-established conditions. This guarantees that the control software will be operational as soon as there is enough charge to sustain it, giving the vehicle increased longevity in case of a temporary power loss. GROVER demonstrates that autonomous rovers can be a revolutionary tool for data collection, and that both the technology and the software are available and ready to be implemented to create scientific data collection platforms.
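
    The publisher-subscriber structure described above is easy to sketch. Here is a minimal Python message bus in that style (API names are hypothetical, not GROVER's flight code): modules register callbacks per topic, and publishers never need to know who is listening.

        from collections import defaultdict

        class MessageBus:
            """Tiny publish-subscribe hub decoupling data producers from consumers."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subscribers[topic].append(callback)

            def publish(self, topic, payload):
                for callback in self._subscribers[topic]:
                    callback(payload)

        bus = MessageBus()
        bus.subscribe("power/voltage", lambda v: print(f"telemetry module got {v} V"))
        bus.subscribe("power/voltage",
                      lambda v: print("low-power reboot!") if v < 11.0 else None)
        bus.publish("power/voltage", 10.7)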

  17. Reliability and accuracy of three imaging software packages used for 3D analysis of the upper airway on cone beam computed tomography images.

    PubMed

    Chen, Hui; van Eijnatten, Maureen; Wolff, Jan; de Lange, Jan; van der Stelt, Paul F; Lobbezoo, Frank; Aarab, Ghizlane

    2017-08-01

    The aim of this study was to assess the reliability and accuracy of three different imaging software packages for three-dimensional analysis of the upper airway using CBCT images. To assess the reliability of the software packages, 15 NewTom 5G ® (QR Systems, Verona, Italy) CBCT data sets were randomly and retrospectively selected. Two observers measured the volume, minimum cross-sectional area and the length of the upper airway using Amira ® (Visage Imaging Inc., Carlsbad, CA), 3Diagnosys ® (3diemme, Cantu, Italy) and OnDemand3D ® (CyberMed, Seoul, Republic of Korea) software packages. The intra- and inter-observer reliability of the upper airway measurements were determined using intraclass correlation coefficients and Bland & Altman agreement tests. To assess the accuracy of the software packages, one NewTom 5G ® CBCT data set was used to print a three-dimensional anthropomorphic phantom with known dimensions to be used as the "gold standard". This phantom was subsequently scanned using a NewTom 5G ® scanner. Based on the CBCT data set of the phantom, one observer measured the volume, minimum cross-sectional area, and length of the upper airway using Amira ® , 3Diagnosys ® , and OnDemand3D ® , and compared these measurements with the gold standard. The intra- and inter-observer reliability of the measurements of the upper airway using the different software packages were excellent (intraclass correlation coefficient ≥0.75). There was excellent agreement between all three software packages in volume, minimum cross-sectional area and length measurements. All software packages underestimated the upper airway volume by -8.8% to -12.3%, the minimum cross-sectional area by -6.2% to -14.6%, and the length by -1.6% to -2.9%. All three software packages offered reliable volume, minimum cross-sectional area and length measurements of the upper airway. The length measurements of the upper airway were the most accurate results in all software packages. All software packages underestimated the upper airway dimensions of the anthropomorphic phantom.
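
    For reference, the intraclass correlation used in such agreement studies can be computed from two-way ANOVA mean squares. Below is a sketch of ICC(2,1) (Shrout-Fleiss: two-way random effects, absolute agreement, single rater) on made-up ratings; the exact ICC form the authors used may differ.

        import numpy as np

        def icc2_1(x):
            """ICC(2,1) for an (n targets) x (k raters) matrix of ratings."""
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = (x - x.mean(axis=1, keepdims=True)
                       - x.mean(axis=0, keepdims=True) + grand)
            mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - mse) / (ms_rows + (k - 1) * mse + k * (ms_cols - mse) / n)

        # Hypothetical airway-volume measurements: 5 scans x 2 observers.
        ratings = np.array([[10.1, 10.4], [8.7, 8.9], [12.3, 12.1],
                            [9.5, 9.9], [11.0, 11.2]])
        print(f"ICC(2,1) = {icc2_1(ratings):.3f}")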

  18. Software Defined Radios - Architectures, Systems and Functions

    NASA Technical Reports Server (NTRS)

    Sims, William H.

    2017-01-01

    Software Defined Radio is an industry term describing a method of utilizing a minimum amount of Radio Frequency (RF)/analog electronics before digitization takes place. Upon digitization, all other functions are performed in software/firmware. There are as many different types of SDRs as there are data systems. Software Defined Radio (SDR) technology has been proven in the commercial sector since the early 90's. Today's rapid advancement in mobile telephone reliability and power management capabilities exemplifies the effectiveness of SDR technology for the modern communications market. In contrast, the foundations of the transponder technology presently qualified for satellite applications were developed during the early space program of the 1960's. SDR technology offers the potential to revolutionize satellite transponder technology by increasing science data through-put capability by at least an order of magnitude. While the SDR is adaptive in nature and is "one-size-fits-all" by design, conventional transponders are built to a specific platform and must be redesigned for every new bus. The SDR uses a minimum amount of analog/RF components to up/down-convert the RF signal to/from a digital format. Once analog data is digitized, all processing is performed using hardware logic. Typical SDR processes include filtering, modulation, up/down conversion and demodulation. This presentation will show how the emerging SDR market has leveraged the existing commercial sector to provide a path to a radiation-tolerant SDR transponder. These innovations will reduce transceiver cost, decrease power requirements, and bring a commensurate reduction in volume. A second pay-off is the increased flexibility of the SDR: the same hardware can implement multiple transponder types by altering hardware logic, with no change of analog hardware required, all of which can ultimately be accomplished in orbit. This in turn would provide a high-capability, low-cost transponder to programs of all sizes.

  20. Development of confidence limits by pivotal functions for estimating software reliability

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
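
    In the Moranda geometric de-eutrophication model, the program's failure rate after the (i-1)-th fix is D·k^(i-1) with 0 < k < 1, and interfailure times are exponentially distributed. A short simulation sketch of such data (parameter values illustrative):

        import random

        def simulate_interfailure_times(D=1.0, k=0.8, n_failures=10, seed=1):
            """Interfailure time i is Exponential with rate D * k**(i-1): each fix
            'de-eutrophies' the program, shrinking the failure rate geometrically."""
            rng = random.Random(seed)
            return [rng.expovariate(D * k ** i) for i in range(n_failures)]

        times = simulate_interfailure_times()
        for i, t in enumerate(times, start=1):
            # Theoretical mean for the default D=1.0, k=0.8.
            print(f"failure {i}: interfailure time {t:8.2f} "
                  f"(mean {1 / (1.0 * 0.8 ** (i - 1)):.2f})")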

  1. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data.

    PubMed

    Robinson, Mark D; McCarthy, Davis J; Smyth, Gordon K

    2010-01-01

    It is expected that emerging digital gene expression (DGE) technologies will overtake microarray technologies in the near future for many functional genomics applications. One of the fundamental data analysis tasks, especially for gene expression studies, involves determining whether there is evidence that counts for a transcript or exon are significantly different across experimental conditions. edgeR is a Bioconductor software package for examining differential expression of replicated count data. An overdispersed Poisson model is used to account for both biological and technical variability. Empirical Bayes methods are used to moderate the degree of overdispersion across transcripts, improving the reliability of inference. The methodology can be used even with the most minimal levels of replication, provided at least one phenotype or experimental condition is replicated. The software may have other applications beyond sequencing data, such as proteome peptide count data. The package is freely available under the LGPL licence from the Bioconductor web site (http://bioconductor.org).
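
    edgeR itself is an R/Bioconductor package. Purely to illustrate the overdispersion it models, here is a toy Python check of the negative-binomial variance relation var ≈ mu + φ·mu², with a method-of-moments dispersion estimate (the counts are invented):

        import numpy as np

        counts = np.array([57, 43, 61, 50, 38, 66, 49, 55])  # replicate counts, one gene
        mu = counts.mean()
        var = counts.var(ddof=1)
        phi = max(0.0, (var - mu) / mu**2)  # method-of-moments dispersion
        print(f"mean={mu:.1f} var={var:.1f} dispersion phi={phi:.3f}")
        # phi ~ 0 would be Poisson-like; phi > 0 indicates biological overdispersion.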

  2. Development of x-ray imaging technique for liquid screening at airport

    NASA Astrophysics Data System (ADS)

    Sulaiman, Nurhani binti; Srisatit, Somyot

    2016-01-01

    X-ray imaging technology is a viable option for recognizing flammable liquids for aviation security purposes. In this study, an X-ray imaging system was developed in which the image viewing system was built with a digital camera coupled to a gadolinium oxysulfide (GOS) fluorescent screen. The camera was equipped with software for remote control of its settings via a USB cable, allowing images to be captured. Each image was analysed to determine the average grey level using software written in Microsoft Visual Basic 6.0. Data were obtained for liquids of various densities at thicknesses of 4.5 cm, 6.0 cm and 7.5 cm, for X-ray energies ranging from 70 to 200 kVp. To verify the reliability of the constructed calibration data, the system was tested with several unknown liquids. The developed system could conveniently be employed in security screening to discriminate between a threat and an innocuous liquid.
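
    The grey-level step is straightforward to reproduce. A sketch in Python (rather than the study's Visual Basic) of averaging grey levels over a region of interest, with the file name and ROI as placeholders:

        import numpy as np
        from PIL import Image

        def average_grey_level(path, roi):
            """roi = (left, upper, right, lower) in pixels; image converted to 8-bit grey."""
            img = Image.open(path).convert("L")  # 'L' = 8-bit greyscale
            region = np.asarray(img.crop(roi), dtype=float)
            return region.mean()

        # Hypothetical capture of the GOS screen; compare against calibration curves.
        print(average_grey_level("capture_150kvp.png", (100, 100, 400, 400)))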

  3. Emerging technologies for V&V of ISHM software for space exploration

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Markosian, Lawrence Z.

    2006-01-01

    Systems required to exhibit high operational reliability often rely on some form of fault protection to recognize and respond to faults, preventing faults' escalation to catastrophic failures. Integrated System Health Management (ISHM) extends the functionality of fault protection to both scale to more complex systems (and systems of systems), and to maintain capability rather than just avert catastrophe. Forms of ISHM have been utilized to good effect in the maintenance phase of systems' total lifecycles (often referred to as 'condition-based maintenance'), but less so in a 'fault protection' role during actual operations. One of the impediments to such use lies in the challenges of verification, validation and certification of ISHM systems themselves. This paper makes the case that state-of-the-practice V&V and certification techniques will not suffice for emerging forms of ISHM systems; however, a number of maturing software engineering assurance technologies show particular promise for addressing these ISHM V&V challenges.

  4. Approach and Instrument Placement Validation

    NASA Technical Reports Server (NTRS)

    Ator, Danielle

    2005-01-01

    The Mars Exploration Rovers (MER) from the 2003 flight mission represent the state-of-the-art technology for target approach and instrument placement on Mars. It currently takes 3 sols (Martian days) for a rover to place an instrument on a designated rock target that is about 10 to 20 m away. The objective of this project is to provide an experimentally validated single-sol instrument placement capability to future Mars missions. After numerous test runs on the Rocky8 rover under various test conditions, it has been observed that lighting conditions, shadow effects, target features and the initial target distance affect the performance and reliability of the tracking software. Additional software validation testing will be conducted in the months to come.

  5. A Tool for Verification and Validation of Neural Network Based Adaptive Controllers for High Assurance Systems

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Schumann, Johann

    2004-01-01

    High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as monitoring the network performance during operation. The tool has been implemented in Simulink and simulation results on a F-15 aircraft are presented.
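
    The abstract does not detail how the error bar is computed. One generic way to attach error bars to a learned model's output is an ensemble over bootstrap resamples, sketched here with polynomial regressors standing in for the neural network; this is an illustrative stand-in, not the tool's actual method.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 40)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # toy flight data

        # Train an ensemble on bootstrap resamples; spread of predictions = error bar.
        models = []
        for _ in range(50):
            idx = rng.integers(0, x.size, x.size)
            models.append(np.polyfit(x[idx], y[idx], deg=5))

        x_new = 0.37
        preds = np.array([np.polyval(m, x_new) for m in models])
        print(f"output {preds.mean():.3f} +/- {2 * preds.std():.3f} (2-sigma error bar)")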

  6. Agile: From Software to Mission System

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Shirley, Mark H.; Hobart, Sarah Groves

    2016-01-01

    The Resource Prospector (RP) is an in-situ resource utilization (ISRU) technology demonstration mission, designed to search for volatiles at the Lunar South Pole. This is NASA's first near real time tele-operated rover on the Moon. The primary objective is to search for volatiles at one of the Lunar Poles. The combination of short mission duration, a solar powered rover, and the requirement to explore shadowed regions makes for an operationally challenging mission. To maximize efficiency and flexibility in Mission System design and thus to improve the performance and reliability of the resulting Mission System, we are tailoring Agile principles that we have used effectively in ground data system software development and applying those principles to the design of elements of the mission operations system.

  7. Numerical aerodynamic simulation facility feasibility study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    There were three major issues examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year. Facets of the work described include the hardware configuration, software, user language, and fault tolerance.

  8. Assistant for Specifying Quality Software (ASQS) Mission Area Analysis

    DTIC Science & Technology

    1990-12-01

    somewhat arbitrary, it was a reasonable and fast approach for partitioning the mission and software domains. The MAD builds on work done by Boeing Aerospace...Reliability ++ Reliability +++ Response 2: NO Discussion: A NO response implies intermittent burns -- most likely to perform attitude control functions...Propulsion Reliability +++ Reliability ++ 4-15 4.8.3 Query BT.3 Query: For intermittent thruster firing requirements, will the average burn time be less than

  9. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (failure probabilities of 10^-4 or lower). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability, and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
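
    The scale of the testing burden follows from the standard zero-failure calculation: demonstrating failure probability at most p with confidence C requires n >= ln(1-C)/ln(1-p) failure-free tests, and the framework's aim is to reduce this n by crediting other V&V evidence.

        import math

        def zero_failure_tests(p, confidence):
            """Tests needed, all passing, to claim failure probability <= p."""
            return math.ceil(math.log(1 - confidence) / math.log(1 - p))

        print(zero_failure_tests(1e-4, 0.99))  # ~46,050 tests at the 10^-4 level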

  10. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681

  11. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    PubMed

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.

  12. An overview of 5G network slicing architecture

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Wang, Xiaolei; Lv, Yingying

    2018-05-01

    With the development of mobile communication technology, the traditional single-network model has been unable to meet the needs of users, and the demand for differentiated services is increasing. The fifth generation of mobile communication technology (5G) came into being to solve this problem. Network slicing, one of the key technologies of 5G, is a core application of network virtualization and software-defined networking, enabling network slices to flexibly provide one or more network services according to users' needs[1]. Each slice can independently tailor its network functions according to the requirements of the business scenario and traffic model, and can manage the layout of the corresponding network resources, improving the flexibility of network services and the utilization of resources and enhancing the robustness and reliability of the whole network [2].

  13. Active Wireless System for Structural Health Monitoring Applications.

    PubMed

    Perera, Ricardo; Pérez, Alberto; García-Diéguez, Marta; Zapico-Valle, José Luis

    2017-12-11

    The use of wireless sensors in Structural Health Monitoring (SHM) has increased significantly in recent years. Piezoelectric lead zirconate titanate (PZT) sensors have been on the rise in SHM due to their superior sensing abilities, and are applicable in different technologies such as electromechanical impedance (EMI)-based SHM. This work develops a flexible wireless smart sensor (WSS) framework based on the EMI method using active sensors for full-scale and autonomous SHM. In contrast to passive sensors, the self-sensing properties of PZTs allow a structure to be interrogated or excited on demand. The system integrates the necessary software and hardware within a service-oriented architecture approach able to provide, in a modular way, the services needed to satisfy the key requirements of a WSS. The framework developed in this work has been validated in different experimental applications. Initially, the reliability of the EMI method when carried out with the proposed wireless sensor system is evaluated by comparison with its wired counterpart. Afterwards, the performance of the system is evaluated in terms of software stability and reliability of operation.
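
    A damage metric commonly used with EMI-based SHM (not necessarily the authors' exact choice) is the root-mean-square deviation between baseline and current impedance signatures over a frequency sweep. A Python sketch with synthetic signatures:

        import numpy as np

        def rmsd_damage_index(z_baseline, z_current):
            """RMSD over the real part of the impedance signature; larger values
            suggest structural change near the PZT patch."""
            zb, zc = np.real(z_baseline), np.real(z_current)
            return np.sqrt(np.sum((zc - zb) ** 2) / np.sum(zb ** 2))

        # Hypothetical signatures over a frequency sweep.
        f = np.linspace(30e3, 40e3, 200)
        baseline = 100 + 5 * np.sin(f / 1e3)
        current = baseline + np.where((f > 34e3) & (f < 35e3), 2.0, 0.0)  # local shift
        print(f"RMSD = {rmsd_damage_index(baseline, current):.4f}")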

  14. Rhinoplasty perioperative database using a personal digital assistant.

    PubMed

    Kotler, Howard S

    2004-01-01

    To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were maintained secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and enhancement of surgical skills.

  15. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  16. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued through simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploited the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
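
    As a sketch of the kind of simulation described, the snippet below generates replicated failure-time data from two standard NHPP reliability models, a basic exponential (Goel-Okumoto-type) model and the Musa-Okumoto logarithmic Poisson model, by inverting their mean value functions. The parameter values are illustrative, not taken from the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_nhpp(inverse_mean, n_events):
        """Simulate NHPP failure times: unit-rate Poisson arrival times s_i
        map to failure times t_i = m^{-1}(s_i)."""
        times, s = [], 0.0
        for _ in range(n_events):
            s += rng.exponential(1.0)
            t = inverse_mean(s)
            if t is None:          # finite-fault model exhausted
                break
            times.append(t)
        return np.array(times)

    # Basic (Goel-Okumoto) model: m(t) = a(1 - exp(-b t))
    a, b = 100.0, 0.02
    go_inv = lambda s: None if s >= a else -np.log(1.0 - s / a) / b

    # Log-Poisson (Musa-Okumoto) model: m(t) = ln(1 + lam0*theta*t)/theta
    lam0, theta = 3.0, 0.05
    mo_inv = lambda s: (np.exp(theta * s) - 1.0) / (lam0 * theta)

    for name, inv in [("Basic", go_inv), ("Log-Poisson", mo_inv)]:
        reps = [simulate_nhpp(inv, 80)[-1] for _ in range(5)]  # replications
        print(name, "time of 80th failure across replications:",
              np.round(reps, 1))
    ```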

  17. Model of load balancing using reliable algorithm with multi-agent system

    NASA Astrophysics Data System (ADS)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development has paralleled the growth in internet users, which increases network traffic and, with it, the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue on a large-scale system. Mobile agents collect resource information and can migrate according to their assigned tasks. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agents. The methodology consisted of defining the system identification, requirement specifications, network topology, and infrastructure design. The simulation subjected the servers to 1800 user requests over 10 s and captured the resulting data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the performed simulation show that the LFB method with mobile agents distributes load efficiently across all backend servers, without bottlenecks, with a low risk of server overload, and with high reliability.
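
    The paper names its selection rule "least time first byte" but gives no pseudocode. A minimal sketch, assuming each request simply goes to the backend reporting the lowest time-to-first-byte (as a mobile agent might measure it), could look like this; the server names, latencies, and load-feedback factor are hypothetical.

    ```python
    import random

    class Backend:
        def __init__(self, name, base_ttfb):
            self.name = name
            self.base_ttfb = base_ttfb   # intrinsic latency (seconds)
            self.active = 0              # requests currently assigned

        def observed_ttfb(self):
            """TTFB as a mobile agent might report it: intrinsic latency
            inflated by current load, plus a little jitter. Requests never
            complete in this toy, so load feedback drives the balancing."""
            return self.base_ttfb * (1 + 0.2 * self.active) + random.uniform(0, 0.005)

    def least_ttfb_dispatch(backends, n_requests):
        """Assign each request to the backend with the lowest reported TTFB."""
        counts = {b.name: 0 for b in backends}
        for _ in range(n_requests):
            target = min(backends, key=Backend.observed_ttfb)
            target.active += 1
            counts[target.name] += 1
        return counts

    servers = [Backend("srv-a", 0.010), Backend("srv-b", 0.015), Backend("srv-c", 0.030)]
    print(least_ttfb_dispatch(servers, 1800))  # 1800 requests, as in the simulation
    ```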

  18. The determination of measures of software reliability

    NASA Technical Reports Server (NTRS)

    Maxwell, F. D.; Corn, B. C.

    1978-01-01

    Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
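
    The abstract names failure ratio and failure rate without defining them. Assuming the usual readings (failures per run submitted, and failures over time), the trend lines described can be produced in a few lines; the weekly counts below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical per-week debugging data: runs submitted and failures observed.
    runs     = np.array([40, 55, 60, 48, 70, 65, 58, 62])
    failures = np.array([12, 14, 11,  7,  9,  6,  4,  3])

    failure_ratio = np.cumsum(failures) / np.cumsum(runs)  # cumulative view
    weeks = np.arange(1, len(runs) + 1)

    # Least-squares trend line over the weekly failure ratio.
    slope, intercept = np.polyfit(weeks, failures / runs, 1)
    print("cumulative failure ratio:", np.round(failure_ratio, 3))
    print(f"weekly trend: ratio ~ {intercept:.3f} {slope:+.3f} * week")
    ```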

  19. Integrated Systems of Manufacture (Robotics) Technology Working Group Report (IDA/OSD R&M (Institute for Defense Analyses/Office of the Secretary of Defense Reliability and Maintainability) Study)

    DTIC Science & Technology

    1983-11-01

    Descriptors: (U) instrumentation; (U) forklift vehicles; (U) experimental data. Objective: (U) support in-house research for acquisition and analysis of ... robotic reconnaissance vehicle demonstrator with terrain analysis. This work will specify the baseline hardware, software, data base, and system ... the data analysis. This is also true of in-flight data that the pilot is required to analyze. This research is concerned with the ... Report No. CX7419.

  20. Embedded control system for computerized franking machine

    NASA Astrophysics Data System (ADS)

    Shi, W. M.; Zhang, L. B.; Xu, F.; Zhan, H. W.

    2007-12-01

    This paper presents a novel control system for a franking machine. A methodology for operating the franking machine through functional controls consisting of connection, configuration, and the franking electromechanical drive is studied. A set of enabling technologies for synthesizing postage-management software architectures on microprocessor-based embedded systems is proposed. The cryptographic algorithm that secures mail items is analyzed to enhance the accountability and security of the postal indicia. The study indicated that the franking machine achieves reliability, performance, and flexibility in printing mail items.
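
    The abstract analyzes, but does not specify, the cryptographic algorithm behind the postal indicia. As a hedged illustration of the accountability idea only, the sketch below signs the indicium fields with HMAC-SHA256; real franking systems follow postal-authority specifications (often digital-signature based), and every field name and value here is hypothetical.

    ```python
    import hashlib
    import hmac
    import json

    def sign_indicium(secret_key: bytes, fields: dict) -> str:
        """Compute a tamper-evident authentication tag over indicium fields:
        any change to the postage value or registers invalidates the tag."""
        payload = json.dumps(fields, sort_keys=True).encode()
        return hmac.new(secret_key, payload, hashlib.sha256).hexdigest()

    key = b"device-unique-secret"     # provisioned securely in practice
    indicium = {"meter_id": "FM-00172", "postage_cents": 155,
                "ascending_register": 981234, "date": "2007-12-01"}
    tag = sign_indicium(key, indicium)
    print(tag[:32], "...")

    # Verification: recompute and compare in constant time.
    assert hmac.compare_digest(tag, sign_indicium(key, indicium))
    ```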

  1. Ada/Xt Architecture: Design Report for the Software Technology for Adaptable, Reliable Systems (STARS)

    DTIC Science & Technology

    1990-01-25

    Task: UR20; CDRL: 01000. Informal technical data for the Software Technology for Adaptable, Reliable Systems (STARS) program. STARS contract F19628-88-D-0031; author: Kurt Wallnau. This document, "Ada/Xt Architecture: Design Report", type A005, was produced under the Process Environment Integration task (UR20) of the STARS prime contract.

  2. Space Station man-machine automation trade-off analysis

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Bard, J.; Feinberg, A.

    1985-01-01

    The man-machine automation tradeoff methodology presented here is one of four research tasks comprising the autonomous spacecraft system technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example of a spacecraft system requiring a certain level of autonomous control, a system-level, man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it is still sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
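
    The optimization itself is not spelled out in the abstract. One minimal reading is a constrained selection problem: choose human or machine for each crew function so as to maximize a productivity score subject to cost, power, and reliability constraints. The brute-force sketch below illustrates that reading with invented numbers; the actual ASST methodology is more elaborate.

    ```python
    from itertools import product

    # Hypothetical crew functions: (name, human option, machine option),
    # each option scored on cost, power draw, reliability, productivity.
    functions = [
        ("fault diagnosis",
         {"cost": 3, "power": 0, "rel": 0.97, "score": 9},   # human
         {"cost": 5, "power": 2, "rel": 0.99, "score": 7}),  # automated
        ("attitude control",
         {"cost": 2, "power": 0, "rel": 0.95, "score": 5},
         {"cost": 4, "power": 3, "rel": 0.999, "score": 8}),
        ("logistics planning",
         {"cost": 4, "power": 0, "rel": 0.96, "score": 6},
         {"cost": 6, "power": 1, "rel": 0.98, "score": 8}),
    ]

    BUDGET, POWER_CAP, MIN_REL = 13, 4, 0.90

    best = None
    for choice in product((0, 1), repeat=len(functions)):  # 0=human, 1=machine
        opts = [f[1 + c] for f, c in zip(functions, choice)]
        cost = sum(o["cost"] for o in opts)
        power = sum(o["power"] for o in opts)
        rel = 1.0
        for o in opts:
            rel *= o["rel"]                                # series reliability
        if cost <= BUDGET and power <= POWER_CAP and rel >= MIN_REL:
            score = sum(o["score"] for o in opts)
            if best is None or score > best[0]:
                best = (score, choice, cost, power, rel)

    score, choice, cost, power, rel = best
    labels = ["machine" if c else "human" for c in choice]
    print(dict(zip((f[0] for f in functions), labels)),
          f"score={score} cost={cost} power={power} rel={rel:.3f}")
    ```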

  3. Implementation of a light-route TDMA communications satellite system for advanced business networks

    NASA Astrophysics Data System (ADS)

    Hanson, B.; Smalley, A.; Zuliani, M.

    The application of Light Route TDMA systems to various business communication requirements is discussed. It is noted that full development of this technology for use in advanced business networks will be guided by considerations of flexibility, reliability, security, and cost. The implementation of the TDMA system for demonstrating these advantages to a wide range of public and private organizations is described in detail. Among the advantages offered by this system are point-to-point and point-to-multipoint (broadcast) capability; the ability to vary the mix and quantity of services between destinations in a fully connected mesh network on an almost instantaneous basis through software control; and enhanced reliability with centralized monitor, alarm and control functions by virtue of an overhead channel.

  4. Development and assessment of a digital X-ray software tool to determine vertebral rotation in adolescent idiopathic scoliosis.

    PubMed

    Eijgenraam, Susanne M; Boselie, Toon F M; Sieben, Judith M; Bastiaenen, Caroline H G; Willems, Paul C; Arts, Jacobus J; Lataster, Arno

    2017-02-01

    The amount of vertebral rotation in the axial plane is of key importance in the prognosis and treatment of adolescent idiopathic scoliosis (AIS). Current methods to determine vertebral rotation are either designed for analogue plain radiographs and not usable with digital images, or lack measurement precision and are therefore less suitable for the follow-up of rotation in AIS patients. This study aimed to develop a digital X-ray software tool with high measurement precision to determine vertebral rotation in AIS, and to assess its (concurrent) validity and reliability. The study used a combination of basic science and reliability methodology applied in both laboratory and clinical settings. Software was developed using the algorithm of the Perdriolle torsion meter for analogue AP plain radiographs of the spine. The software was then assessed for (1) concurrent validity and (2) intra- and interobserver reliability. Plain radiographs of both human cadaver vertebrae and outpatient AIS patients were used. Concurrent validity was measured by two independent observers, both experienced in the assessment of plain radiographs. Reliability measurements were performed by three independent spine surgeons. The Pearson correlation of the software with the analogue Perdriolle torsion meter was 0.98 for mid-thoracic vertebrae, 0.97 for low-thoracic vertebrae, and 0.97 for lumbar vertebrae. Measurement exactness of the software was within 5° in 62% of cases and within 10° in 97% of cases. The intraclass correlation coefficient (ICC) for inter-observer reliability was 0.92 (0.91-0.95); the ICC for intra-observer reliability was 0.96 (0.94-0.97). We developed a digital X-ray software tool to determine vertebral rotation in AIS with substantial concurrent validity and reliability, which may be useful for the follow-up of vertebral rotation in AIS patients.

  5. Reducing Time to Science: Unidata and JupyterHub Technology Using the Jetstream Cloud

    NASA Astrophysics Data System (ADS)

    Chastang, J.; Signell, R. P.; Fischer, J. L.

    2017-12-01

    Cloud computing can accelerate scientific workflows, discovery, and collaborations by reducing research and data friction. We describe the deployment of Unidata and JupyterHub technologies on the NSF-funded XSEDE Jetstream cloud. With the aid of virtual machines and Docker technology, we deploy a Unidata JupyterHub server co-located with a Local Data Manager (LDM), THREDDS data server (TDS), and RAMADDA geoscience content management system. We provide Jupyter Notebooks and the pre-built Python environments needed to run them. The notebooks can be used for instruction and as templates for scientific experimentation and discovery. We also supply a large quantity of NCEP forecast model results to allow data-proximate analysis and visualization. In addition, users can transfer data using Globus command line tools, and perform their own data-proximate analysis and visualization with Notebook technology. These data can be shared with others via a dedicated TDS server for scientific distribution and collaboration. There are many benefits of this approach. Not only is the cloud computing environment fast, reliable and scalable, but scientists can analyze, visualize, and share data using only their web browser. No local specialized desktop software or a fast internet connection is required. This environment will enable scientists to spend less time managing their software and more time doing science.
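
    JupyterHub deployments of this kind are driven by a Python configuration file, so a sketch fits directly in code. The fragment below shows a plausible jupyterhub_config.py for a Docker-backed hub with a shared read-only data volume; the image name, network name, and paths are assumptions, not the project's actual configuration.

    ```python
    # A minimal jupyterhub_config.py sketch for a Docker-backed deployment,
    # assuming the dockerspawner package and a site-built notebook image.
    c = get_config()  # noqa: F821  (provided by JupyterHub at load time)

    # Spawn each user's server in its own container, as on the Jetstream VM.
    c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
    c.DockerSpawner.image = "unidata/science-notebook:latest"  # hypothetical image
    c.DockerSpawner.network_name = "jupyterhub-net"

    # Mount a shared, read-only volume of LDM/THREDDS-served model output so
    # notebooks can do data-proximate analysis without copying files.
    c.DockerSpawner.volumes = {"/data/ncep": {"bind": "/home/jovyan/data", "mode": "ro"}}

    # Containers talk to the hub over the Docker network, not localhost.
    c.JupyterHub.hub_ip = "0.0.0.0"
    ```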

  6. The development and technology transfer of software engineering technology at NASA. Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Pitman, C. L.; Erb, D. M.; Izygon, M. E.; Fridge, E. M., III; Roush, G. B.; Braley, D. M.; Savely, R. T.

    1992-01-01

    The United States' big space projects of the next decades, such as Space Station and the Human Exploration Initiative, will need the development of many millions of lines of mission critical software. NASA-Johnson (JSC) is identifying and developing some of the Computer Aided Software Engineering (CASE) technology that NASA will need to build these future software systems. The goal is to improve the quality and the productivity of large software development projects. New trends in CASE technology are outlined, and the ways in which the Software Technology Branch (STB) at JSC is endeavoring to provide some of these CASE solutions for NASA are described. Key software technology components include knowledge-based systems, software reusability, user interface technology, reengineering environments, management systems for the software development process, software cost models, repository technology, and open, integrated CASE environment frameworks. The paper presents the status and long-term expectations for CASE products. The STB's Reengineering Application Project (REAP), Advanced Software Development Workstation (ASDW) project, and software development cost model (COSTMODL) project are then discussed. Some of the general difficulties of technology transfer are introduced, and a process developed by STB for CASE technology insertion is described.

  7. Assessment of physical server reliability in multi cloud computing system

    NASA Astrophysics Data System (ADS)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates room for competitive pricing that reduces the burden on an enterprise's budget. To assess the software reliability of a multi-cloud application, a layered reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud application. In this paper, we focus on assessing the reliability of the server layer, presenting the required algorithms and exploring the steps in the assessment of server reliability.
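
    The layer-combination step can be made concrete with elementary reliability algebra: layers in series multiply, while redundant replicas within a layer combine in parallel. The sketch below uses invented reliability figures; the paper's server-layer algorithms are more detailed than this.

    ```python
    def series(*rels):
        """All layers must function: reliabilities multiply."""
        r = 1.0
        for x in rels:
            r *= x
        return r

    def parallel(*rels):
        """Redundant replicas: the layer fails only if every replica fails."""
        q = 1.0
        for x in rels:
            q *= (1.0 - x)
        return 1.0 - q

    # Hypothetical figures: three redundant physical servers host the
    # virtualization layer, which hosts the application layer.
    server_layer = parallel(0.95, 0.95, 0.95)
    app_reliability = series(0.99,          # application layer
                             0.985,         # virtualization layer
                             server_layer)  # physical server layer
    print(f"multi-cloud application reliability ~ {app_reliability:.4f}")
    ```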

  8. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between the users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.

  9. The Emerging Interdependence of the Electric Power Grid & Information and Communication Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taft, Jeffrey D.; Becker-Dippmann, Angela S.

    2015-08-01

    This paper examines the implications of emerging interdependencies between the electric power grid and Information and Communication Technology (ICT). Over the past two decades, electricity and ICT infrastructure have become increasingly interdependent, driven by a combination of factors including advances in sensor, network and software technologies and progress in their deployment, the need to provide increasing levels of wide-area situational awareness regarding grid conditions, and the promise of enhanced operational efficiencies. Grid operators’ ability to utilize new and closer-to-real-time data generated by sensors throughout the system is providing early returns, particularly with respect to management of the transmission system for purposes of reliability, coordination, congestion management, and integration of variable electricity resources such as wind generation.

  10. MILCOM '85 - Military Communications Conference, Boston, MA, October 20-23, 1985, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on the development status of communications systems in the context of electronic warfare gives attention to topics in spread spectrum code acquisition, digital speech technology, fiber-optics communications, free space optical communications, the networking of HF systems, and applications and evaluation methods for digital speech. Also treated are issues in local area network system design, coding techniques and applications, technology applications for HF systems, receiver technologies, software development status, channel simulation/prediction methods, C3 networking, spread spectrum networks, the improvement of communication efficiency and reliability through technical control methods, mobile radio systems, and adaptive antenna arrays. Finally, communications system cost analyses, spread spectrum performance, voice and image coding, switched networks, and microwave GaAs ICs are considered.

  11. Patent information retrieval: approaching a method and analysing nanotechnology patent collaborations.

    PubMed

    Ozcan, Sercan; Islam, Nazrul

    2017-01-01

    Many challenges still remain in the processing of explicit technological knowledge documents such as patents. Given the limitations and drawbacks of the existing approaches, this research sets out to develop an improved method for searching patent databases and extracting patent information to increase the efficiency and reliability of nanotechnology patent information retrieval process and to empirically analyse patent collaboration. A tech-mining method was applied and the subsequent analysis was performed using Thomson data analyser software. The findings show that nations such as Korea and Japan are highly collaborative in sharing technological knowledge across academic and corporate organisations within their national boundaries, and China presents, in some cases, a great illustration of effective patent collaboration and co-inventorship. This study also analyses key patent strengths by country, organisation and technology.

  12. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    PubMed

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were ten 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description, and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone-collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered, and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability, in terms of intra-class correlation, between the video- and telephone-acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.

  13. Integrated smart structures wingbox

    NASA Astrophysics Data System (ADS)

    Simon, Solomon H.

    1993-09-01

    One objective of smart structures development is to demonstrate the ability of a mechanical component to monitor its own structural integrity and health. Achievement of this objective requires the integration of different technologies, i.e.: (1) structures, (2) sensors, and (3) artificial intelligence. We coordinated a team of experts from these three fields. These experts applied reliable knowledge from the forefront of their technologies and combined the appropriate features into an integrated hardware/software smart structures wingbox (SSW) test article. A 1/4 in. hole was drilled into the SSW test article. Although the smart structure had never seen damage of this type, it correctly recognized and located the damage. Based on a knowledge-based simulation, quantification and assessment were also carried out. We have demonstrated that the SSW integrated hardware and software test article can perform six related functions: (1) identification of a defect; (2) location of the defect; (3) quantification of the amount of damage; (4) assessment of performance degradation; (5) continued monitoring in spite of damage; and (6) continuous recording of integrity data. We present the successful results of the integrated test article in this paper, along with plans for future development and deployment of the technology.

  14. Software reliability: Application of a reliability model to requirements error analysis

    NASA Technical Reports Server (NTRS)

    Logan, J.

    1980-01-01

    The application of a software reliability model having a well defined correspondence of computer program properties to requirements error analysis is described. Requirements error categories which can be related to program structural elements are identified and their effect on program execution considered. The model is applied to a hypothetical B-5 requirement specification for a program module.

  15. An evaluation of the documented requirements of the SSP UIL and a review of commercial software packages for the development and testing of UIL prototypes

    NASA Technical Reports Server (NTRS)

    Gill, Esther Naomi

    1986-01-01

    A review was conducted of software packages currently on the market which might be integrated with the interface language and aid in reaching the objectives of customization, standardization, transparency, reliability, maintainability, language substitution, expandability, portability, and flexibility. Recommendations are given for the best choices in hardware and software acquisition for in-house testing of these possible integrations. Recommended software acquisitions included tools to aid expert-system development and/or novice program development, artificial-intelligence voice technology, touch-screen, joystick, or mouse utilization, and networking. Other recommendations concerned using the Ada language for the user interface language shell, because of its high level of standardization, its structure, its ability to accept and execute programs written in other programming languages, and its DOD ownership and control; and keeping the user interface language simple, so that a wide range of users will find the commercialization of space within their realm of possibility, which is, after all, the purpose of the Space Station.

  16. SPOT4 Operational Control Center (CMP)

    NASA Technical Reports Server (NTRS)

    Zaouche, G.

    1993-01-01

    CNES(F) is responsible for the development of a new generation of Operational Control Center (CMP) which will operate the new heliosynchronous remote sensing satellite (SPOT4). This Operational Control Center benefits greatly from the experience of the first generation of control centers and from recent advances in computer technology and standards. The CMP is designed to operate two satellites at the same time with a reduced pool of controllers. The architecture of this CMP is simple, robust, and flexible, since it is based on powerful distributed workstations interconnected through an Ethernet LAN. The application software uses modern and formal software engineering methods, in order to improve quality and reliability and to facilitate maintenance. The software is table driven, so it can be easily adapted to other operational needs, as sketched below. Operation tasks are automated to the maximum extent, so that the CMP can be operated automatically with very limited human intervention for supervision and decision making. This paper provides an overview of the SPOT4 mission and associated ground segment. It also details the CMP, its functions, and its software and hardware architecture.
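
    "Table driven" here means that operational behaviour is captured in data tables rather than hard-coded logic, which is what makes the CMP adaptable to other missions: supporting a new satellite means editing the table, not the control-center code. A toy sketch of the pattern follows; all mnemonics, handlers, and parameters are invented.

    ```python
    # Minimal table-driven command dispatch: behaviour lives in data.
    def send_tc(name, **kw):    print(f"TC {name} {kw}")
    def start_dump(station):    print(f"playback dump via {station}")

    COMMAND_TABLE = {
        # mnemonic: (handler, fixed parameters)
        "SOLAR_ARRAY_TRIM": (send_tc,    {"name": "SA_TRIM", "angle_deg": 2.5}),
        "PAYLOAD_DUMP":     (start_dump, {"station": "Kiruna"}),
    }

    def execute(mnemonic):
        handler, params = COMMAND_TABLE[mnemonic]
        handler(**params)

    execute("SOLAR_ARRAY_TRIM")
    execute("PAYLOAD_DUMP")
    ```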

  17. CARES/Life Ceramics Durability Evaluation Software Used for Mars Microprobe Aeroshell

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1998-01-01

    The CARES/Life computer program, which was developed at the NASA Lewis Research Center, predicts the probability of a monolithic ceramic component's failure as a function of time in service. The program has many features and options for materials evaluation and component design. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, to reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength. The capability, flexibility, and uniqueness of CARES/Life have attracted many users representing a broad range of interests and have resulted in numerous awards for technological achievements and technology transfer.
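
    CARES/Life's strength-limiting-defect routines are grounded in Weibull weakest-link statistics. The sketch below evaluates only the simplest two-parameter form for a uniformly stressed volume, a small slice of what the program does, and the parameter values are illustrative.

    ```python
    import math

    def weibull_failure_probability(sigma, sigma_0, m, volume=1.0):
        """Two-parameter Weibull weakest-link estimate for a brittle component
        under uniform uniaxial stress:  Pf = 1 - exp(-V * (sigma/sigma_0)^m).

        sigma    applied stress (MPa)
        sigma_0  characteristic strength of unit volume (MPa)
        m        Weibull modulus (lower m = more strength scatter)
        volume   stressed volume relative to the unit volume
        """
        return 1.0 - math.exp(-volume * (sigma / sigma_0) ** m)

    for stress in (200, 300, 400):  # MPa, invented ceramic-like values
        pf = weibull_failure_probability(stress, sigma_0=500.0, m=10.0)
        print(f"sigma = {stress} MPa -> Pf = {pf:.2e}")
    ```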

  18. Predicting Software Suitability Using a Bayesian Belief Network

    NASA Technical Reports Server (NTRS)

    Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.

    2005-01-01

    The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
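
    A Bayesian belief network of the kind described can be illustrated with a three-factor toy version: binary skill, process-maturity, and problem-complexity nodes feeding a suitability node, with posterior belief computed by enumeration. All priors and conditional probabilities below are invented, not the paper's calibrated values.

    ```python
    from itertools import product

    # Priors over binary driving factors (True = favourable), illustrative only.
    P_skill, P_maturity, P_complex = 0.7, 0.6, 0.5

    # CPT: P(suitable | skill, maturity, low_complexity).
    P_suitable = {
        (True,  True,  True):  0.95, (True,  True,  False): 0.80,
        (True,  False, True):  0.75, (True,  False, False): 0.55,
        (False, True,  True):  0.65, (False, True,  False): 0.40,
        (False, False, True):  0.35, (False, False, False): 0.15,
    }

    def posterior_suitable(evidence):
        """P(suitable | evidence) by brute-force enumeration over the network."""
        num = den = 0.0
        for s, m, c in product((True, False), repeat=3):
            state = {"skill": s, "maturity": m, "low_complexity": c}
            if any(state[k] != v for k, v in evidence.items()):
                continue
            w = ((P_skill if s else 1 - P_skill) *
                 (P_maturity if m else 1 - P_maturity) *
                 (P_complex if c else 1 - P_complex))
            num += w * P_suitable[(s, m, c)]
            den += w
        return num / den

    print(posterior_suitable({}))                  # prior belief
    print(posterior_suitable({"maturity": True}))  # after a process assessment
    print(posterior_suitable({"skill": False, "low_complexity": False}))
    ```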

  19. Experiences in improving the state of the practice in verification and validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; French, Scott W.; Hamilton, David

    1994-01-01

    Knowledge-based systems (KBSs) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use. The identification of these differences affects the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable on-going research in this field. For the past three years, IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.

  20. NASA/CARES dual-use ceramic technology spinoff applications

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1994-01-01

    NASA has developed software that enables American industry to establish the reliability and life of ceramic structures in a wide variety of 21st century applications. Designing ceramic components to survive at higher temperatures than most metals can withstand, and in severe loading environments, involves the disciplines of statistics and fracture mechanics. Successful application of advanced ceramics requires knowledge of material properties and the use of a probabilistic brittle material design methodology. The NASA program, known as CARES (Ceramics Analysis and Reliability Evaluation of Structures), is a comprehensive general purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. The latest version of this software, CARES/Life, is coupled to several commercially available finite element analysis programs (ANSYS, MSC/NASTRAN, ABAQUS, COSMOS/N4, MARC), resulting in an advanced integrated design tool adapted to the computing environment of the user. The NASA-developed CARES software has been successfully used by industrial, government, and academic organizations to design and optimize ceramic components for many demanding applications. Industrial sectors impacted by this program include aerospace, automotive, electronic, medical, and energy applications. Dual-use applications include engine components, graphite and ceramic high temperature valves, TV picture tubes, ceramic bearings, electronic chips, glass building panels, infrared windows, radiant heater tubes, heat exchangers, and artificial hips, knee caps, and teeth.

  1. Payload software technology

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A software analysis was performed of known STS sortie payload elements and their associated experiments. This provided basic data for STS payload software characteristics and sizes. A set of technology drivers was identified based on a survey of future technology needs and an assessment of current software technology. The results will be used to evolve a planned approach to software technology development. The purpose of this plan is to ensure that software technology is advanced at a pace and a depth sufficient to fulfill the identified future needs.

  2. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation

    PubMed Central

    2014-01-01

    Background: A balance test provides important information such as the standard to judge an individual’s functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Methods: Twenty healthy adults participated in our study. The participants took part in the tests for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). Results: The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. Conclusion: The balance assessment software incorporating the Nintendo Wii balance board was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment. PMID:24912769

  3. Validity and reliability of balance assessment software using the Nintendo Wii balance board: usability and validation.

    PubMed

    Park, Dae-Sung; Lee, GyuChang

    2014-06-10

    A balance test provides important information such as the standard to judge an individual's functional recovery or make the prediction of falls. The development of a tool for a balance test that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but there is little software used in balance tests, and there are few studies on reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Twenty healthy adults participated in our study. The participants took part in the tests for inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii balance board and a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. The inter-rater reliability, the intra-rater reliability, and concurrent validity were analyzed by an intraclass correlation coefficient (ICC) value and a standard error of measurement (SEM). The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. The balance assessment software incorporating the Nintendo Wii balance board was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment.
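
    The reliability statistics used in both of these records can be reproduced with a short function. The sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measures) from an ANOVA decomposition and derives SEM as SD*sqrt(1 - ICC); the COP values are invented, and published studies may use other ICC forms.

    ```python
    import numpy as np

    def icc_2_1(x):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        x is an (n subjects x k raters) array of scores."""
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # subjects
        ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # raters
        sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual
        msr, msc = ssr / (n - 1), ssc / (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def sem(x, icc):
        """Standard error of measurement: SD * sqrt(1 - ICC).
        Pooled over all scores here, a common simplification."""
        return np.asarray(x, dtype=float).std(ddof=1) * np.sqrt(1 - icc)

    # Hypothetical COP path lengths (cm): 6 subjects measured twice.
    scores = np.array([[62.1, 60.8], [55.3, 56.0], [71.9, 70.2],
                       [48.7, 49.5], [66.0, 64.4], [58.2, 59.1]])
    r = icc_2_1(scores)
    print(f"ICC(2,1) = {r:.3f}, SEM = {sem(scores.ravel(), r):.2f} cm")
    ```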

  4. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to prove an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the acceptance testing scheme.
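
    Of the schemes named, Acceptance Voting is easy to sketch: each version's output must first pass an acceptance test, and a majority vote is then taken over the survivors. The toy below assumes numeric outputs and a simple sign check as the acceptance test; it illustrates the idea only and is not the paper's simulation.

    ```python
    def acceptance_vote(outputs, acceptance_test, tol=1e-6):
        """Acceptance Voting sketch: filter outputs through an acceptance
        test, then take a majority vote over the accepted results."""
        accepted = [y for y in outputs if acceptance_test(y)]
        if not accepted:
            raise RuntimeError("no version produced an acceptable output")
        # Cluster numerically-equal answers, then pick the largest group.
        clusters = []
        for y in accepted:
            for c in clusters:
                if abs(c[0] - y) <= tol:
                    c.append(y)
                    break
            else:
                clusters.append([y])
        winner = max(clusters, key=len)
        if len(winner) <= len(accepted) // 2:
            raise RuntimeError("no majority among accepted outputs")
        return sum(winner) / len(winner)

    # Three versions compute a square root; version 3 harbours a fault.
    results = [2.0000001, 1.9999999, -2.0]
    print(acceptance_vote(results, acceptance_test=lambda y: y >= 0.0))
    ```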

  5. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Test act system validation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.

  6. NASA's Approach to Software Assurance

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2015-01-01

    NASA defines software assurance as: the planned and systematic set of activities that ensure conformance of software life cycle processes and products to requirements, standards, and procedures via quality, safety, reliability, and independent verification and validation. NASA's implementation of this approach to the quality, safety, reliability, security, and verification and validation of software is brought together in one discipline, software assurance. Organizationally, NASA has software assurance at each NASA center, a Software Assurance Manager at NASA Headquarters, a Software Assurance Technical Fellow (currently the same person as the SA Manager), and an Independent Verification and Validation Organization with its own facility. As an umbrella risk mitigation strategy for safety and mission success assurance of NASA's software, software assurance covers a wide area and is better structured to address the dynamic changes in how software is developed, used, and managed, as well as its increasingly complex functionality. Being flexible, risk based, and prepared for challenges in software at NASA is essential, especially as much of our software is unique for each mission.

  7. A proven approach for more effective software development and maintenance

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose; Hall, Dana; Sinclair, Craig

    1994-01-01

    Modern space flight mission operations and associated ground data systems are increasingly dependent upon reliable, quality software. Critical functions such as command load preparation, health and status monitoring, communications link scheduling and conflict resolution, and transparent gateway protocol conversion are routinely performed by software. Given budget constraints and the ever increasing capabilities of processor technology, the next generation of control centers and data systems will be even more dependent upon software across all aspects of performance. A key challenge now is to implement improved engineering, management, and assurance processes for the development and maintenance of that software; processes that cost less, yield higher quality products, and self-correct for continual improvement. The NASA Goddard Space Flight Center has a unique experience base that can be readily tapped to help solve the software challenge. Over the past eighteen years, the Software Engineering Laboratory within the Code 500 Flight Dynamics Division has evolved a software development and maintenance methodology that accommodates the unique characteristics of an organization while optimizing and continually improving the organization's software capabilities. This methodology relies upon measurement, analysis, and feedback, much as in closed-loop control systems. It is an approach with a time-tested track record, proven through repeated applications across a broad range of operational software development and maintenance projects. This paper describes the software improvement methodology employed by the Software Engineering Laboratory and how it has been exploited within the Flight Dynamics Division of GSFC Code 500. Examples of specific improvements in the software itself and its processes are presented to illustrate the effectiveness of the methodology. Finally, the initial findings are given from applying this methodology across the mission operations and ground data systems software domains throughout Code 500.

  8. Resilience Engineering in Critical Long Term Aerospace Software Systems: A New Approach to Spacecraft Software Safety

    NASA Astrophysics Data System (ADS)

    Dulo, D. A.

    Safety critical software systems permeate spacecraft, and in a long-term venture like a starship they would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure on long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe, reliable software systems by focusing not on the traditional safety/reliability engineering paradigms but rather on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent or adapt to complex changing conditions in real time, as a safety valve should failure occur, to ensure safe system continuity. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.

  9. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPPs have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal; that is, they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
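
    The proposed model extends the NHPP family, and its exact mean value function is not reproduced in this record. The sketch below therefore fits the classic Goel-Okumoto NHPP to hypothetical cumulative failure counts, to show the fitting-and-prediction mechanics such SRGMs share; the data are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Mean value function of the classic NHPP model: m(t) = a(1 - e^{-bt}).
        The paper's model generalizes this with testing coverage and imperfect
        fault removal; the fitting mechanics are the same."""
        return a * (1.0 - np.exp(-b * t))

    # Hypothetical cumulative failure counts at weekly test intervals.
    weeks = np.arange(1, 13, dtype=float)
    cum_failures = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43],
                            dtype=float)

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(50.0, 0.1))
    print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    residual = a_hat - goel_okumoto(12.0, a_hat, b_hat)
    print(f"predicted residual faults after week 12: {residual:.1f}")
    ```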

  10. How NASA's Independent Verification and Validation (IV&V) Program Builds Reliability into a Space Mission Software System (SMSS)

    NASA Technical Reports Server (NTRS)

    Fisher, Marcus S.; Northey, Jeffrey; Stanton, William

    2014-01-01

    The purpose of this presentation is to outline how the NASA Independent Verification and Validation (IV&V) Program helps to build reliability into the Space Mission Software Systems (SMSSs) that its customers develop.

  11. 15 CFR 740.13 - Technology and software-unrestricted (TSU).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Technology and software-unrestricted... REGULATIONS LICENSE EXCEPTIONS § 740.13 Technology and software—unrestricted (TSU). This license exception authorizes exports and reexports of operation technology and software; sales technology and software...

  12. 15 CFR 740.13 - Technology and software-unrestricted (TSU).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Technology and software-unrestricted... REGULATIONS LICENSE EXCEPTIONS § 740.13 Technology and software—unrestricted (TSU). This license exception authorizes exports and reexports of operation technology and software; sales technology and software...

  13. 15 CFR 740.13 - Technology and software-unrestricted (TSU).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Technology and software-unrestricted... REGULATIONS LICENSE EXCEPTIONS § 740.13 Technology and software—unrestricted (TSU). This license exception authorizes exports and reexports of operation technology and software; sales technology and software...

  14. 15 CFR 740.13 - Technology and software-unrestricted (TSU).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Technology and software-unrestricted... REGULATIONS LICENSE EXCEPTIONS § 740.13 Technology and software—unrestricted (TSU). This license exception authorizes exports and reexports of operation technology and software; sales technology and software...

  15. 15 CFR 740.13 - Technology and software-unrestricted (TSU).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Technology and software-unrestricted... REGULATIONS LICENSE EXCEPTIONS § 740.13 Technology and software—unrestricted (TSU). This license exception authorizes exports and reexports of operation technology and software; sales technology and software...

  16. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software in Young People with Down Syndrome.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Rey-Abella, Ferran; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2016-05-01

    People with Down syndrome present skeletal abnormalities in their feet that can be analyzed by commonly used gold standard indices (the Hernández-Corvo index, the Chippaux-Smirak index, the Staheli arch index, and the Clarke angle) based on footprint measurements. The use of Photoshop CS5 software (Adobe Systems Software Ireland Ltd, Dublin, Ireland) to measure footprints has been validated in the general population. The present study aimed to assess the reliability and validity of this footprint assessment technique in the population with Down syndrome. Using optical podography and photography, 44 footprints from 22 patients with Down syndrome (11 men [mean ± SD age, 23.82 ± 3.12 years] and 11 women [mean ± SD age, 24.82 ± 6.81 years]) were recorded in a static bipedal standing position. A blinded observer performed the measurements using a validated manual method three times during the 4-month study, with 2 months between measurements. Test-retest was used to check the reliability of the Photoshop CS5 software measurements. Validity and reliability were obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed very good values for the Photoshop CS5 method (ICC, 0.982-0.995). Validity testing also found no differences between the techniques (ICC, 0.988-0.999). The Photoshop CS5 software method is reliable and valid for the study of footprints in young people with Down syndrome.

  17. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  18. HALO--a Java framework for precise transcript half-life determination.

    PubMed

    Friedel, Caroline C; Kaufmann, Stefanie; Dölken, Lars; Zimmer, Ralf

    2010-05-01

    Recent improvements in experimental technologies now allow measurement of de novo transcription and/or RNA decay at the whole-transcriptome level and determination of precise transcript half-lives. Such transcript half-lives provide important insights into the regulation of biological processes and the relative contributions of RNA decay and de novo transcription to differential gene expression. In this article, we present HALO (Half-life Organizer), the first software for the precise determination of transcript half-lives from measurements of RNA de novo transcription or decay determined with microarrays or RNA-seq. In addition, methods for quality control, filtering, and normalization are supplied. HALO provides a graphical user interface, command-line tools, and a well-documented Java application programming interface (API). Thus, it can be used both by biologists, to determine transcript half-lives quickly and reliably with the provided user interfaces, and by software developers integrating transcript half-life analysis into other gene expression profiling pipelines. Source code, executables and documentation are available at http://www.bio.ifi.lmu.de/software/halo.
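
    At its core, half-life determination of this kind reduces to fitting an exponential to abundance measurements and converting the decay rate via t_1/2 = ln(2)/lambda. The sketch below shows that reduction on invented time-course data; HALO itself adds the quality control, filtering, and normalization steps named above.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a0, lam):
        """Exponential RNA decay after transcriptional shut-off."""
        return a0 * np.exp(-lam * t)

    # Hypothetical normalized abundances of one transcript (e.g. from a
    # 4sU-chase or actinomycin-D time course), means of replicates.
    t_hours = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    abundance = np.array([1.00, 0.71, 0.52, 0.26, 0.07])

    (a0, lam), _ = curve_fit(decay, t_hours, abundance, p0=(1.0, 0.3))
    print(f"decay rate = {lam:.3f}/h, half-life = {np.log(2) / lam:.2f} h")
    ```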

  19. Design and Implementation of a Modern Automatic Deformation Monitoring System

    NASA Astrophysics Data System (ADS)

    Engel, Philipp; Schweimler, Björn

    2016-03-01

    The deformation monitoring of structures and buildings is an important field of activity in modern engineering surveying, ensuring the stability and reliability of supervised objects over long periods. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It also discusses how the development effort for networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  20. A New Generation of Telecommunications for Mars: The Reconfigurable Software Radio

    NASA Technical Reports Server (NTRS)

    Adams, J.; Horne, W.

    2000-01-01

    Telecommunications is a critical component for any mission at Mars, as it is an enabling function that provides connectivity back to Earth and a means for conducting science. New developments in telecommunications, specifically in software-configurable radios, expand the possible approaches for science missions at Mars. These radios provide a flexible and reconfigurable platform that can evolve with the mission and that provides an integrated approach to communications and science data processing. Deep space telecommunication faces challenges not normally faced by terrestrial and near-Earth communications: radiation, thermal extremes, highly constrained mass, volume, and packaging, and reliability are all significant issues. Additionally, once the spacecraft leaves Earth, there is no way to go out and upgrade or replace radio components. The reconfigurable software radio is an effort to provide not only a product that is immediately usable in the harsh space environment but also a radio that will stay current as the years pass and technologies evolve.

  1. Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective

    NASA Technical Reports Server (NTRS)

    Reinholtz, Kirk

    2008-01-01

    This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy; however, ever more capabilities are now required, primarily in the software, while the same or a higher degree of reliability must be provided. The complexity of designing such an autonomous system is great, particularly when attempting to address the need for increased capabilities and high reliability without increased allowances of time or money. Efforts to develop programming models for the new hardware and to integrate the software architecture are highlighted.

  2. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
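
    The paper's model combines COSMIC-FFP measurements with Markov chains; the sketch below shows only the generic architecture-based reliability calculation in the style of Cheung's model, with invented transition probabilities and component reliabilities.

        import numpy as np

        # Hypothetical tool with three components; P[i][j] is the probability
        # that control transfers from component i to component j (the row
        # remainder is the probability of terminating in that component).
        P = np.array([[0.0, 0.7, 0.3],
                      [0.0, 0.0, 1.0],
                      [0.0, 0.0, 0.0]])
        R = np.array([0.999, 0.995, 0.990])    # per-visit component reliabilities

        # Cheung-style model: Q[i][j] = R[i] * P[i][j]; V = (I - Q)^-1 accumulates
        # reliable transfer probabilities over all paths, so a run starting in
        # component 0 and ending after component 2 succeeds with V[0][2] * R[2].
        Q = R[:, None] * P
        V = np.linalg.inv(np.eye(3) - Q)
        print(V[0, 2] * R[2])                  # ~0.9855 for these invented numbers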

  3. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who write the code make mistakes, and generating a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available that generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while others allow checking of individual modules and combined sets of modules as well. Given NASA's requirement for reliability, a comparison against manually generated in-house code is needed. Furthermore, automatically generated code is reputed to execute as efficiently as the best manually generated code. In-house verification of these claims is warranted.

  4. Establishing cephalometric landmarks for the translational study of Le Fort-based facial transplantation in Swine: enhanced applications using computer-assisted surgery and custom cutting guides.

    PubMed

    Santiago, Gabriel F; Susarla, Srinivas M; Al Rakan, Mohammed; Coon, Devin; Rada, Erin M; Sarhane, Karim A; Shores, Jamie T; Bonawitz, Steven C; Cooney, Damon; Sacks, Justin; Murphy, Ryan J; Fishman, Elliot K; Brandacher, Gerald; Lee, W P Andrew; Liacouras, Peter; Grant, Gerald; Armand, Mehran; Gordon, Chad R

    2014-05-01

    Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.

  5. A smart grid simulation testbed using Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei

    2014-06-01

    The smart grid is the integration of computing and communication technologies into a power grid with the goal of enabling real-time control and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders in the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated in software simulators before implementation. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, and load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.
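
    The authors' Simulink model is not reproduced in the record; as a rough Python illustration of the kind of energy balance such a testbed simulates, the toy dispatch loop below charges a battery on solar surplus and discharges it on deficit, with the grid covering the remainder. All numbers are invented.

        # Toy energy-balance dispatch: battery charges on surplus solar,
        # discharges on deficit; the grid covers whatever remains.
        load  = [3.0, 2.5, 2.0, 4.5, 6.0, 5.0]   # kW, hourly demand (invented)
        solar = [0.0, 1.0, 4.0, 5.0, 2.0, 0.0]   # kW, hourly PV output (invented)
        soc, capacity, rate = 2.0, 10.0, 3.0     # kWh stored, kWh max, kW limit

        for hour, (l, s) in enumerate(zip(load, solar)):
            residual = l - s                     # positive: deficit; negative: surplus
            if residual >= 0:                    # discharge battery, then use grid
                discharge = min(residual, rate, soc)
                soc -= discharge
                grid = residual - discharge
            else:                                # charge battery with surplus
                charge = min(-residual, rate, capacity - soc)
                soc += charge
                grid = residual + charge         # negative: export to grid
            print(f"h{hour}: grid={grid:+.1f} kW, soc={soc:.1f} kWh")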

  6. Software Carpentry In The Hydrological Sciences

    NASA Astrophysics Data System (ADS)

    Ahmadia, A. J.; Kees, C. E.

    2014-12-01

    Scientists are spending an increasing amount of time building and using hydrology software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. As hydrology models increase in capability and enter use by a growing number of scientists and their communities, it is important that the scientific software development practices scale up to meet the challenges posed by increasing software complexity, lengthening software lifecycles, a growing number of stakeholders and contributers, and a broadened developer base that extends from application domains to high performance computing centers. Many of these challenges in complexity, lifecycles, and developer base have been successfully met by the open source community, and there are many lessons to be learned from their experiences and practices. Additionally, there is much wisdom to be found in the results of research studies conducted on software engineering itself. Software Carpentry aims to bridge the gap between the current state of software development and these known best practices for scientific software development, with a focus on hands-on exercises and practical advice. In 2014, Software Carpentry workshops targeting earth/environmental sciences and hydrological modeling have been organized and run at the Massachusetts Institute of Technology, the US Army Corps of Engineers, the Community Surface Dynamics Modeling System Annual Meeting, and the Earth Science Information Partners Summer Meeting. In this presentation, we will share some of the successes in teaching this material, as well as discuss and present instructional material specific to hydrological modeling.

  7. Development of x-ray imaging technique for liquid screening at airport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulaiman, Nurhani binti, E-mail: nhani.sulaiman@gmail.com; Srisatit, Somyot, E-mail: somyot.s@chula.ac.th

    2016-01-22

    X-ray imaging technology is a viable option for recognizing flammable liquids for the purposes of aviation security. In this study, an X-ray imaging technology was developed whereby the image viewing system was built with the use of a digital camera coupled with a gadolinium oxysulfide (GOS) fluorescent screen. The camera was equipped with software for remote control of the camera settings via a USB cable, which allows the images to be captured. Each image was analysed to determine the average grey level using software written in Microsoft Visual Basic 6.0. Data were obtained for various liquid densities at thicknesses of 4.5 cm, 6.0 cm and 7.5 cm, for X-ray energies ranging from 70 to 200 kVp. In order to verify the reliability of the constructed calibration data, the system was tested with a few types of unknown liquids. The developed system could be conveniently employed for security screening in order to discriminate between a threat and an innocuous liquid.
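
    The record does not detail the analysis software beyond its use of average grey levels; the sketch below shows one minimal way such a measurement could work, averaging the grey level over a region of interest and matching it against a calibration table. All names and values are hypothetical.

        import numpy as np

        # Hypothetical calibration: mean grey level at a fixed kVp and 6.0 cm
        # liquid thickness for reference liquids of known density (g/cm3).
        CALIBRATION = {0.79: 182.4, 1.00: 151.7, 1.26: 118.9}

        def mean_grey_level(image, roi):
            # Average grey level inside a rectangular region of interest.
            top, bottom, left, right = roi
            return float(np.mean(image[top:bottom, left:right]))

        def estimate_density(image, roi):
            # Match the ROI's mean grey level to the closest calibration entry.
            g = mean_grey_level(image, roi)
            return min(CALIBRATION, key=lambda rho: abs(CALIBRATION[rho] - g))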

  8. Quantitative assessment of human motion using video motion analysis

    NASA Technical Reports Server (NTRS)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  9. Design and reliability analysis of DP-3 dynamic positioning control architecture

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru

    2011-12-01

    As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundancy hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware redundant configuration takes the form of a triple-redundant hot-standby configuration comprising three identical operator stations and three real-time control computers connected to each other through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting and fault detection is presented in detail. A hierarchical software architecture was planned during the development of the software, consisting of an application layer, a real-time layer and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability, and the effects of variation in parameters on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
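
    The paper's VxWorks implementation is not shown in the record; the fragment below sketches only the 2-out-of-3 majority-voting idea for triple-redundant controller outputs. The tolerance policy and return convention are assumptions for illustration, not the DP-3 system's actual voting rules.

        def majority_vote(readings, tolerance=1e-3):
            # 2-out-of-3 voter: return the mean of the largest set of mutually
            # agreeing channels plus the indices of any dissenting channels.
            assert len(readings) == 3
            agree = lambda a, b: abs(a - b) <= tolerance
            for i in range(3):
                peers = [r for j, r in enumerate(readings)
                         if j != i and agree(readings[i], r)]
                if peers:                        # at least two channels agree
                    voted = (readings[i] + sum(peers)) / (1 + len(peers))
                    faulty = [j for j in range(3) if not agree(readings[j], voted)]
                    return voted, faulty
            raise RuntimeError("no two channels agree")  # fail safe on total split

        print(majority_vote([10.01, 10.01, 13.7], tolerance=0.1))  # channel 2 flagged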

  10. Software Engineering Research/Developer Collaborations in 2005

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom

    2006-01-01

    In CY 2005, three collaborations between software engineering technology providers and NASA software development personnel deployed three software engineering technologies on NASA development projects (a different technology on each project). The main purposes were to benefit the projects, infuse the technologies if beneficial into NASA, and give feedback to the technology providers to improve the technologies. Each collaboration project produced a final report. Section 2 of this report summarizes each project, drawing from the final reports and communications with the software developers and technology providers. Section 3 indicates paths to further infusion of the technologies into NASA practice. Section 4 summarizes some technology transfer lessons learned. Also included is an acronym list.

  11. Payload software technology: Software technology development plan

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Programmatic requirements for the advancement of software technology are identified for meeting the space flight requirements in the 1980 to 1990 time period. The development items are described, and software technology item derivation worksheets are presented along with the cost/time/priority assessments.

  12. Software Technology for Adaptable, Reliable Systems (STARS) (User Manual). Ada Command Environment (ACE) Version 8.0 Sun OS Implementation

    DTIC Science & Technology

    1990-10-29

    (OCR fragments from the scanned manual: Ada package listings renaming type names used by Xt routines to the equivalent type names in the basic X library, Xt Intrinsics type declarations common to all Xt toolkit routines, and standard integer constants such as MinInt and MaxInt.)

  13. [An experimental research on the fabrication of the fused porcelain to CAD/CAM molar crown].

    PubMed

    Dai, Ning; Zhou, Yongyao; Liao, Wenhe; Yu, Qing; An, Tao; Jiao, Yiqun

    2007-02-01

    This paper introduces the fabrication process of a porcelain-fused molar crown produced with CAD/CAM technology. First, the prepared tooth data were acquired with a 3D optical measuring system. Then, the inner surface was reconstructed and the outer surface shape designed with computer-aided design software. Finally, a miniature high-speed NC milling machine was used to produce the porcelain-fused CAD/CAM molar crown. The results proved that the fabrication process is reliable and efficient and that the quality of the dental restoration is stable and precise.

  14. User’s Manual for a Prototype Binding of ANSI-Standard SQL to Ada Supporting the SAME Methodology for the Software Technology for Adaptable, Reliable Systems (STARS) Program

    DTIC Science & Technology

    1990-06-30

    (OCR fragments from the scanned manual: an example of declaring the valid enumeration values for a domain of type enumeration in Ada, e.g. type Color_Vals is (red, white, blue), followed by the instantiation of a generic package that generates the corresponding SQL enumeration domain and its operations.)

  15. Simulation of future global warming scenarios in rice paddies with an open-field warming facility

    PubMed Central

    2011-01-01

    To simulate expected future global warming, hexagonal arrays of infrared heaters have previously been used to warm open-field canopies of upland crops such as wheat. Through the use of concrete-anchored posts, improved software, overhead wires, extensive grounding, and monitoring with a thermal camera, the technology was safely and reliably extended to paddy rice fields. The system maintained canopy temperature increases within 0.5°C of the daytime and nighttime set-point differences of 1.3 and 2.7°C, respectively, 67% of the time. PMID:22145582

  16. Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.

    PubMed

    Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William

    2018-05-08

    Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.

  17. Modernizing Systems and Software: How Evolving Trends in Future Trends in Systems and Software Technology Bode Well for Advancing the Precision of Technology

    DTIC Science & Technology

    2009-04-23

    (OCR fragments of the briefing slides: the need for increased functionality as a forcing function bringing the fields of software and systems engineering together, and the observation that the scale of software-intensive systems is increasing.)

  18. Developing of an automation for therapy dosimetry systems by using labview software

    NASA Astrophysics Data System (ADS)

    Aydin, Selim; Kam, Erol

    2018-06-01

    Traceability, accuracy and consistency of radiation measurements are essential in radiation dosimetry, particularly in radiotherapy, where the outcome of treatment is highly dependent on the radiation dose delivered to patients. It is therefore very important to provide reliable, accurate and fast calibration services for therapy dosimeters, since the radiation dose delivered to a radiotherapy patient is directly related to the accuracy and reliability of these devices. In this study, we report the performance of in-house developed, computer-controlled data acquisition and monitoring software for commercially available radiation therapy electrometers. The LabVIEW® software suite is used to provide reliable, fast and accurate calibration services. The software also collects environmental data such as temperature, pressure and humidity in order to use them in correction factor calculations. By using this software tool, better control over the calibration process is achieved and the need for human intervention is reduced. This is the first software tool that can control dosimeter systems frequently used in the radiation therapy field at hospitals, such as Unidos Webline, Unidos E, Dose-1 and PC Electrometers.
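
    The abstract does not spell out which correction factors the software computes; a standard one in ionization-chamber dosimetry is the air-density (temperature-pressure) correction, sketched below. The reference conditions shown are the commonly used protocol values, not values confirmed by the paper.

        def k_tp(temp_c, pressure_kpa, ref_temp_c=20.0, ref_pressure_kpa=101.325):
            # Air-density correction for vented ionization chambers:
            # k_TP = (273.15 + T) / (273.15 + T0) * (P0 / P).
            return ((273.15 + temp_c) / (273.15 + ref_temp_c)
                    * (ref_pressure_kpa / pressure_kpa))

        # Correct an electrometer reading taken at 22.4 degC and 99.8 kPa
        # (the reading itself is an invented value, in coulombs):
        corrected_reading = 1.2345e-9 * k_tp(22.4, 99.8)
        print(corrected_reading)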

  19. A testing-coverage software reliability model considering fault removal efficiency and error generation

    PubMed Central

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; and 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might persist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data, using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
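
    The paper's specific mean value function is not given in the record; the sketch below shows the generic NHPP machinery that all such SRGMs share, with a Goel-Okumoto-style m(t) standing in for the proposed testing-coverage-based one.

        import math

        def m(t, a=120.0, b=0.15):
            # Stand-in NHPP mean value function, Goel-Okumoto form a(1 - e^{-bt});
            # the proposed model would replace this with a coverage-based m(t).
            return a * (1.0 - math.exp(-b * t))

        def conditional_reliability(x, t, mvf=m):
            # For an NHPP, P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t))).
            return math.exp(-(mvf(t + x) - mvf(t)))

        # After 30 hours of testing, probability of surviving 5 more hours:
        print(conditional_reliability(5.0, 30.0))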

  20. Development of a Standard Set of Software Indicators for Aeronautical Systems Center.

    DTIC Science & Technology

    1992-09-01

    (OCR fragments from the report: the composite models listed include COCOMO and the Software Productivity, Quality, and Reliability model (SPQR); the SPQR model requires values for 68 input parameters, for which the source provides no specifics.)

  1. Software technology insertion: A study of success factors

    NASA Technical Reports Server (NTRS)

    Lydon, Tom

    1990-01-01

    Managing software development in large organizations has become increasingly difficult due to increasing technical complexity, stricter government standards, a shortage of experienced software engineers, competitive pressure for improved productivity and quality, the need to co-develop hardware and software together, and the rapid changes in both hardware and software technology. The 'software factory' approach to software development minimizes risks while maximizing productivity and quality through standardization, automation, and training. However, in practice, this approach is relatively inflexible when adopting new software technologies. The methods that a large multi-project software engineering organization can use to increase the likelihood of successful software technology insertion (STI), especially in a standardized engineering environment, are described.

  2. On the use and the performance of software reliability growth models

    NASA Technical Reports Server (NTRS)

    Keiller, Peter A.; Miller, Douglas R.

    1991-01-01

    We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals relative to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.

  3. Digital image management project for dermatological health care environments: a new dedicated software and review of the literature.

    PubMed

    Rubegni, Pietro; Nami, Niccolò; Poggiali, Sara; Tataranno, Domenico; Fimiani, M

    2009-05-01

    Because the skin is the only organ completely accessible to visual examination, digital technology has attracted the attention of dermatologists for documenting, monitoring, measuring and classifying morphological manifestations. Our aim was to describe a digital image management system dedicated to dermatological health care environments and to compare it with other existing software for digital image storage. We designed a reliable hardware structure that could ensure future scaling, because storage needs tend to grow exponentially. For the software, we chose a client-web server application based on a relational database and with a 'minimalist' user interface. We developed software with a ready-made, adaptable index of skin pathologies. It facilitates classification by pathology, patient and visit, with an advanced search option allowing access to all images according to personalized criteria. The software also offers the possibility of comparing two or more digital images (follow-up). The ability to easily import archives of years of digital photos acquired and saved on PCs distinguishes it from other programs on the market; this option is fundamental for accessing all the photos taken over years of practice without entering them one by one. The program is available to any user connected to the local Intranet, and the system may in the future be made directly available from the Internet. All clinics and surgeries, especially those that rely on digital images, are obliged to keep up with technological advances. It is therefore hoped that our project will become a model for medical structures intending to rationalise digital and other data according to statutory requirements.

  4. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies in facilitating Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. The performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on its high classification accuracy and the minimal training required to perform gesture commands.

  5. The Goddard Space Flight Center (GSFC) robotics technology testbed

    NASA Technical Reports Server (NTRS)

    Schnurr, Rick; Obrien, Maureen; Cofer, Sue

    1989-01-01

    Much of the technology planned for use in NASA's Flight Telerobotic Servicer (FTS) and the Demonstration Test Flight (DTF) is relatively new and untested. To provide the answers needed to design safe, reliable, and fully functional robotics for flight, NASA/GSFC is developing a robotics technology testbed for research of issues such as zero-g robot control, dual arm teleoperation, simulations, and hierarchical control using a high level programming language. The testbed will be used to investigate these high risk technologies required for the FTS and DTF projects. The robotics technology testbed is centered around the dual arm teleoperation of a pair of 7 degree-of-freedom (DOF) manipulators, each with their own 6-DOF mini-master hand controllers. Several levels of safety are implemented using the control processor, a separate watchdog computer, and other low level features. High speed input/output ports allow the control processor to interface to a simulation workstation: all or part of the testbed hardware can be used in real time dynamic simulation of the testbed operations, allowing a quick and safe means for testing new control strategies. The NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) hierarchical control scheme is being used as the reference standard for system design. All software developed for the testbed, excluding some of the simulation workstation software, is being developed in Ada. The testbed is being developed in phases. The first phase, which is nearing completion, is described, and future developments are highlighted.

  6. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
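
    The paper's three models are not reproduced in the record; the toy calculation below only illustrates why failure correlation matters in back-to-back testing. Under independence, two versions fail coincidentally (masking the fault from the comparison) with probability p1*p2; the correlated-Bernoulli form used here is a standard way to hedge dependence, not the authors' model.

        import math

        def coincident_failure_prob(p1, p2, corr=0.0):
            # Probability that two functionally equivalent versions fail on the
            # same test case. Independence gives p1*p2; for correlated Bernoulli
            # failures, P = p1*p2 + corr * sqrt(p1*(1-p1)*p2*(1-p2)).
            return p1 * p2 + corr * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))

        print(coincident_failure_prob(0.01, 0.01))            # independent: 1e-4
        print(coincident_failure_prob(0.01, 0.01, corr=0.1))  # ~1.09e-3, ten times worse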

  7. EOSlib, Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Nathan; Menikoff, Ralph

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
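
    EOSlib's actual API is not shown in the record; the closed-form sketch below computes the principal Hugoniot of a gamma-law (ideal) gas from the Rankine-Hugoniot jump conditions, the sort of analytic reference curve such a library can be verified against.

        def hugoniot_pressure(v, p0=1.0e5, v0=1.0, gamma=1.4):
            # Rankine-Hugoniot jump conditions for a gamma-law gas give, in
            # specific volume v:
            #   p(v) = p0 * ((g+1)*v0 - (g-1)*v) / ((g+1)*v - (g-1)*v0)
            num = (gamma + 1.0) * v0 - (gamma - 1.0) * v
            den = (gamma + 1.0) * v - (gamma - 1.0) * v0
            return p0 * num / den

        # Compressing the gas to 60% of its initial specific volume:
        print(hugoniot_pressure(0.6))   # Pa, roughly 2.1x the initial pressure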

  8. Discrete Address Beacon System (DABS) Software System Reliability Modeling and Prediction.

    DTIC Science & Technology

    1981-06-01

    (OCR fragments from the report: reliability prediction models for the DABS software modules were derived and then verified; the ATARS (Automatic Traffic Advisory and Resolution Service) module was excluded because of its interim status, and off-line analysis tools used during maintenance or pre-initialization were not modeled because they are not part of the mission software.)

  9. 75 FR 10439 - Cognitive Radio Technologies and Software Defined Radios

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-08

    ... Technologies and Software Defined Radios AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY... concerning the use of open source software to implement security features in software defined radios (SDRs... ongoing technical developments in cognitive and software defined radio (SDR) technologies. 2. On April 20...

  10. 15 CFR 740.6 - Technology and software under restriction (TSR).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Technology and software under... REGULATIONS LICENSE EXCEPTIONS § 740.6 Technology and software under restriction (TSR). (a) Scope. License Exception TSR permits exports and reexports of technology and software where the Commerce Country Chart...

  11. 15 CFR 740.6 - Technology and software under restriction (TSR).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Technology and software under... REGULATIONS LICENSE EXCEPTIONS § 740.6 Technology and software under restriction (TSR). (a) Scope. License Exception TSR permits exports and reexports of technology and software where the Commerce Country Chart...

  12. 15 CFR 740.6 - Technology and software under restriction (TSR).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Technology and software under... REGULATIONS LICENSE EXCEPTIONS § 740.6 Technology and software under restriction (TSR). (a) Scope. License Exception TSR permits exports and reexports of technology and software where the Commerce Country Chart...

  13. 15 CFR 740.6 - Technology and software under restriction (TSR).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Technology and software under... REGULATIONS LICENSE EXCEPTIONS § 740.6 Technology and software under restriction (TSR). (a) Scope. License Exception TSR permits exports and reexports of technology and software where the Commerce Country Chart...

  14. 15 CFR 740.6 - Technology and software under restriction (TSR).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Technology and software under... REGULATIONS LICENSE EXCEPTIONS § 740.6 Technology and software under restriction (TSR). (a) Scope. License Exception TSR permits exports and reexports of technology and software where the Commerce Country Chart...

  15. Introduction of structural health and safety monitoring warning systems for Shenzhen-Hong Kong Western Corridor Shenzhen Bay Bridge

    NASA Astrophysics Data System (ADS)

    Li, N.; Zhang, X. Y.; Zhou, X. T.; Leng, J.; Liang, Z.; Zheng, C.; Sun, X. F.

    2008-03-01

    Through a brief introduction of the completed structural health and safety monitoring warning system for the Shenzhen-Hong Kong Western Corridor Shenzhen Bay highway bridge (SZBHMS), the self-developed system framework and the hardware and software scheme of this practical research project are systematically discussed in this paper. The data acquisition and transmission hardware and the basic software are based on National Instruments (NI) virtual instrument technology, and the system adopts GPS time-service receiver technology. The objectives are to establish a structural safety monitoring and status evaluation system that monitors the structural responses and working conditions in real time and analyzes the structural working state using information obtained from the measured data. The system also provides a scientific basis for decision-making in bridge management and maintenance. Potential technical approaches to structural safety warning systems and status identification and evaluation methods are presented. The results indicate that the performance of the system has achieved the desired objectives, ensuring the long-term reliability, real-time concurrency and technical currency of SZBHMS. This achievement, the first implementation of its kind in China, provides a reference for the design of structural health and safety monitoring warning systems for long-span bridges.

  16. "Diagnosis by behavioral observation" home-videosomnography - a rigorous ethnographic approach to sleep of children with neurodevelopmental conditions.

    PubMed

    Ipsiroglu, Osman S; Hung, Yi-Hsuan Amy; Chan, Forson; Ross, Michelle L; Veer, Dorothee; Soo, Sonja; Ho, Gloria; Berger, Mai; McAllister, Graham; Garn, Heinrich; Kloesch, Gerhard; Barbosa, Adriano Vilela; Stockler, Sylvia; McKellin, William; Vatikiotis-Bateson, Eric

    2015-01-01

    Advanced video technology is available for sleep laboratories. However, low-cost equipment for screening in the home setting has not been identified and tested, nor has a methodology for analysis of video recordings been suggested. We investigated different combinations of hardware/software for home-videosomnography (HVS) and established a process for qualitative and quantitative analysis of HVS-recordings. A case vignette (HVS analysis for a 5.5-year-old girl with major insomnia and several co-morbidities) demonstrates how methodological considerations were addressed and how HVS added value to clinical assessment. We suggest an "ideal set of hardware/software" that is reliable, affordable (∼$500) and portable (2.8 kg) to conduct non-invasive HVS, which allows time-lapse analyses. The equipment consists of a netbook, a camera with infrared optics, and a video capture device. (1) We present an HVS-analysis protocol consisting of three steps of analysis at varying replay speeds: (a) basic overview and classification at 16× normal speed; (b) second viewing and detailed descriptions at 4-8× normal speed, and (c) viewing, listening, and in-depth descriptions at real-time speed. (2) We also present a custom software program that facilitates video analysis and note-taking (Annotator©), and Optical Flow software that automatically quantifies movement for internal quality control of the HVS-recording. The case vignette demonstrates how the HVS-recordings revealed the dimension of insomnia caused by restless legs syndrome, and illustrated the cascade of symptoms, challenging behaviors, and resulting medications. The strategy of using HVS, although requiring validation and reliability testing, opens the floor for a new "observational sleep medicine," which has been useful in describing discomfort-related behavioral movement patterns in patients with communication difficulties presenting with challenging/disruptive sleep/wake behaviors.
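
    The authors' Optical Flow tool is not publicly described in the record; the sketch below shows one way movement could be quantified from an HVS recording using OpenCV's Farneback optical-flow estimator, reporting a mean flow magnitude per frame. The file name and the choice of OpenCV are assumptions for illustration.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("hvs_recording.mp4")   # hypothetical file name
        ok, frame = cap.read()
        assert ok, "could not read the recording"
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            # Mean per-pixel flow magnitude as a simple movement index:
            motion.append(float(np.linalg.norm(flow, axis=2).mean()))
            prev = gray
        cap.release()
        print(f"{len(motion)} frames, peak movement index {max(motion):.2f}")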

  17. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  18. NASA Software Assurance's Roles in Research and Technology

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2010-01-01

    This slide presentation reviews the interactions between the scientists and engineers doing research and technology work and the software developers and others doing software assurance. It discusses the role of Safety and Mission Assurance (SMA) in developing software to be used for research and technology, and the growing importance of this role as the technology moves up through the technology readiness levels (TRLs). There is also a call to change the way software is developed.

  19. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
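
    As a worked example of the kind of conjugate update such a course covers (not taken from the proceedings themselves), the sketch below computes the Gamma posterior for a failure rate observed under a Poisson model; all numbers are invented.

        from scipy import stats

        prior_shape, prior_rate = 2.0, 100.0     # prior mean 0.02 failures/hour
        failures, exposure_hours = 3, 400.0      # observed data

        # Gamma prior + Poisson likelihood gives a closed-form Gamma posterior:
        post_shape = prior_shape + failures
        post_rate = prior_rate + exposure_hours

        posterior = stats.gamma(a=post_shape, scale=1.0 / post_rate)
        print(posterior.mean())                  # ~0.01 failures/hour
        print(posterior.interval(0.95))          # 95% credible interval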

  20. Space Shuttle Software Development and Certification

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Henderson, Johnnie A

    2000-01-01

    Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learned from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost-effective manner on future long-term projects, where key application development tools will be COTS rather than internally developed custom tools.

  1. Autonomous system for Web-based microarray image analysis.

    PubMed

    Bozinov, Daniel

    2003-12-01

    Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system whose input is confined solely to a single microarray image and whose output is a data table containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Herein we introduce an integrated software system with a Java-based interface on the client side that allows decentralized access and enables scientists to instantly employ the most up-to-date software version at any given time. The tool extends PixClust, as used in Extractiff, and incorporates Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.

  2. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.

    PubMed

    Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon

    2018-02-28

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.

  3. Design, Development and Pre-Flight Testing of the Communications, Navigation, and Networking Reconfigurable Testbed (Connect) to Investigate Software Defined Radio Architecture on the International Space Station

    NASA Technical Reports Server (NTRS)

    Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III

    2011-01-01

    The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission which will investigate the usage of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are implemented in software. The software-defined capability allows flexibility and experimentation with different modulation, coding and other parameters to understand their effects on performance. This flexibility builds inherent redundancy and adaptability into the system for improved operational efficiency, real-time changes to space missions and enhanced reliability/redundancy. The CoNNeCT Project is a collaboration between industrial radio providers and NASA: the industrial radio providers are providing the SDRs, and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.

  4. Batching System for Superior Service

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Veridian's Portable Batch System (PBS) was the recipient of the 1997 NASA Space Act Award for outstanding software. A batch system is a set of processes for managing queues and jobs. Without a batch system, it is difficult to manage the workload of a computer system. By bundling the enterprise's computing resources, the PBS technology offers users a single coherent interface, resulting in efficient management of the batch services. Users choose which information to package into "containers" for system-wide use. PBS also provides detailed system usage data, a procedure not easily executed without this software. PBS operates on networked, multi-platform UNIX environments. Veridian's new version, PBS Pro™, has additional features and enhancements, including support for additional operating systems. Veridian distributes the original version of PBS as Open Source software via the PBS website. Customers can register and download the software at no cost. PBS Pro is also available via the web and offers additional features such as increased stability, reliability, and fault tolerance. A company using PBS can expect a significant increase in the effective management of its computing resources. Tangible benefits include increased utilization of costly resources and enhanced understanding of computational requirements and user needs.

  5. A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility

    PubMed Central

    2018-01-01

    Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid’s data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction. PMID:29495599

  6. Final Technical Report on Quantifying Dependability Attributes of Software Based Safety Critical Instrumentation and Control Systems in Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smidts, Carol; Huang, Funqun; Li, Boyuan

    With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software make trust in these systems a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges, since software systems’ characteristics differ from those of hardware systems. The lack of systematic, science-based methods for quantifying the dependability attributes of software-based instrumentation and control systems in safety-critical applications has proved to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety, and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between the various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below; each stage is further examined in its corresponding chapter.
    Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained.
    Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute. The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g. security failure. The extraction was based on the analysis of the expert elicitation results obtained in Stage 1.
    Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter.
    Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required.
    Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews.
    Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. The ability to derive such a quantification model shows that the causal models and measurements derived in the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e. the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost-effective selection of the duration of the requirements phase.

  7. Automatic Speech Recognition: Reliability and Pedagogical Implications for Teaching Pronunciation

    ERIC Educational Resources Information Center

    Kim, In-Seok

    2006-01-01

    This study examines the reliability of automatic speech recognition (ASR) software used to teach English pronunciation, focusing on one particular piece of software, "FluSpeak," as a typical example. Thirty-six Korean English as a Foreign Language (EFL) college students participated in an experiment in which they listened to 15 sentences…

  8. A Survey of Software Reliability Modeling and Estimation

    DTIC Science & Technology

    1983-09-01

    considered include the Jelinski-Moranda Model, the Geometric Model, and Musa's Model. A Monte Carlo study of the behavior of the least squares...ceedings Number 261, 1979, pp. 34-1, 34-11. ... Sukert, Alan and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980

  9. Evaluation of Fieldbus and OPC for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Boulanger, Richard P.; Cardinale, Paul; Bradley, Matthew; Luna, Bernadette (Technical Monitor)

    2000-01-01

    FOUNDATION(TM) Fieldbus and OPC(TM) (OLE(TM) for Process Control) technologies were integrated into an existing control system for a crop growth chamber at NASA Ames Research Center. FOUNDATION(TM) Fieldbus is a digital, bi-directional, multi-drop, serial communications network which functions essentially as a LAN for sensors. FOUNDATION(TM) Fieldbus is heterarchical, with publishers and subscribers of data performing complex control functions at low levels without centralized control and its associated overhead. OPC(TM) is a set of interfaces which replaces proprietary drivers with a transparent means of exchanging data between the fieldbus and applications. The objectives were: (1) to integrate FOUNDATION(TM) Fieldbus into existing ALS hardware and determine its overall effectiveness and reliability, and (2) to quantify any savings produced by using fieldbus and OPC technologies. We encountered several problems with the FOUNDATION(TM) Fieldbus hardware chosen. Our hardware exposed 100 data values for each channel of the fieldbus. The fieldbus configurator software used to program the fieldbus was simply not adequate. The fieldbus was also not inherently reliable: it lost its settings twice during our tests for unknown reasons. OPC also had issues. It did not function at all as supplied, requiring substitution of some of its components with those from other vendors. It would stop working after a fixed period of time, and certain database calls would eventually lock the machine. Overall, we would not recommend FOUNDATION(TM) Fieldbus: it was too difficult to implement, with little overall added value. It also seems unlikely that FOUNDATION(TM) Fieldbus will gain sufficient penetration into the laboratory instrument market to ever be cost effective for the ALS community. OPC had good reliability and performance once a stable installation was achieved. It allowed a rapid change to an alternative software strategy when our first strategy failed. It is a cost-effective solution to distributed control systems development.

  10. Software service history report

    DOT National Transportation Integrated Search

    2002-01-01

    The safe and reliable operation of software within civil aviation systems and equipment has historically been assured through the application of rigorous design assurance applied during the software development process. Increasingly, manufacturers ar...

  11. 31 CFR 545.204 - Prohibited exportation, reexportation, sale, or supply of goods, software, technology, or services.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., sale, or supply of goods, software, technology, or services. 545.204 Section 545.204 Money and Finance... exportation, reexportation, sale, or supply of goods, software, technology, or services. Except as otherwise... States, or by a U.S. person, wherever located, of any goods, software, technology (including technical...

  12. Reliability, Safety and Error Recovery for Advanced Control Software

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2003-01-01

    For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.

  13. Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.

    1984-01-01

    SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
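
    A minimal sketch of the majority-voting error masking described above, assuming redundant results arrive as a mapping from processor name to computed value; the values and processor names are invented, and the real SIFT voter runs in executive software over redundant hardware.

      # Toy majority voter: mask errors by taking the majority value and
      # flag processors whose outputs disagree (candidates for removal).
      from collections import Counter

      def majority_vote(results):
          """Return the majority value among redundant computations and the
          processors whose outputs disagree with it."""
          value, count = Counter(results.values()).most_common(1)[0]
          if count <= len(results) // 2:
              raise RuntimeError("no majority: too many faulty processors")
          disagreeing = [p for p, v in results.items() if v != value]
          return value, disagreeing

      print(majority_vote({"P1": 42, "P2": 42, "P3": 41}))  # (42, ['P3'])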

  14. 15 CFR Supplement No. 2 to Part 774 - General Technology and Software Notes

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false General Technology and Software Notes... Software Notes 1. General Technology Note. The export of “technology” that is “required” for the... necessary” information. 2. General Software Note. License Exception TSU (“mass market” software) is...

  15. 15 CFR Supplement No. 2 to Part 774 - General Technology and Software Notes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false General Technology and Software Notes... Software Notes 1. General Technology Note. The export of “technology” that is “required” for the... necessary” information. 2. General Software Note. License Exception TSU (“mass market” software) is...

  16. 15 CFR Supplement No. 2 to Part 774 - General Technology and Software Notes

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false General Technology and Software Notes... Software Notes 1. General Technology Note. The export of “technology” that is “required” for the... necessary” information. 2. General Software Note. License Exception TSU (“mass market” software) is...

  17. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
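
    As a rough illustration of the Markov modeling and performability ideas above, the sketch below propagates a discrete-time Markov chain over hypothetical system states and reads off reliability and the mean and variance of a per-state performance level. All transition probabilities and performance levels are invented; the AIPS models are far more detailed.

      # Discrete-time Markov chain over system states, with a performance
      # level attached to each state; performability = statistics of that
      # level under the state distribution.
      import numpy as np

      P = np.array([[0.990, 0.009, 0.001],   # 2 processors up
                    [0.000, 0.995, 0.005],   # 1 processor up (degraded)
                    [0.000, 0.000, 1.000]])  # system failed (absorbing)
      perf = np.array([1.0, 0.5, 0.0])       # performance level per state

      state = np.array([1.0, 0.0, 0.0])      # start fully operational
      for _ in range(1000):                  # propagate 1000 time steps
          state = state @ P

      reliability = 1.0 - state[2]           # P(not yet absorbed in failure)
      mean_perf = state @ perf               # performability: mean performance
      var_perf = state @ (perf - mean_perf) ** 2
      print(f"R={reliability:.3f} mean={mean_perf:.3f} var={var_perf:.3f}")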

  18. Air-condition Control System of Weaving Workshop Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Song, Jian

    An air-conditioning measurement and control system based on LabVIEW is proposed to effectively control the environmental targets in a weaving workshop. The system is built on virtual instrument technology, adopting the LabVIEW development platform by NI. It is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules and executive devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interaction interface is designed, with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
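
    The paper's actual rule base and membership functions are not given, so the following is only a toy sketch of one fuzzy control step in Python: triangular memberships over the temperature error and weighted-average defuzzification. All breakpoints and output levels are assumptions.

      # Toy fuzzy controller step: fuzzify the temperature error, then
      # defuzzify to a cooling command via a weighted average.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def cooling_output(temp_error):
          """Map temperature error (deg C above setpoint) to cooling power 0..1."""
          low = tri(temp_error, -1, 0, 1)
          med = tri(temp_error, 0, 1.5, 3)
          high = tri(temp_error, 2, 4, 6)
          total = low + med + high
          # Weighted average over singleton outputs 0.0 / 0.5 / 1.0.
          return 0.0 if total == 0 else (0.5 * med + 1.0 * high) / total

      print(cooling_output(1.0))  # moderate cooling for a 1 degree overshoot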

  19. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems while the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA) established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes, to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  20. Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.

    PubMed

    Kerepesi, Csaba; Grolmusz, Vince

    2016-05-01

    DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pair) DNA reads, which are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software tools (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capability to assign quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, so some tools assign higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of a metagenomic tool to this genome length bias. Therefore, we have made a simple benchmark for evaluating the "taxon counting" of a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads of average length 150 bp, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a tool fails on this simple task, it will surely fail on most real metagenomes. We applied the three tools to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results and the other two tools were under-performers: they counted each short read quite reliably to its respective taxon, producing the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
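
    The benchmark construction lends itself to a short sketch. The version below uses fixed-length sampling at random positions rather than true random fragmentation, and placeholder strings instead of real FASTA genomes, but it shows why equal copy numbers of unequal genomes yield unequal read counts, i.e., the genome length bias.

      # Sketch of the simple benchmark: equal copy numbers of genomes of
      # different lengths, sampled into 150 bp reads and shuffled.
      import random

      def shred(genome, n_copies=3, read_len=150):
          """Cut copies of a genome at random positions into fixed-length reads."""
          reads = []
          for _ in range(n_copies):
              starts = sorted(random.randrange(len(genome))
                              for _ in range(len(genome) // read_len))
              reads += [genome[s:s + read_len] for s in starts]
          return [r for r in reads if len(r) == read_len]

      genomes = {"short_taxon": "ACGT" * 5_000,      # 20 kbp placeholder
                 "long_taxon": "TTGACA" * 20_000}    # 120 kbp placeholder
      mixture = [read for g in genomes.values() for read in shred(g)]
      random.shuffle(mixture)
      print(len(mixture), "reads")  # the longer genome contributes ~6x more reads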

  1. Evaluation of the efficiency and reliability of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1994-01-01

    There are numerous studies which show that CASE Tools greatly facilitate software development. As a result of these advantages, an increasing amount of software development is done with CASE Tools. As more software engineers become proficient with these tools, their experience and feedback lead to further development with the tools themselves. What has not been widely studied, however, is the reliability and efficiency of the actual code produced by the CASE Tools. This investigation considered these matters. Three segments of code generated by MATRIXx, one of many commercially available CASE Tools, were chosen for analysis: ETOFLIGHT, a portion of the Earth to Orbit Flight software, and ECLSS and PFMC, modules for Environmental Control and Life Support System and Pump Fan Motor Control, respectively.

  2. NASA software specification and evaluation system design, part 2

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A survey and analysis of the existing methods, tools and techniques employed in the development of software are presented, along with recommendations for the construction of reliable software. Functional designs for a software specification language and a database verifier are presented.

  3. Effective Software Engineering Leadership for Development Programs

    ERIC Educational Resources Information Center

    Cagle West, Marsha

    2010-01-01

    Software is a critical component of systems ranging from simple consumer appliances to complex health, nuclear, and flight control systems. The development of quality, reliable, and effective software solutions requires the incorporation of effective software engineering processes and leadership. Processes, approaches, and methodologies for…

  4. Major transitions in information technology

    PubMed Central

    Valverde, Sergi

    2016-01-01

    When looking at the history of technology, we can see that not all inventions are of equal importance. Only a few technologies have the potential to start a new branching series (specifically, by increasing diversity), have a lasting impact on human life and ultimately become turning points. Technological transitions correspond to times and places in the past when a large number of novel artefact forms or behaviours appeared together or in rapid succession. Why does that happen? Is technological change continuous and gradual, or does it occur in sudden leaps and bounds? The evolution of information technology (IT) allows for a quantitative and theoretical approach to technological transitions. The value of information systems experiences sudden changes (i) when we learn how to use the technology, (ii) when we accumulate a large amount of information, and (iii) when communities of practice create and exchange free information. The coexistence of gradual improvements with discontinuous technological change is a consequence of the asymmetric relationship between complexity and hardware and software. Using a cultural evolution approach, we suggest that sudden changes in the organization of ITs depend on the high costs of maintaining and transmitting reliable information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431527

  5. The Advanced Technology Operations System: ATOS

    NASA Technical Reports Server (NTRS)

    Kaufeler, J.-F.; Laue, H. A.; Poulter, K.; Smith, H.

    1993-01-01

    Mission control systems supporting new space missions face ever-increasing requirements in terms of functionality, performance, reliability and efficiency. Modern data processing technology is providing the means to meet these requirements in new systems under development. During the past few years the European Space Operations Centre (ESOC) of the European Space Agency (ESA) has carried out a number of projects to demonstrate the feasibility of using advanced software technology, in particular knowledge-based systems, to support mission operations. A number of advances must be achieved before these techniques can be moved towards operational use in future missions, namely, integration of the applications into a single system framework and generalization of the applications so that they are mission independent. In order to achieve this goal, ESA initiated the Advanced Technology Operations System (ATOS) program, which will develop the infrastructure to support advanced software technology in mission operations and provide applications modules to initially support: Mission Preparation, Mission Planning, Computer Assisted Operations, and Advanced Training. The first phase of the ATOS program is tasked with designing and prototyping the necessary system infrastructure to support the rest of the program. The major components of the ATOS architecture are presented. This architecture relies on the concept of a Mission Information Base (MIB) as the repository for all information and knowledge which will be used by the advanced application modules in future mission control systems. The MIB is being designed to exploit the latest in database and knowledge representation technology in an open and distributed system. In conclusion, the technological and implementation challenges expected to be encountered, as well as the future plans and time scale of the project, are presented.

  6. Software Engineering Research/Developer Collaborations in 2004 (C104)

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Markosian, Lawrance

    2005-01-01

    In 2004, six collaborations between software engineering technology providers and NASA software development personnel deployed a total of five software engineering technologies (for references, see Section 7.2) on the NASA projects. The main purposes were to benefit the projects, infuse the technologies if beneficial into NASA, and give feedback to the technology providers to improve the technologies. Each collaboration project produced a final report (for references, see Section 7.1). Section 2 of this report summarizes each project, drawing from the final reports and communications with the software developers and technology providers. Section 3 indicates paths to further infusion of the technologies into NASA practice. Section 4 summarizes some technology transfer lessons learned. Section 6 lists the acronyms used in this report.

  7. An analysis of functional shoulder movements during task performance using Dartfish movement analysis software.

    PubMed

    Khadilkar, Leenesh; MacDermid, Joy C; Sinden, Kathryn E; Jenkyn, Thomas R; Birmingham, Trevor B; Athwal, George S

    2014-01-01

    Video-based movement analysis software (Dartfish) has potential for clinical applications for understanding shoulder motion if functional measures can be reliably obtained. The primary purpose of this study was to describe the functional range of motion (ROM) of the shoulder used to perform a subset of functional tasks. A second purpose was to assess the reliability of functional ROM measurements obtained by different raters using Dartfish software. Ten healthy participants, mean age 29 ± 5 years, were videotaped while performing five tasks selected from the Disabilities of the Arm, Shoulder and Hand (DASH). Video cameras and markers were used to obtain video images suitable for analysis in Dartfish software. Three repetitions of each task were performed. Shoulder movements from all three repetitions were analyzed using Dartfish software. The tracking tool of the Dartfish software was used to obtain shoulder joint angles and arcs of motion. Test-retest and inter-rater reliability of the measurements were evaluated using intraclass correlation coefficients (ICC). Maximum (coronal plane) abduction (118° ± 16°) and (sagittal plane) flexion (111° ± 15°) was observed during 'washing one's hair;' maximum extension (-68° ± 9°) was identified during 'washing one's own back.' Minimum shoulder ROM was observed during 'opening a tight jar' (33° ± 13° abduction and 13° ± 19° flexion). Test-retest reliability (ICC = 0.45 to 0.94) suggests high inter-individual task variability, and inter-rater reliability (ICC = 0.68 to 1.00) showed moderate to excellent agreement. KEY FINDINGS INCLUDE: 1) the functional shoulder ROM identified in this study compared well with similar studies; 2) healthy individuals require less than full ROM when performing five common ADL tasks; 3) high participant variability was observed during performance of the five ADL tasks; and 4) Dartfish software provides a clinically relevant tool to analyze shoulder function.
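
    For reference, the intraclass correlation reported above can be computed directly; the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from the standard ANOVA mean squares using NumPy. The angle matrix is invented, not data from the study.

      # ICC(2,1) from two-way ANOVA mean squares (Shrout-Fleiss form).
      import numpy as np

      def icc_2_1(x):
          """x: (n subjects) x (k raters) matrix of measurements."""
          n, k = x.shape
          grand = x.mean()
          ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
          ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
          ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
          ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                       + k * (ms_cols - ms_err) / n)

      # Hypothetical shoulder-flexion angles (degrees): five subjects, two raters.
      angles = np.array([[111., 113.], [98., 95.], [120., 119.],
                         [105., 108.], [90., 92.]])
      print(round(icc_2_1(angles), 2))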

  8. 15 CFR Supplement No. 2 to Part 774 - General Technology and Software Notes

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false General Technology and Software Notes... Software Notes 1. General Technology Note. The export of “technology” that is “required” for the... necessary” information. 2. General Software Note. License Exception TSU (mass market software) (see § 740.13...

  9. Software engineering technology transfer: Understanding the process

    NASA Technical Reports Server (NTRS)

    Zelkowitz, Marvin V.

    1993-01-01

    Technology transfer is of crucial concern to both government and industry today. In this report, the mechanisms developed by NASA to transfer technology are explored and the actual mechanisms used to transfer software development technologies are investigated. Time, cost, and effectiveness of software engineering technology transfer is reported.

  10. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
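
    A minimal sketch of the estimation step described above: count observed transitions between program error states and attach normal-approximation confidence intervals to the estimated probabilities. The state sequence and state names are fabricated for illustration.

      # Estimate Markov transition probabilities from an observed state
      # sequence, with ~95% normal-approximation confidence intervals.
      import math
      from collections import Counter

      observed = ["ok", "ok", "one_version_wrong", "ok", "ok", "ok",
                  "one_version_wrong", "ok", "ok", "ok", "ok", "ok"]

      pairs = Counter(zip(observed, observed[1:]))          # (from, to) counts
      totals = Counter(s for s, _ in pairs.elements())      # outgoing counts

      z = 1.96
      for (s, t), n in sorted(pairs.items()):
          p = n / totals[s]
          half = z * math.sqrt(p * (1 - p) / totals[s])
          print(f"P({s} -> {t}) = {p:.2f} +/- {half:.2f}")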

  11. A Bayesian modification to the Jelinski-Moranda software reliability growth model

    NASA Technical Reports Server (NTRS)

    Littlewood, B.; Sofer, A.

    1983-01-01

    The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian-Jelinski-Moranda (BJM) model is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumption underlying both models, to represent the debugging process more accurately, is discussed.
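
    For context, the sketch below fits the original JM model by maximum likelihood: after i-1 faults are fixed, the failure rate is phi*(N-i+1); phi has a closed-form MLE for fixed N, and the integer fault count N is grid-searched. The inter-failure times are invented. On many datasets the likelihood is nearly flat in N, which is exactly the ML instability the Bayesian reparameterization above addresses.

      # Jelinski-Moranda maximum likelihood fit over exponential
      # inter-failure times with hazard phi*(N - i + 1).
      import math

      def jm_fit(times, max_extra=200):
          """Grid-search the integer fault count N; phi has a closed-form MLE."""
          n = len(times)
          best = None
          for N in range(n, n + max_extra):
              weighted = sum((N - i) * t for i, t in enumerate(times))
              phi = n / weighted
              loglik = sum(math.log(phi * (N - i)) - phi * (N - i) * t
                           for i, t in enumerate(times))
              if best is None or loglik > best[0]:
                  best = (loglik, N, phi)
          return best

      times = [5.0, 7.0, 10.5, 18.0, 26.0, 45.0]   # hours between failures
      loglik, N, phi = jm_fit(times)
      print(f"estimated faults N={N}, per-fault rate phi={phi:.4f}")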

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.

  13. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  14. Study of a unified hardware and software fault-tolerant architecture

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart

    1989-01-01

    A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.

  15. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS AND SANCTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States...

  16. 15 CFR 770.3 - Interpretations related to exports of technology and software to destinations in Country Group D:1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... technology and software to destinations in Country Group D:1. 770.3 Section 770.3 Commerce and Foreign Trade... technology and software to destinations in Country Group D:1. (a) Introduction. This section is intended to provide you additional guidance on how to determine whether your technology or software would be eligible...

  17. 15 CFR 770.3 - Interpretations related to exports of technology and software to destinations in Country Group D:1.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... technology and software to destinations in Country Group D:1. 770.3 Section 770.3 Commerce and Foreign Trade... technology and software to destinations in Country Group D:1. (a) Introduction. This section is intended to provide you additional guidance on how to determine whether your technology or software would be eligible...

  18. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  19. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  20. 15 CFR 770.3 - Interpretations related to exports of technology and software to destinations in Country Group D:1.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... technology and software to destinations in Country Group D:1. 770.3 Section 770.3 Commerce and Foreign Trade... technology and software to destinations in Country Group D:1. (a) Introduction. This section is intended to provide you additional guidance on how to determine whether your technology or software would be eligible...

  1. 15 CFR 770.3 - Interpretations related to exports of technology and software to destinations in Country Group D:1.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... technology and software to destinations in Country Group D:1. 770.3 Section 770.3 Commerce and Foreign Trade... technology and software to destinations in Country Group D:1. (a) Introduction. This section is intended to provide you additional guidance on how to determine whether your technology or software would be eligible...

  2. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  3. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS AND SANCTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States...

  4. 15 CFR 770.3 - Interpretations related to exports of technology and software to destinations in Country Group D:1.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... technology and software to destinations in Country Group D:1. 770.3 Section 770.3 Commerce and Foreign Trade... technology and software to destinations in Country Group D:1. (a) Introduction. This section is intended to provide you additional guidance on how to determine whether your technology or software would be eligible...

  5. Using Flash Technology for Motivation and Assessment

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    2004-01-01

    A visit to most any technology education laboratory or classroom will reveal that computers, software, and multimedia software are rapidly becoming a mainstay in learning about technology and technological literacy. Almost all technology labs have at least several computers dedicated to specialized software or hardware such as Computer-aided…

  6. Thermographic Sensing For On-Line Industrial Control

    NASA Astrophysics Data System (ADS)

    Holmsten, Dag

    1986-10-01

    It is today's emergence of thermoelectrically cooled, highly accurate infrared linescanners and imaging systems that has definitely made on-line Infrared Thermography (IRT) possible. Specifically designed for continuous use, these scanners are equipped with dedicated software capable of monitoring and controlling highly complex thermodynamic situations. This paper outlines some possible implications of using IRT on-line by describing uses of this technology in the steel-making (hot rolling) and automotive (machine-vision) industries. A warning is also expressed that IRT technology not originally designed for automated applications, e.g., high-resolution imaging systems, should not be directly applied to an on-line measurement situation without having its measurement resolution, accuracy and especially its repeatability reliably proven. Some suitable testing procedures are briefly outlined at the end of the paper.

  7. Impacts of Technological Changes in the Cyber Environment on Software/Systems Engineering Workforce Development

    DTIC Science & Technology

    2010-04-01

    for decoupled parallel development (Ref: Barry Boehm) ... Pressman, R.S., Software Engineering: A Practitioner's Approach

  8. The CFHT MegaCam control system: new solutions based on PLCs, WorldFIP fieldbus and Java softwares

    NASA Astrophysics Data System (ADS)

    Rousse, Jean Y.; Boulade, Olivier; Charlot, Xavier; Abbon, P.; Aune, Stephan; Borgeaud, Pierre; Carton, Pierre-Henri; Carty, M.; Da Costa, J.; Deschamps, H.; Desforge, D.; Eppele, Dominique; Gallais, Pascal; Gosset, L.; Granelli, Remy; Gros, Michel; de Kat, Jean; Loiseau, Denis; Ritou, J. L.; Starzynski, Pierre; Vignal, Nicolas; Vigroux, Laurent G.

    2003-03-01

    MegaCam is a wide-field imaging camera built for the prime focus of the 3.6m Canada-France-Hawaii Telescope. This large detector has required new approaches, from the hardware up to the instrument control system software. Safe control of the three sub-systems of the instrument (cryogenics, filters and shutter), measurement of the exposure time with an accuracy of 0.1%, identification of the filters and management of the internal calibration source are the major challenges taken up by the control system. Another challenge is to ensure all these functionalities with the minimum space available on the telescope structure for the electrical hardware and a minimum number of cables, to keep the highest reliability. All these requirements have been met with a control system whose different elements are linked by a WorldFIP fieldbus on optical fiber. Diagnosis and remote user support will be ensured with an Engineering Control System station based on software developed with Internet Java technologies (applets, servlets) and connected to the fieldbus.

  9. Development of AN Open-Source Automatic Deformation Monitoring System for Geodetical and Geotechnical Measurements

    NASA Astrophysics Data System (ADS)

    Engel, P.; Schweimler, B.

    2016-04-01

    The deformation monitoring of structures and buildings is an important task field of modern engineering surveying, ensuring the stability and reliability of supervised objects over a long period. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the Neubrandenburg University of Applied Sciences (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. It is discussed how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.

  10. Point of care use of a personal digital assistant for patient consultation management: experience of an intravenous resource nurse team in a major Canadian teaching hospital.

    PubMed

    Bosma, Laine; Balen, Robert M; Davidson, Erin; Jewesson, Peter J

    2003-01-01

    The development and integration of a personal digital assistant (PDA)-based point-of-care database into an intravenous resource nurse (IVRN) consultation service for the purposes of consultation management and service characterization are described. The IVRN team provides a consultation service 7 days a week in this 1000-bed tertiary adult care teaching hospital. No simple, reliable method for documenting IVRN patient care activity and facilitating IVRN-initiated patient follow-up evaluation was available. Implementation of a PDA database with exportability of data to statistical analysis software was undertaken in July 2001. A Palm IIIXE PDA was purchased and a three-table, 13-field database was developed using HanDBase software. During the 7-month period of data collection, the IVRN team recorded 4868 consultations for 40 patient care areas. Full analysis of service characteristics was conducted using SPSS 10.0 software. Team members adopted the new technology with few problems, and the authors now can efficiently track and analyze the services provided by their IVRN team.

  11. Centralized Alert-Processing and Asset Planning for Sensorwebs

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Chien, Steve A.; Rabideau, Gregg R.; Tang, Benyang

    2010-01-01

    A software program provides a Sensorweb architecture for alert processing, event detection, asset allocation and planning, and visualization. It automatically tasks and re-tasks various types of assets, such as satellites and robotic vehicles, in response to alerts (fire, weather) extracted from various data sources, including low-level Webcam data. JPL has adapted considerable Sensorweb infrastructure that had been previously applied to NASA Earth Science applications. This NASA Earth Science Sensorweb has been in operational use since 2003, and has proven the reliability of the Sensorweb technologies for robust event detection and autonomous response using space and ground assets. Unique features of the software include flexibility across a range of detection and tasking methods, including those that require aggregation of data over spatial and temporal ranges; generality of the response structure to represent and implement a range of response campaigns; and the ability to respond rapidly.

  12. Electro-optic Mach-Zehnder Interferometer based Optical Digital Magnitude Comparator and 1's Complement Calculator

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay; Raghuwanshi, Sanjeev Kumar

    2016-06-01

    The optical switching activity is one of the most essential phenomena in the optical domain. Electro-optic switching phenomena can be used to generate effective combinational and sequential logic circuits. Performing digital computation in the optical domain carries over some considerable advantages of optical communication technology, e.g., immunity to electro-magnetic interference, compact size, signal security, parallel computing and larger bandwidth. The paper describes an efficient technique to implement a single-bit magnitude comparator and a 1's complement calculator using the electro-optic effect. The proposed techniques are simulated in MATLAB, and their suitability is verified using the highly reliable Opti-BPM software. It is interesting to analyze the circuits in order to specify optimized device parameters with respect to performance-affecting quantities, e.g., crosstalk, extinction ratio, and signal losses through the curved and straight waveguide sections.
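
    The optical circuits implement standard digital functions; as a point of reference only, here are the same single-bit magnitude comparator and 1's complement in plain Python, with bit values 0/1 standing in for the optical signal levels.

      # Reference logic for the two functions realized optically above.
      def ones_complement(bits):
          """1's complement: invert every bit."""
          return [b ^ 1 for b in bits]

      def compare_1bit(a, b):
          """Single-bit magnitude comparator: returns (a>b, a==b, a<b)."""
          return (a & (b ^ 1), (a ^ b) ^ 1, (a ^ 1) & b)

      print(ones_complement([1, 0, 1, 1]))  # [0, 1, 0, 0]
      print(compare_1bit(1, 0))             # (1, 0, 0)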

  13. An investigation of fake fingerprint detection approaches

    NASA Astrophysics Data System (ADS)

    Ahmad, Asraful Syifaa'; Hassan, Rohayanti; Othman, Razib M.

    2017-10-01

    Fingerprint recognition, the most reliable biometrics technology, is widely used for security due to its permanence and uniqueness. However, it is also vulnerable to certain types of attack, including the presentation of fake fingerprints to the sensor, which requires the development of new and efficient protection measures. The aim here is to identify the most recent literature related to fake fingerprint recognition, focusing only on software-based approaches. A systematic review is performed by analyzing 146 primary studies from the gross collection of 34 research papers to determine the taxonomy, approaches, online public databases, and limitations of fake fingerprint detection. Fourteen software-based approaches are briefly described, four limitations of fake fingerprint images are revealed, and two known fake fingerprint databases are briefly addressed in this review. This work thus provides an overview of the current understanding of fake fingerprint recognition, besides identifying future research possibilities.

  14. Optimally analyzing and implementing of bolt fittings in steel structure based on ANSYS

    NASA Astrophysics Data System (ADS)

    Han, Na; Song, Shuangyang; Cui, Yan; Wu, Yongchun

    2018-03-01

    ANSYS simulation software has become an outstanding member of the Computer-Aided Engineering (CAE) family for its excellent performance; it is committed to innovation in engineering simulation to help users shorten the design process. First, a typical procedure to implement CAE was designed, and a framework for structural numerical analysis with ANSYS technology was proposed. Then, an optimal analysis and implementation of bolt fittings in a beam-column joint of a steel structure was carried out in ANSYS, displaying the cloud charts of the XY shear stress, the YZ shear stress and the Y component of stress. Finally, the ANSYS simulation results were compared with the results measured in the experiment. The results of the ANSYS simulation and analysis are reliable, efficient and optimal. In the above process, a numerical simulation and analysis model of structural performance was explored for the practice of engineering enterprises.

  15. Demand Response Resource Quantification with Detailed Building Energy Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Elaine; Horsey, Henry; Merket, Noel

    Demand response is a broad suite of technologies that enables changes in electrical load operations in support of power system reliability and efficiency. Although demand response is not a new concept, there is new appetite for comprehensively evaluating its technical potential in the context of renewable energy integration. The complexity of demand response makes this task difficult -- we present new methods for capturing the heterogeneity of potential responses from buildings, their time-varying nature, and metrics such as thermal comfort that help quantify likely acceptability of specific demand response actions. Computed with an automated software framework, the methods are scalable.

  16. In silico toxicology for the pharmaceutical sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valerio, Luis G., E-mail: Luis.Valerio@fda.hhs.go

    2009-12-15

    The applied use of in silico technologies (a.k.a. computational toxicology, in silico toxicology, computer-assisted tox, e-tox, i-drug discovery, predictive ADME, etc.) for predicting preclinical toxicological endpoints, clinical adverse effects, and metabolism of pharmaceutical substances has become of high interest to the scientific community and the public. The increased accessibility of these technologies for scientists and recent regulations permitting their use for chemical risk assessment support this notion. The scientific community is interested in the appropriate use of such technologies as a tool to enhance product development and safety of pharmaceuticals and other xenobiotics, while ensuring the reliability and accuracy of in silico approaches for the toxicological and pharmacological sciences. For pharmaceutical substances, this means active and impurity chemicals in the drug product may be screened using specialized software and databases designed to cover these substances through a chemical structure-based screening process and algorithm specific to a given software program. A major goal for use of these software programs is to enable industry scientists not only to enhance the discovery process but also to ensure the judicious use of in silico tools to support risk assessments of drug-induced toxicities and in safety evaluations. However, a great amount of applied research is still needed, and there are many limitations with these approaches which are described in this review. Currently, there is a wide range of endpoints available from predictive quantitative structure-activity relationship models driven by many different computational software programs and data sources, and this is only expected to grow. For example, there are models based on non-proprietary and/or proprietary information specific to assessing potential rodent carcinogenicity, in silico screens for ICH genetic toxicity assays, reproductive and developmental toxicity, theoretical prediction of human drug metabolism, mechanisms of action for pharmaceuticals, and newer models for predicting human adverse effects. How accurate these approaches are is both a statistical issue and a challenge in toxicology. In this review, fundamental concepts and the current capabilities and limitations of this technology will be critically addressed.

  17. Proceedings of the Second Software Architecture Technology User Network (SATURN) Workshop

    DTIC Science & Technology

    2006-08-01

    Proceedings of the Second Software Architecture Technology User Network (SATURN) Workshop. Robert L. Nord, August 2006. Technical Report CMU/SEI-2006-TR-010, ESC-TR-2006-010. Software Architecture Technology Initiative; unlimited distribution subject to the copyright. Contents include lists of participants and presentations, the SATURN opening presentation on future directions of the Software Architecture Technology Initiative, and a keynote.

  18. Software Component Technologies and Space Applications

    NASA Technical Reports Server (NTRS)

    Batory, Don

    1995-01-01

    In the near future, software systems will be more reconfigurable than hardware. This will be possible through the advent of software component technologies which have been prototyped in universities and research labs. In this paper, we outline the foundations for those technologies and suggest how they might impact software for space applications.

  19. 15 CFR 742.13 - Communications intercepting devices; software and technology for communications intercepting...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...; software and technology for communications intercepting devices. 742.13 Section 742.13 Commerce and Foreign... Communications intercepting devices; software and technology for communications intercepting devices. (a) License... wire, oral, or electronic communications (ECCNs 5A001.i and 5A980); and for related “software...

  20. Software Reuse Within the Earth Science Community

    NASA Technical Reports Server (NTRS)

    Marshall, James J.; Olding, Steve; Wolfe, Robert E.; Delnore, Victor E.

    2006-01-01

    Scientific missions in the Earth sciences frequently require cost-effective, highly reliable, and easy-to-use software, which can be a challenge for software developers to provide. The NASA Earth Science Enterprise (ESE) spends a significant amount of resources developing software components and other software development artifacts that may also be of value if reused in other projects requiring similar functionality. In general, software reuse is often defined as utilizing existing software artifacts. Software reuse can improve productivity and quality while decreasing the cost of software development, as documented by case studies in the literature. Since large software systems are often the results of the integration of many smaller and sometimes reusable components, ensuring reusability of such software components becomes a necessity. Indeed, designing software components with reusability as a requirement can increase the software reuse potential within a community such as the NASA ESE community. The NASA Earth Science Data Systems (ESDS) Software Reuse Working Group is chartered to oversee the development of a process that will maximize the reuse potential of existing software components while recommending strategies for maximizing the reusability potential of yet-to-be-designed components. As part of this work, two surveys of the Earth science community were conducted. The first was performed in 2004 and distributed among government employees and contractors. A follow-up survey was performed in 2005 and distributed among a wider community, to include members of industry and academia. The surveys were designed to collect information on subjects such as the current software reuse practices of Earth science software developers, why they choose to reuse software, and what perceived barriers prevent them from reusing software. In this paper, we compare the results of these surveys, summarize the observed trends, and discuss the findings. The results are very similar, with the second, larger survey confirming the basic results of the first, smaller survey. The results suggest that reuse of ESE software can drive down the cost and time of system development, increase flexibility and responsiveness of these systems to new technologies and requirements, and increase effective and accountable community participation.

  1. An AFDX Network for Spacecraft Data Handling

    NASA Astrophysics Data System (ADS)

    Deredempt, Marie-Helene; Kollias, Vangelis; Sun, Zhili; Canamares, Ernest; Ricco, Philippe

    2014-08-01

    In the aeronautical domain, the ARINC-664 Part 7 specification (AFDX) [4] provides the enabling technology for interfacing equipment in Integrated Modular Avionics (IMA) architectures. The complementary part of AFDX for complete interoperability -- the Time and Space Partitioning (ARINC 653) concepts [1] -- was already studied as part of the space-domain ESA roadmap (i.e., the IMA4Space project). A standardized IMA-based architecture is already considered in the aeronautical domain to be more flexible, reliable and secure. Integration and validation become simple, using a common set of tools and databases, and could be done in parts on different means with the same definition (hardware and software test benches, flight control or alarm test benches, simulator and flight test installation). In some areas, requirements in terms of data processing are quite similar in the space domain, and the concept could be applied to take benefit of the technology itself and of the panel of hardware and software solutions and tools available on the market. The MISSION project (Methodology and assessment for the applicability of ARINC-664 (AFDX) in Satellite/Spacecraft on-board communicatION networks), an FP7 initiative for bringing terrestrial SME research into the space domain, started to evaluate the applicability of the standard in the space domain.

  2. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  3. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  4. On Quality and Measures in Software Engineering

    ERIC Educational Resources Information Center

    Bucur, Ion I.

    2006-01-01

    Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…

  5. A general software reliability process simulation technique

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
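
    In the spirit of the simulator described above, a heavily simplified sketch: inject a fixed number of anomalies, then simulate weekly detection and imperfect repair to produce a status timeline. All rates are hypothetical stand-ins for the project parameters the real program takes as input.

      # Toy reliability-process simulation: weekly detection of open faults
      # followed by repair with an imperfect fix probability.
      import random

      def simulate(injected=50, detect_p=0.05, fix_p=0.9, weeks=60, seed=1):
          random.seed(seed)
          open_faults, timeline = injected, []
          for week in range(weeks):
              found = sum(random.random() < detect_p for _ in range(open_faults))
              fixed = sum(random.random() < fix_p for _ in range(found))
              open_faults -= fixed
              timeline.append((week, found, fixed, open_faults))
          return timeline

      for week, found, fixed, remaining in simulate()[:5]:
          print(f"week {week}: found={found} fixed={fixed} remaining={remaining}")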

  6. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data that can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation, and verification are analyzed. Software reliability models have been developed for estimating program completion levels and for providing a basis for system acceptance. The DAVE system, which performs flow analysis and error detection, has been converted from the University of Colorado CDC 6400/6600 computers to the IBM 360/370 computer system for use with the DOMONIC system.

  7. Periorbital Biometric Measurements using ImageJ Software: Standardisation of Technique and Assessment Of Intra- and Interobserver Variability

    PubMed Central

    Rajyalakshmi, R.; Prakash, Winston D.; Ali, Mohammad Javed; Naik, Milind N.

    2017-01-01

    Purpose: To assess the reliability and repeatability of periorbital biometric measurements using ImageJ software and to assess whether the horizontal visible iris diameter (HVID) serves as a reliable scale for facial measurements. Methods: This was a prospective, single-blind, comparative study. Two clinicians performed 12 periorbital measurements on 100 standardised face photographs. Each individual's HVID was determined by Orbscan IIz and used as a scale for measurements using ImageJ software. All measurements were repeated using the 'average' HVID of the study population as the measurement scale. The intraclass correlation coefficient (ICC) and the Pearson product-moment coefficient were used to analyse the data. Results: The ICC for intra- and interobserver variability ranged from 0.79 to 0.99 and from 0.86 to 0.99, respectively; test-retest reliability ranged from 0.66 to 1.0 and from 0.77 to 0.98, respectively. When the average HVID of the study population was used as the scale, the ICC ranged from 0.83 to 0.99, the test-retest reliability ranged from 0.83 to 0.96, and the measurements correlated well with recordings made with individual Orbscan HVID measurements. Conclusion: Periorbital biometric measurements using ImageJ software are reproducible and repeatable. The average HVID of the population as measured by Orbscan is a reliable scale for facial measurements. PMID:29403183
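
    A minimal sketch of the scaling step the study relies on: the Orbscan-measured HVID calibrates pixel distances in the photograph to millimetres. All numbers below are hypothetical, not the study's data.

        # Minimal sketch (hypothetical numbers): using the horizontal visible iris
        # diameter (HVID) as the scale converting image pixels to millimetres.
        def mm_per_pixel(hvid_mm: float, hvid_px: float) -> float:
            return hvid_mm / hvid_px

        hvid_mm = 11.8   # individual's HVID from Orbscan (hypothetical)
        hvid_px = 236.0  # same iris measured in pixels on the photograph
        scale = mm_per_pixel(hvid_mm, hvid_px)  # 0.05 mm per pixel

        palpebral_fissure_px = 196.0  # hypothetical ImageJ pixel measurement
        print(f"palpebral fissure width: {palpebral_fissure_px * scale:.1f} mm")  # 9.8 mm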

  8. Technology and the future of medical equipment maintenance.

    PubMed

    Wear, J O

    1999-05-01

    Maintenance of medical equipment has been changing rapidly in the past few years. It is changing more rapidly in developed countries, but changes are also occurring in developing countries. Some of these changes may permit improved maintenance of higher-technology equipment in developing countries, since they do not require onsite expertise. Technology has had an increasing impact on the development of medical equipment with the increased use of microprocessors and computers. With miniaturization from space technology and electronic chip design, powerful microprocessors and computers have been built into medical equipment. Improvements in manufacturing technology have increased the quality of parts and therefore of the medical equipment. This has increased mean time between failures and reduced maintenance needs, making equipment more reliable in remote areas and developing countries. Built-in computers and advances in software design have brought about self-diagnostics in medical equipment, giving technicians a strong tool for maintenance. One problem in this area is getting access to the self-diagnostics; some manufacturers will not readily provide this access to the owner of the equipment. Advances in telecommunications, in conjunction with self-diagnostics, make remote diagnosis and repair available. Since components can no longer be repaired, a remote repair technician can instruct an operator or an on-site repairman on board replacement. In the case of software problems, the remote repair technician may perform the repairs over the telephone. It is possible for equipment to be monitored remotely by modem without interfering with its operation. These changes in technology require the training of biomedical engineering technicians (BMETs) to change as well: they must have training in computers and telecommunications. Some of this training can itself be done with telecommunications and computers.

  9. Agile Methods for Open Source Safety-Critical Software

    PubMed Central

    Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-01-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focusing on evolving code and adding process elements only as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion. PMID:21799545

  10. Agile Methods for Open Source Safety-Critical Software.

    PubMed

    Gary, Kevin; Enquobahrie, Andinet; Ibanez, Luis; Cheng, Patrick; Yaniv, Ziv; Cleary, Kevin; Kokoori, Shylaja; Muffih, Benjamin; Heidenreich, John

    2011-08-01

    The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focusing on evolving code and adding process elements only as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion.

  11. Information Technology: A Survey from the Perspective of Higher Education.

    ERIC Educational Resources Information Center

    Van Houweling, Douglas E.

    1986-01-01

    Survey of the history and current development of information technology covers hardware (economies of scale, communications technology, magnetic and optical forms of storage), and the evolution of systems software ("tool" software, applications software, and nonprocedural languages). The effect of new computer technologies on human…

  12. 15 CFR 740.17 - Encryption commodities, software and technology (ENC).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... technology (ENC). 740.17 Section 740.17 Commerce and Foreign Trade Regulations Relating to Commerce and... REGULATIONS LICENSE EXCEPTIONS § 740.17 Encryption commodities, software and technology (ENC). License... therefor classified under ECCN 5B002, and equivalent or related software and technology classified under...

  13. 15 CFR 740.17 - Encryption commodities, software and technology (ENC).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... technology (ENC). 740.17 Section 740.17 Commerce and Foreign Trade Regulations Relating to Commerce and... REGULATIONS LICENSE EXCEPTIONS § 740.17 Encryption commodities, software and technology (ENC). License... therefor classified under ECCN 5B002, and equivalent or related software and technology classified under...

  14. 15 CFR 740.17 - Encryption commodities, software and technology (ENC).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... technology (ENC). 740.17 Section 740.17 Commerce and Foreign Trade Regulations Relating to Commerce and... REGULATIONS LICENSE EXCEPTIONS § 740.17 Encryption commodities, software and technology (ENC). License... therefor classified under ECCN 5B002, and equivalent or related software and technology classified under...

  15. 15 CFR 742.13 - Communications intercepting devices; software and technology for communications intercepting...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...; software and technology for communications intercepting devices. 742.13 Section 742.13 Commerce and Foreign... Communications intercepting devices; software and technology for communications intercepting devices. (a) License... wire, oral, or electronic communications (ECCNs 5A001.f.1 and 5A980); and for related “software...

  16. Government Technology Acquisition Policy: The Case of Proprietary versus Open Source Software

    ERIC Educational Resources Information Center

    Hemphill, Thomas A.

    2005-01-01

    This article begins by explaining the concepts of proprietary and open source software technology, which are now competing in the marketplace. A review of recent individual and cooperative technology development and public policy advocacy efforts, by both proponents of open source software and advocates of proprietary software, subsequently…

  17. Reliability of new software in measuring cervical multifidus diameters and shoulder muscle strength in a synchronized way; an ultrasonographic study

    PubMed Central

    Rahnama, Leila; Rezasoltani, Asghar; Khalkhali-Zavieh, Minoo; Rahnama, Behnam; Noori-Kochi, Farhang

    2015-01-01

    OBJECTIVES: This study was conducted to evaluate the inter-session reliability of new software for measuring the diameters of the cervical multifidus muscle (CMM), both at rest and during isometric contractions of the shoulder abductors, in subjects with neck pain and in healthy individuals. METHOD: In the present study, the reliability of measuring the diameters of the CMM with the Sonosynch software was evaluated by using 24 participants, including 12 subjects with chronic neck pain and 12 healthy individuals. The anterior-posterior diameter (APD) and the lateral diameter (LD) of the CMM were measured in a resting state and then again during isometric contraction of the shoulder abductors. Measurements were taken on separate occasions 3 to 7 days apart in order to determine inter-session reliability. The intraclass correlation coefficient (ICC) was used to evaluate relative reliability, while the standard error of measurement (SEM) and the smallest detectable difference (SDD) were used to evaluate absolute reliability. RESULTS: The Sonosynch software has been shown to be highly reliable in measuring the diameters of the CMM both in healthy subjects and in those with neck pain. The 95% CIs of the ICC for APD ranged from 0.84 to 0.94 in subjects with neck pain and from 0.86 to 0.94 in healthy subjects. For LD, the 95% CI of the ICC ranged from 0.64 to 0.95 in subjects with neck pain and from 0.82 to 0.92 in healthy subjects. CONCLUSIONS: Ultrasonographic measurement of the diameters of the CMM using Sonosynch has proved to be reliable, especially for APD, in healthy subjects as well as subjects with neck pain. PMID:26443975
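
    The absolute reliability indices named in the abstract follow standard formulas: SEM = SD * sqrt(1 - ICC) and SDD = 1.96 * sqrt(2) * SEM. A minimal sketch with hypothetical values, since the abstract does not report the underlying standard deviations:

        # Minimal sketch of the standard SEM/SDD formulas (values hypothetical).
        import math

        def sem(sd: float, icc: float) -> float:
            return sd * math.sqrt(1 - icc)

        def sdd(sem_value: float) -> float:
            return 1.96 * math.sqrt(2) * sem_value

        sd_apd, icc_apd = 1.2, 0.90          # hypothetical SD (mm) and ICC for APD
        sem_apd = sem(sd_apd, icc_apd)       # ~0.38 mm
        print(f"SEM = {sem_apd:.2f} mm, SDD = {sdd(sem_apd):.2f} mm")  # SDD ~1.05 mm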

  18. The Fraunhofer MAVO FASPAS for smart system design

    NASA Astrophysics Data System (ADS)

    Melz, Tobias; Matthias, Michael; Drossel, Welf-Guntram; Sporn, Dieter; Schoenecker, Andreas; Poigne, Axel

    2005-05-01

    The Fraunhofer Gesellschaft is the largest organization for applied research in Europe, with a staff of some 12,700, predominantly qualified scientists and engineers, and an annual research budget of over one billion euros. One of its current internal market-oriented strategic preliminary research (MaVo) projects is FASPAS (Function Consolidated Adaptive Structures Combining Piezo and Software Technologies for Autonomous Systems), which aims to promote adaptive structure technology for commercial exploitation within the current main research fields of the participating Fraunhofer Institutes (FhIs), namely automotive and machine tools engineering. Under the project management of the Fraunhofer Institute for Structural Durability and System Reliability LBF, the six Fraunhofer Institutes LBF, IWU, IKTS, ISC, AiS, and IIS bring together their competences, ranging from materials science to system reliability, in order to clarify unanswered questions. The predominant goal is to develop and validate methods and tools to establish a closed, modular development chain for the design and realization of such active structures, useful in both its breadth and depth, i.e., for specific R&D achievements such as actuator development (depth) as well as complete system design and realization (breadth). FASPAS focuses on the development of systems and on the following scientific topics: 1) design and manufacturing technology for piezo components as integrable actuator/sensor semi-finished modules, 2) development and transducer-module integration of miniaturized electronics for charge-generating sensor systems, 3) development of methods to analyze the system reliability of active structures, 4) development of autonomous software structures for flexible, low-cost electronics hardware for bulk production, and 5) construction and validation of the complete, cost-effective development chain of function-consolidated structures through application-oriented demonstration structures. The research work is oriented towards active vibration control for existing components on the basis of highly integrated, both established and highly innovative, piezoelectric actuator and sensor systems in a compact, cost-effective, and robust design combined with advanced controllers. Within the presentation, the project work is shown using the example of one demonstration structure, a robust interface intended for integration within an automotive spring strut system. The interface is designed as a modular, scalable subsystem and as such can be used for similar scenarios in different technology areas, e.g., for active mounting of vibration-inducing aggregates. The interface design allows for controlling uniaxial vibrations (z-direction) as well as tilting (normal to the uniaxial effect) and wobbling (rotating around the z-axis).

  19. Availability Improvement of Layer 2 Seamless Networks Using OpenFlow

    PubMed Central

    Molina, Elias; Jacob, Eduardo; Matias, Jon; Moreira, Naiara; Astarloa, Armando

    2015-01-01

    The network robustness and reliability are strongly influenced by the implementation of redundancy and its ability of reacting to changes. In situations where packet loss or maximum latency requirements are critical, replication of resources and information may become the optimal technique. To this end, the IEC 62439-3 Parallel Redundancy Protocol (PRP) provides seamless recovery in layer 2 networks by delegating the redundancy management to the end-nodes. In this paper, we present a combination of the Software-Defined Networking (SDN) approach and PRP topologies to establish a higher level of redundancy and thereby, through several active paths provisioned via the OpenFlow protocol, the global reliability is increased, as well as data flows are managed efficiently. Hence, the experiments with multiple failure scenarios, which have been run over the Mininet network emulator, show the improvement in the availability and responsiveness over other traditional technologies based on a single active path. PMID:25759861
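
    A minimal sketch of why several active paths raise availability, assuming independent path failures (an idealization; the paper's results come from Mininet emulation, not from this formula, and the per-path availabilities below are illustrative):

        # Minimal sketch: availability of N parallel active paths, independence assumed.
        def parallel_availability(path_availabilities):
            unavail = 1.0
            for a in path_availabilities:
                unavail *= (1.0 - a)
            return 1.0 - unavail

        single = 0.999
        print(parallel_availability([single]))          # 0.999, one active path
        print(parallel_availability([single, single]))  # 0.999999, PRP-style duplication
        print(parallel_availability([single] * 3))      # three SDN-provisioned paths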

  20. Availability improvement of layer 2 seamless networks using OpenFlow.

    PubMed

    Molina, Elias; Jacob, Eduardo; Matias, Jon; Moreira, Naiara; Astarloa, Armando

    2015-01-01

    The network robustness and reliability are strongly influenced by the implementation of redundancy and its ability of reacting to changes. In situations where packet loss or maximum latency requirements are critical, replication of resources and information may become the optimal technique. To this end, the IEC 62439-3 Parallel Redundancy Protocol (PRP) provides seamless recovery in layer 2 networks by delegating the redundancy management to the end-nodes. In this paper, we present a combination of the Software-Defined Networking (SDN) approach and PRP topologies to establish a higher level of redundancy and thereby, through several active paths provisioned via the OpenFlow protocol, the global reliability is increased, as well as data flows are managed efficiently. Hence, the experiments with multiple failure scenarios, which have been run over the Mininet network emulator, show the improvement in the availability and responsiveness over other traditional technologies based on a single active path.

  1. A highly reliable, high performance open avionics architecture for real time Nap-of-the-Earth operations

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Elks, Carl

    1995-01-01

    An Army Fault Tolerant Architecture (AFTA) has been developed to meet real-time fault tolerant processing requirements of future Army applications. AFTA is the enabling technology that will allow the Army to configure existing processors and other hardware to provide high throughput and ultrahigh reliability necessary for TF/TA/NOE flight control and other advanced Army applications. A comprehensive conceptual study of AFTA has been completed that addresses a wide range of issues including requirements, architecture, hardware, software, testability, producibility, analytical models, validation and verification, common mode faults, VHDL, and a fault tolerant data bus. A Brassboard AFTA for demonstration and validation has been fabricated, and two operating systems and a flight-critical Army application have been ported to it. Detailed performance measurements have been made of fault tolerance and operating system overheads while AFTA was executing the flight application in the presence of faults.

  2. NDE research efforts at the FAA Center for Aviation Systems Reliability

    NASA Technical Reports Server (NTRS)

    Thompson, Donald O.; Brasche, Lisa J. H.

    1992-01-01

    The Federal Aviation Administration-Center for Aviation Systems Reliability (FAA-CASR), a part of the Institute for Physical Research and Technology at Iowa State University, began operation in the Fall of 1990 with funding from the FAA. The mission of the FAA-CASR is to develop quantitative nondestructive evaluation (NDE) methods for aircraft structures and materials including prototype instrumentation, software, techniques, and procedures and to develop and maintain comprehensive education and training programs in aviation specific inspection procedures and practices. To accomplish this mission, FAA-CASR brings together resources from universities, government, and industry to develop a comprehensive approach to problems specific to the aviation industry. The problem areas are targeted by the FAA, aviation manufacturers, the airline industry and other members of the aviation business community. This consortium approach ensures that the focus of the efforts is on relevant problems and also facilitates effective transfer of the results to industry.

  3. Proceedings of the 1999 Oil and Gas Conference: Technology Options for Producer Survival

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None available

    2000-04-12

    The 1999 Oil & Gas Conference was cosponsored by the U.S. Department of Energy (DOE), Office of Fossil Energy, Federal Energy Technology Center (FETC) and National Petroleum Technology Office (NPTO) on June 28 to 30 in Dallas, Texas. The Oil & Gas Conference theme, Technology Options for Producer Survival, reflects the need for development and implementation of new technologies to ensure an affordable, reliable energy future. The conference was attended by nearly 250 representatives from industry, academia, national laboratories, DOE, and other Government agencies. Three preconference workshops (Downhole Separation Technologies: Is it Applicable for Your Operations, Exploring and developing Naturallymore » Fractured Low-Permeability Gas Reservoirs from the Rocky Mountains to the Austin Chalk, and Software Program Applications) were held. The conference agenda included an opening plenary session, three platform sessions (Sessions 2 and 3 were split into 2 concurrent topics), and a poster presentation reception. The platform session topics were Converting Your Resources Into Reserves (Sessions 1 and 2A), Clarifying Your Subsurface Vision (Session 2B), and High Performance, Cost Effective Drilling, Completion, Stimulation Technologies (Session 3B). In total, there were 5 opening speakers, 30 presenters, and 16 poster presentations.« less

  4. Exploiting IoT Technologies and Open Source Components for Smart Seismic Network Instrumentation

    NASA Astrophysics Data System (ADS)

    Germenis, N. G.; Koulamas, C. A.; Foundas, P. N.

    2017-12-01

    The data collection infrastructure of any seismic network poses a number of requirements and trade-offs related to accuracy, reliability, power autonomy, and installation and operational costs. Given the right hardware design at the edge of this infrastructure, the embedded software running inside the instruments is the heart of the pre-processing and communication services and of their integration with the central storage and processing facilities of the seismic network. This work demonstrates the feasibility and benefits of exploiting software components from heterogeneous sources in order to realize a smart seismic data logger, achieving higher reliability, faster integration, and lower development and testing costs for critical functionality that is in turn responsible for the cost- and power-efficient operation of the device. The instrument's software builds on top of widely used open source components around the Linux kernel with real-time extensions, the core Debian Linux distribution, the earthworm and seiscomp tooling frameworks, as well as components from the Internet of Things (IoT) world, such as the CoAP and MQTT protocols for the signaling planes, besides the widely used de-facto standards of the application domain at the data plane, such as the SeedLink protocol. By using an innovative integration of features based on lower level GPL components of the seiscomp suite with higher level processing earthworm components, coupled with IoT protocol extensions to the latter, the instrument can implement smart functionality such as network controlled, event triggered data transmission in parallel with edge archiving and on demand, short term historical data retrieval.
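
    A minimal sketch of event-triggered transmission over MQTT, one of the IoT signaling protocols the instrument uses. The broker address, topic layout, and STA/LTA trigger threshold are hypothetical, and the paho-mqtt 1.x client API is assumed:

        # Minimal sketch (hypothetical broker, topic, and threshold): publish a
        # detection message only when a trigger fires, keeping idle traffic low.
        # Requires the paho-mqtt package (1.x client API assumed).
        import json
        import time
        import paho.mqtt.client as mqtt

        BROKER = "broker.example.org"      # hypothetical central facility
        TOPIC = "seismic/station42/event"  # hypothetical topic layout

        client = mqtt.Client()
        client.connect(BROKER, 1883, keepalive=60)

        def on_sample(station: str, sta_lta_ratio: float) -> None:
            """Publish only when the STA/LTA trigger fires."""
            if sta_lta_ratio > 4.0:  # illustrative trigger threshold
                payload = json.dumps({"station": station, "ratio": sta_lta_ratio,
                                      "t": time.time()})
                client.publish(TOPIC, payload, qos=1)

        on_sample("station42", 5.2)
        client.disconnect()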

  5. School Nutrition Directors' Perceptions of Technology Use in School Nutrition Programs

    ERIC Educational Resources Information Center

    Pratt, Peggy; Bednar, Carolyn; Kwon, Junehee

    2012-01-01

    Purpose/Objectives: This study investigated the types of technology/software currently used by Southwest Region school nutrition directors (SNDs) and assessed their perceptions of barriers to purchasing new technology/software. In addition, the importance of future technology/software acquisitions in meeting school nutrition program (SNP) goals…

  6. 31 CFR 545.205 - Prohibited importation of goods, software, technology, or services.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., software, technology, or services. 545.205 Section 545.205 Money and Finance: Treasury Regulations Relating..., technology, or services. Except as otherwise authorized, and notwithstanding any contract entered into or any..., software, technology, or services owned or controlled by the Taliban or persons whose property or interests...

  7. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are helping to make such an ambitious project, including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
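
    A minimal sketch of the idea behind automated differentiation: forward-mode dual numbers propagate derivatives through ordinary arithmetic, so a sensitivity code falls out of the analysis code itself. The "drag" function below is an illustrative stand-in, not a flow solver:

        # Minimal sketch: forward-mode automated differentiation with dual numbers.
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)
            __radd__ = __add__
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)
            __rmul__ = __mul__

        def drag(thickness):
            # toy surrogate: quadratic dependence of drag on one shape parameter
            return 0.02 + 0.5 * thickness * thickness

        t = Dual(0.12, 1.0)   # seed the derivative of the design variable
        d = drag(t)
        print(d.val, d.der)   # drag value and d(drag)/d(thickness) = 0.12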

  8. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging.

    PubMed

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M

    2008-01-01

    To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.

  9. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. Intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages. PMID:17700582

  10. The cost of software fault tolerance

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1982-01-01

    The proposed use of software fault tolerance techniques as a means of reducing software costs in avionics and as a means of addressing the issue of system unreliability due to faults in software is examined. A model is developed to provide a view of the relationships among cost, redundancy, and reliability which suggests strategies for software development and maintenance which are not conventional.
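
    One classic relationship such a cost/redundancy/reliability model must capture: with independent failures (a strong assumption for software, which is part of what the paper examines), 2-of-3 majority voting has reliability R = 3r^2 - 2r^3. A minimal sketch:

        # Minimal sketch: reliability of 2-of-3 majority voting versus one version,
        # assuming independent failures (an idealization for software redundancy).
        def tmr_reliability(r: float) -> float:
            return 3 * r**2 - 2 * r**3

        for r in (0.90, 0.99, 0.999):
            print(f"single version: {r:.3f}  ->  2-of-3 vote: {tmr_reliability(r):.6f}")
        # Triplication roughly triples development cost, the trade-off the
        # paper's cost/redundancy/reliability model explores.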

  11. Validation of next generation sequencing technologies in comparison to current diagnostic gold standards for BRAF, EGFR and KRAS mutational analysis.

    PubMed

    McCourt, Clare M; McArt, Darragh G; Mills, Ken; Catherwood, Mark A; Maxwell, Perry; Waugh, David J; Hamilton, Peter; O'Sullivan, Joe M; Salto-Tellez, Manuel

    2013-01-01

    Next Generation Sequencing (NGS) has the potential to become an important tool in clinical diagnosis and therapeutic decision-making in oncology owing to its enhanced sensitivity in DNA mutation detection, fast turnaround of samples in comparison to current gold standard methods, and the potential to sequence a large number of cancer-driving genes at one time. We aim to test the diagnostic accuracy of current NGS technology in the analysis of mutations that represent the current standard of care, and its reliability in generating concomitant information on other key genes in human oncogenesis. Thirteen clinical samples (8 lung adenocarcinomas, 3 colon carcinomas and 2 malignant melanomas), already genotyped for EGFR, KRAS and BRAF mutations by current standard-of-care methods (Sanger Sequencing and q-PCR), were analysed for detection of mutations in the same three genes using two NGS platforms, and in an additional 43 genes with one of these platforms. The results were analysed using closed platform-specific proprietary bioinformatics software as well as open third party applications. Our results indicate that the existing format of the NGS technology performed well in detecting the clinically relevant mutations stated above but may not be reliable for a broader unsupervised analysis of the wider genome in its current design. Our study represents a diagnostically led validation of the major strengths and weaknesses of this technology before consideration for diagnostic use.

  12. A Compatible Hardware/Software Reliability Prediction Model.

    DTIC Science & Technology

    1981-07-22

    machines. In particular, he was interested in the following problem: assume that one has a collection of connected elements computing and transmitting...software reliability prediction model is desirable, the findings about the Weibull distribution are intriguing. After collecting failure data from several...capacitor, some of the added charge carriers are collected by the capacitor. If the added charge is sufficiently large, the information stored is changed

  13. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  14. The Use of Computer Software to Teach High Technology Skills to Vocational Students.

    ERIC Educational Resources Information Center

    Farmer, Edgar I.

    A study examined the type of computer software that is best suited to teach high technology skills to vocational students. During the study, 50 manufacturers of computer software and hardware were sent questionnaires designed to gather data concerning their recommendations in regard to: software to teach high technology skills to vocational…

  15. 15 CFR 734.2 - Important EAR terms and principles.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...

  16. 15 CFR 734.2 - Important EAR terms and principles.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...

  17. 15 CFR 734.2 - Important EAR terms and principles.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...

  18. 15 CFR 734.2 - Important EAR terms and principles.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...

  19. 15 CFR 734.2 - Important EAR terms and principles.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...

  20. 31 CFR 545.505 - Importation of goods, software, or technology exported from the territory of Afghanistan...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... applicant submits proof satisfactory to the U.S. Customs Service that the goods, software, or technology... satisfactory to the U.S. Customs Service of the location of goods, software, or technology outside the... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Importation of goods, software, or...

  1. A Unified Approach to Model-Based Planning and Execution

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a single representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem-solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of the structure of the architecture, mainly in the area of reactivity and the interaction between reactive and deliberative decision making. We conclude with related work and current status.

  2. Space Environments and Effects Program (SEE)

    NASA Technical Reports Server (NTRS)

    Yhisreal-Rivas, David M.

    2013-01-01

    Works and NASA-documented articles are preserved via the collection of various Space Environments and Effects (SEE) related articles. SEE contains and lists the various projects that are ongoing or have been conducted with the help of NASA. The goal of the SEE program is to make publicly available the environment technologies that are required to design, manufacture, and operate reliable, cost-effective spacecraft for the government and commercial sectors. Of the many projects contained within the SEE program, the Lunar-E Library and Spacecraft Materials Selector (SMS) have been selected for a more user-friendly means of making the tools easily available to the public. Previously, this information, while still available, required a person or entity to request access from a point of contact at NASA and wait for the requested bundled-software DVD via postal service. The redesign of the material presentation and availability maps access to a single-step process with faster turnaround time via the Materials and Processes Technical Information System (MAPTIS) database. This process requires users to register and be verified in order to gain access to the information contained within. Making the software tools and documents available required a combination of specialized in-house data-gathering software tools and software archeology.

  3. OAST Space Theme Workshop. Volume 3: Working group summary. 4: Software (E-4). A. Summary. B. Technology needs (form 1). C. Priority assessment (form 2)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Only a few efforts are currently underway to develop an adequate technology base for the various themes. Particular attention must be given to software commonality and evolutionary capability; to increased system integrity and autonomy; and to improved communications among the program users, the program developers, and the programs themselves. There is a need for a quantum improvement in software development methods and for increased awareness of software by all concerned. Major thrusts identified include: (1) data and systems management; (2) software technology for autonomous systems; (3) technology and methods for improving the software development process; (4) advances related to systems of software elements, including their architecture, their attributes as systems, and their interfaces with users and other systems; and (5) applications of software, including both the basic algorithms used in a number of applications and the software specific to a particular theme or discipline area. The impact of each theme on software is assessed.

  4. Blended Training on Scientific Software: A Study on How Scientific Data Are Generated

    ERIC Educational Resources Information Center

    Skordaki, Efrosyni-Maria; Bainbridge, Susan

    2018-01-01

    This paper presents the results of a research study on scientific software training in blended learning environments. The investigation focused on training approaches followed by scientific software users whose goal is the reliable application of such software. A key issue in current literature is the requirement for a theory-substantiated…

  5. Increasing the reliability of ecological models using modern software engineering techniques

    Treesearch

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  6. A methodology for model-based development and automated verification of software for aerospace systems

    NASA Astrophysics Data System (ADS)

    Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.

    Today's software for aerospace systems typically is very complex. This is due to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive workflow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
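
    A minimal sketch of the explicit-state exploration behind model checking: breadth-first search over the whole state space of a toy mode-switch controller, asserting a safety invariant in every reachable state. The model is illustrative, not the TET-1 software:

        # Minimal sketch: explicit-state model checking of a toy mode controller.
        from collections import deque

        def successors(state):
            mode, torque_on = state
            if mode == "safe":
                yield ("nominal", torque_on)
            elif mode == "nominal":
                yield ("safe", False)        # safing always switches torquers off
                yield ("nominal", not torque_on)

        def invariant(state):
            mode, torque_on = state
            return not (mode == "safe" and torque_on)  # never torque in safe mode

        def check(initial):
            seen, queue = {initial}, deque([initial])
            while queue:
                s = queue.popleft()
                assert invariant(s), f"invariant violated in {s}"
                for nxt in successors(s):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return len(seen)

        print("states explored:", check(("safe", False)))  # full coverage of the model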

  7. Toward improved peptide feature detection in quantitative proteomics using stable isotope labeling.

    PubMed

    Nilse, Lars; Sigloch, Florian Christoph; Biniossek, Martin L; Schilling, Oliver

    2015-08-01

    Reliable detection of peptides in LC-MS data is a key algorithmic step in the analysis of quantitative proteomics experiments. While highly abundant peptides can be detected reliably by most modern software tools, there is much less agreement on medium and low-intensity peptides in a sample. The choice of software tools can have a big impact on the quantification of proteins, especially for proteins that appear in lower concentrations. However, in many experiments, it is precisely this region of less abundant but substantially regulated proteins that holds the biggest potential for discoveries. This is particularly true for discovery proteomics in the pharmacological sector with a specific interest in key regulatory proteins. In this viewpoint article, we discuss how the development of novel software algorithms allows us to study this region of the proteome with increased confidence. Reliable results are one of many aspects to be considered when deciding on a bioinformatics software platform. Deployment into existing IT infrastructures, compatibility with other software packages, scalability, automation, flexibility, and support need to be considered and are briefly addressed in this viewpoint article.

  8. Accelerating Project and Process Improvement using Advanced Software Simulation Technology: From the Office to the Enterprise

    DTIC Science & Technology

    2010-04-29

    Larry Smith, Software Technology Support Center, 517 SMXS/MXDEA, 6022 Fir Avenue, Hill AFB, UT 84056.

  9. 31 CFR 545.304 - Importation into the United States.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., software, or technology, the term importation into the United States means the bringing of any goods, software, or technology into the United States. However, with respect to goods, software or technology... technology into the United States with the intent to unlade. See also § 545.404. (b) With respect to services...

  10. Intra- and interrater reliability of the Chicago Classification of achalasia subtypes in pediatric high-resolution esophageal manometry (HRM) recordings.

    PubMed

    Singendonk, M M J; Rosen, R; Oors, J; Rommel, N; van Wijk, M P; Benninga, M A; Nurko, S; Omari, T I

    2017-11-01

    Subtyping achalasia by high-resolution manometry (HRM) is clinically relevant, as response to therapy and prognosis have been shown to vary accordingly. The aim of this study was to assess the inter- and intrarater reliability of diagnosing achalasia and achalasia subtyping in children using the Chicago Classification (CC) V3.0. Six observers analyzed 40 pediatric HRM recordings (22 achalasia and 18 non-achalasia) twice by using dedicated analysis software (ManoView 3.0, Given Imaging, Los Angeles, CA, USA). Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), intrabolus pressurization pattern (IBP), and distal latency (DL) were extracted and analyzed hierarchically. Cohen's κ (two raters) and Fleiss' κ (more than two raters) were used for categorical data, and the intraclass correlation coefficient (ICC) for ordinal data. Based on the results of dedicated analysis software only, intra- and interrater reliability was excellent and moderate (κ=0.89 and κ=0.52, respectively) for differentiating achalasia from non-achalasia. For subtyping achalasia, reliability decreased to substantial and fair (κ=0.72 and κ=0.28, respectively). When observers were allowed to change the software-driven diagnosis according to their own interpretation of the manometric patterns, intra- and interrater reliability increased for diagnosing achalasia (κ=0.98 and κ=0.92, respectively) and for subtyping achalasia (κ=0.79 and κ=0.58, respectively). Intra- and interrater agreement for diagnosing achalasia when using HRM and the CC was very good to excellent when the results of automated analysis software were interpreted by experienced observers. More variability was seen when relying solely on the software-driven diagnosis and for subtyping achalasia. Therefore, diagnosing and subtyping achalasia should be performed in pediatric motility centers with significant expertise.
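
    A minimal sketch of the agreement statistic behind the reported values: Cohen's kappa for two raters, kappa = (p_o - p_e) / (1 - p_e), computed here on a hypothetical confusion table, not the study's data:

        # Minimal sketch: Cohen's kappa from a two-rater confusion table
        # (counts hypothetical).
        def cohens_kappa(table):
            n = sum(sum(row) for row in table)
            p_o = sum(table[i][i] for i in range(len(table))) / n   # observed agreement
            p_e = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                      for i in range(len(table)))                   # chance agreement
            return (p_o - p_e) / (1 - p_e)

        # rows = rater 1, cols = rater 2; categories: achalasia vs non-achalasia
        confusion = [[20, 2],
                     [3, 15]]
        print(f"kappa = {cohens_kappa(confusion):.2f}")  # ~0.75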

  11. Technology-driven dietary assessment: a software developer’s perspective

    PubMed Central

    Buday, Richard; Tapia, Ramsey; Maze, Gary R.

    2015-01-01

    Dietary researchers need new software to improve nutrition data collection and analysis, but creating information technology is difficult. Software development projects may be unsuccessful due to inadequate understanding of needs, management problems, technology barriers or legal hurdles. Cost overruns and schedule delays are common. Barriers facing scientific researchers developing software include workflow, cost, schedule, and team issues. Different methods of software development and the role that intellectual property rights play are discussed. A dietary researcher must carefully consider multiple issues to maximize the likelihood of success when creating new software. PMID:22591224

  12. Effect of system workload on operating system reliability - A study on IBM 3081

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Rossetti, D. J.

    1985-01-01

    This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware related; it is found that more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload failure dependency, based on detailed investigations of the failure data, are discussed.

  13. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, Charlie; Crook, Jerry

    1997-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.

  14. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue.
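
    A minimal sketch of the kind of quantity the toolbox estimates, under a simplified one-facet design with hypothetical variance components: the dependability of an ERP score averaged over k retained trials, phi = var_person / (var_person + var_error / k):

        # Minimal sketch (hypothetical variance components, simplified one-facet
        # design): G-theory dependability of an ERP score averaged over k trials.
        def dependability(var_person: float, var_error: float, k_trials: int) -> float:
            return var_person / (var_person + var_error / k_trials)

        var_person, var_error = 4.0, 12.0   # illustrative microvolt^2 components
        for k in (4, 8, 16, 32):
            print(f"{k:2d} trials retained: phi = {dependability(var_person, var_error, k):.2f}")
        # shows why the number of trials retained for averaging drives score reliability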

  15. Major transitions in information technology.

    PubMed

    Valverde, Sergi

    2016-08-19

    When looking at the history of technology, we can see that not all inventions are of equal importance. Only a few technologies have the potential to start a new branching series (specifically, by increasing diversity), have a lasting impact on human life, and ultimately become turning points. Technological transitions correspond to times and places in the past when a large number of novel artefact forms or behaviours appeared together or in rapid succession. Why does that happen? Is technological change continuous and gradual, or does it occur in sudden leaps and bounds? The evolution of information technology (IT) allows for a quantitative and theoretical approach to technological transitions. The value of information systems experiences sudden changes (i) when we learn how to use this technology, (ii) when we accumulate a large amount of information, and (iii) when communities of practice create and exchange free information. The coexistence of gradual improvements and discontinuous technological change is a consequence of the asymmetric relationship between complexity and hardware and software. Using a cultural evolution approach, we suggest that sudden changes in the organization of ITs depend on the high costs of maintaining and transmitting reliable information. This article is part of the themed issue 'The major synthetic evolutionary transitions'.

  16. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    NASA Astrophysics Data System (ADS)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in the Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMUs) have been installed in the ERCOT grid in recent years, phasor data with a resolution of 30 samples per second is being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. The real-time monitoring software "RTDMS"® enables ERCOT to analyze small signal stability conditions by monitoring the phase angles and oscillations. The recorded phasor data enables ERCOT to validate the existing dynamic models of conventional and/or wind generators.
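
    A hedged sketch of the kind of angle monitoring described (not the actual RTDMS logic; the alarm threshold and names are invented for illustration): flag each synchronized pair of PMU angle samples whose separation exceeds a limit.

      ANGLE_ALARM_DEG = 40.0   # hypothetical alarm threshold, degrees

      def angle_separation(theta_a, theta_b):
          # Smallest signed difference between two bus voltage phase angles (degrees).
          return (theta_a - theta_b + 180.0) % 360.0 - 180.0

      def alarms(stream_a, stream_b):
          # One flag per synchronized sample pair (30 pairs per second at the
          # PMU reporting rate cited above).
          for a, b in zip(stream_a, stream_b):
              yield abs(angle_separation(a, b)) > ANGLE_ALARM_DEG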

  17. Development of the Telehealth Usability Questionnaire (TUQ).

    PubMed

    Parmanto, Bambang; Lewis, Allen Nelson; Graham, Kristin M; Bertolet, Marnie H

    2016-01-01

    Current telehealth usability questionnaires are designed primarily for older technologies, where telehealth interaction is conducted over dedicated videoconferencing applications. However, telehealth services are increasingly conducted over computer-based systems that rely on commercial software and a user-supplied computer interface. Therefore, a usability questionnaire that addresses the changes in telehealth service delivery and technology is needed. The Telehealth Usability Questionnaire (TUQ) was developed to evaluate the usability of telehealth implementation and services. This paper addresses: (1) the need for a new measure of telehealth usability, (2) the development of the TUQ, (3) intended uses for the TUQ, and (4) the reliability of the TUQ. Analyses indicate that the TUQ is a solid, robust, and versatile measure that can be used to measure the quality of the computer-based user interface and the quality of the telehealth interaction and services.

  18. Solving Autonomy Technology Gaps through Wireless Technology and Orion Avionics Architectural Principles

    NASA Astrophysics Data System (ADS)

    Black, Randy; Bai, Haowei; Michalicek, Andrew; Shelton, Blaine; Villela, Mark

    2008-01-01

    Currently, autonomy in space applications is limited by a variety of technology gaps. Innovative application of wireless technology and avionics architectural principles drawn from the Orion crew exploration vehicle provide solutions for several of these gaps. The Vision for Space Exploration envisions extensive use of autonomous systems. Economic realities preclude continuing the level of operator support currently required of autonomous systems in space. In order to decrease the number of operators, more autonomy must be afforded to automated systems. However, certification authorities have been notoriously reluctant to certify autonomous software in the presence of humans or when costly missions may be jeopardized. The Orion avionics architecture, drawn from advanced commercial aircraft avionics, is based upon several architectural principles including partitioning in software. Robust software partitioning provides "brick wall" separation between software applications executing on a single processor, along with controlled data movement between applications. Taking advantage of these attributes, non-deterministic applications can be placed in one partition and a "Safety" application created in a separate partition. This "Safety" partition can track the position of astronauts or critical equipment and prevent any unsafe command from executing. Only the Safety partition need be certified to a human rated level. As a proof-of-concept demonstration, Honeywell has teamed with the Ultra WideBand (UWB) Working Group at NASA Johnson Space Center to provide tracking of humans, autonomous systems, and critical equipment. Using UWB, the NASA team can determine positioning to within less than one inch resolution, allowing a Safety partition to halt operation of autonomous systems in the event that an unplanned collision is imminent. Another challenge facing autonomous systems is the coordination of multiple autonomous agents. Current approaches address the issue as one of networking and coordination of multiple independent units, each with its own mission. As a proof-of-concept, Honeywell is developing and testing various algorithms that lead to a deterministic, fault tolerant, reliable wireless backplane. Just as advanced avionics systems control several subsystems, actuators, sensors, displays, etc., a single "master" autonomous agent (or base station computer) could control multiple autonomous systems. The problem is simplified to controlling a flexible body consisting of several sensors and actuators, rather than one of coordinating multiple independent units. By filling technology gaps associated with space-based autonomous systems, wireless technology and Orion architectural principles provide the means for decreasing operational costs and simplifying problems associated with collaboration of multiple autonomous systems.
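
    A minimal sketch of the "Safety" partition idea, under assumptions (the names and keep-out radius are hypothetical; the real monitor would run in a certified, robustly partitioned avionics environment rather than Python): track positions, e.g. from UWB ranging, and veto any command while a collision appears imminent.

      from dataclasses import dataclass

      @dataclass
      class Track:
          x: float
          y: float
          z: float            # position from UWB ranging, inches

      KEEP_OUT_IN = 12.0      # hypothetical keep-out radius, inches

      def distance(a, b):
          return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

      def safety_partition(command, robot, humans):
          # Runs in its own robustly partitioned (separately certified) space:
          # pass the command through only if no tracked human is too close.
          if any(distance(robot, h) < KEEP_OUT_IN for h in humans):
              return "HALT"   # veto: unplanned collision imminent
          return command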

  19. 15 CFR Supplement No. 2 to Part 774 - General Technology and Software Notes

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false General Technology and Software Notes... REGULATIONS THE COMMERCE CONTROL LIST Pt. 774, Supp. 2 Supplement No. 2 to Part 774—General Technology and Software Notes 1. General Technology Note. The export of “technology” that is “required” for the...

  20. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    PubMed

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years), were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by a manual method and by a computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.

  1. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault tolerant designs generated for the error reporter and global executive are examined. A description of the alternate routines, implementation requirements, and software validation is included.
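
    The alternate routines described resemble the classic recovery-block pattern. A minimal Python sketch of that pattern (illustrative only; SIFT itself was not written in Python, and the names here are invented):

      def recovery_block(primary, alternate, acceptance_test, *args):
          # Recovery-block pattern: run the primary routine, check its result
          # with an acceptance test, and fall back to the alternate on failure.
          try:
              result = primary(*args)
              if acceptance_test(result):
                  return result
          except Exception:
              pass                       # treat a crash like a failed test
          result = alternate(*args)      # simpler, independently written routine
          if acceptance_test(result):
              return result
          raise RuntimeError("both routines failed the acceptance test")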

  2. Reducing Risk in DoD Software-Intensive Systems Development

    DTIC Science & Technology

    2016-03-01

    intensive systems development risk. This research addresses the use of the Technical Readiness Assessment (TRA) using the nine-level software Technology...The software TRLs are ineffective in reducing technical risk for the software component development. • Without the software TRLs, there is no...effective method to perform software TRA or reduce the technical development risk. The software component will behave as a new, untried technology in nearly

  3. Generation of multiple analog pulses with different duty cycles within VME control system for ICRH Aditya system

    NASA Astrophysics Data System (ADS)

    Joshi, Ramesh; Singh, Manoj; Jadav, H. M.; Misra, Kishor; Kulkarni, S. V.; ICRH-RF Group

    2010-02-01

    Ion Cyclotron Resonance Heating (ICRH) is a promising heating method for fusion devices because of its localized power deposition profile, direct ion heating at high density, and established technology for high RF power generation and transmission at low cost. The control system software of the data acquisition and control system for the steady-state RF ICRH system (RF ICRH DAC), used to operate the RF generator on Aditya, was originally based on single digital pulse operation of the RF source. It is planned to integrate multiple analog pulses with different duty cycles, slaved to a master digital pulse, for the RF ICRH DAC on the Aditya tokamak. The task of the RF ICRH DAC is the control and acquisition of all ICRH system operations, including all control loops, with data acquired for post-analysis using a Java-based tool, supporting pre-ionization startup as well as heating experiments that use multiple RF pulses of different powers and durations. The experiment is based on the idea of using a single RF generator to energize the antenna inside the tokamak to radiate power twice: the first analog pulse produces pre-ionization and the second analog pulse produces heating. The whole system is based on standard client-server technology using the TCP/IP protocol. The DAC software runs on the Linux operating system for highly reliable, secure, and stable fail-safe operation. The client is built with a Tcl/Tk-style toolkit for the user interface in a C/C++ environment, reliable programming languages widely used for stand-alone operation, while the server runs in a VxWorks-like real-time operating system environment. The paper focuses on the data acquisition and monitoring software of the Aditya RF ICRH system, with analog pulses in slave mode and the digital pulse in master mode for control, acquisition, monitoring, and interlocking.

  4. Technology Assessment Software Package: Final Report.

    ERIC Educational Resources Information Center

    Hutinger, Patricia L.

    This final report describes the Technology Assessment Software Package (TASP) Project, which produced developmentally appropriate technology assessment software for children from 18 months through 8 years of age who have moderate to severe disabilities that interfere with their interaction with people, objects, tasks, and events in their…

  5. The Validation of a Software Evaluation Instrument.

    ERIC Educational Resources Information Center

    Schmitt, Dorren Rafael

    This study, conducted at six southern universities, analyzed the validity and reliability of a researcher developed instrument designed to evaluate educational software in secondary mathematics. The instrument called the Instrument for Software Evaluation for Educators uses measurement scales, presents a summary section of the evaluation, and…

  6. Software Engineering for Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2014-01-01

    The Spacecraft Software Engineering Branch of NASA Johnson Space Center (JSC) provides world-class products, leadership, and technical expertise in software engineering, processes, technology, and systems management for human spaceflight. The branch contributes to major NASA programs (e.g. ISS, MPCV/Orion) with in-house software development and prime contractor oversight, and maintains the JSC Engineering Directorate CMMI rating for flight software development. Software engineering teams work with hardware developers, mission planners, and system operators to integrate flight vehicles, habitats, robotics, and other spacecraft elements. They seek to infuse automation and autonomy into missions, and apply new technologies to flight processor and computational architectures. This presentation will provide an overview of key software-related projects, software methodologies and tools, and technology pursuits of interest to the JSC Spacecraft Software Engineering Branch.

  7. Annotated bibliography of Software Engineering Laboratory literature

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Valett, Jon D.

    1991-01-01

    An annotated bibliography of technical papers, documents, and memorandums produced by or related to the Software Engineering Laboratory is given. More than 100 publications are summarized. These publications cover many areas of software engineering and range from research reports to software documentation. All materials have been grouped into eight general subject areas for easy reference: The Software Engineering Laboratory; The Software Engineering Laboratory: Software Development Documents; Software Tools; Software Models; Software Measurement; Technology Evaluations; Ada Technology; and Data Collection. Subject and author indexes further classify these documents by specific topic and individual author.

  8. Software System Safety and the NASA Aeronautics Blueprint

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael; Hayhurst, Kelly J.

    2002-01-01

    NASA's Aeronautics Blueprint lays out a research agenda for the Agency's aeronautics program. The word software appears only four times in this Blueprint, but the critical importance of safe and correct software to the fulfillment of the proposed research is evident on almost every page. Most of the technology solutions proposed to address challenges in aviation are software-dependent technologies. Of the fifty-two specific technology solutions described in the Blueprint, forty-one depend, at least in part, on software for success. For thirty-five of these forty-one, software is not only critical to success, but also to human safety. That is, implementing the technology solutions will require using software in such a way that it may, if not specified, designed, and implemented properly, lead to fatal accidents. These results have at least two implications for the research based on the Blueprint: (1) knowledge about the current state-of-the-art and state-of-the-practice in software engineering and software system safety is essential, and (2) research into current unsolved problems in these software disciplines is also essential.

  9. Easy and effective--web-based information systems designed and maintained by physicians: experience with two gynecological projects.

    PubMed

    Kupka, M S; Dorn, C; Richter, O; van der Ven, H; Baur, M

    2003-08-01

    It is well established that medical information sources are developing continuously from printed media to digital online sources. To demonstrate the effectiveness and feasibility of web-based information sources for health professionals that are designed and maintained in a decentralized fashion, two projects are described. The information platform of the German Working Group for Information Technologies in Gynecology and Obstetrics (AIG) and the information source concerning the German Registry for in vitro fertilization (DIR) were implemented using ordinary software and standard computer equipment. Only minimal resources and training were necessary to run safe and reliable web-based information sources with high effectiveness in both cost and time expenditure.

  10. Health management and controls for earth to orbit propulsion systems

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.

    1992-01-01

    Fault detection and isolation for advanced rocket engine controllers are discussed, focusing on advanced sensing systems and software which significantly improve component failure detection for engine safety and health management. Aerojet's Space Transportation Main Engine controller for the National Launch System is the state of the art in fault tolerant engine avionics. Health management systems provide high levels of automated fault coverage and significantly improve vehicle delivered reliability and lower preflight operations costs. Key technologies, including the sensor data validation algorithms and flight capable spectrometers, have been demonstrated in ground applications and are found to be suitable for bridging programs into flight applications.
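
    As a hedged illustration of what sensor data validation can look like (not Aerojet's actual algorithm; the range-check-plus-mid-value-select strategy is a common textbook pattern assumed here for illustration):

      def validate_and_select(readings, lo, hi):
          # Drop channels that fail a range check, then mid-value-select the
          # survivors. Returns (selected value, number of healthy channels).
          good = sorted(r for r in readings if lo <= r <= hi)
          if not good:
              raise ValueError("all redundant channels failed the range check")
          return good[len(good) // 2], len(good)

    For example, validate_and_select([2012.0, 2018.5, 5500.0], lo=0.0, hi=4000.0) rejects the failed third channel and returns the median of the surviving readings.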

  11. A Holistic Approach to Systems Development

    NASA Technical Reports Server (NTRS)

    Wong, Douglas T.

    2008-01-01

    Introduces a holistic and iterative design process. Continuous process, but can be loosely divided into four stages, with more effort spent early on in the design. Human-centered and multidisciplinary, with an emphasis on life-cycle cost. Extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important. Disciplines that should be involved in the design include: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life-cycle cost, application engineering, etc.

  12. Design of testbed and emulation tools

    NASA Technical Reports Server (NTRS)

    Lundstrom, S. F.; Flynn, M. J.

    1986-01-01

    The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software based set of hierarchical tools was chosen to provide maximum flexibility, to ease in moving to new computers as technology improves and to take advantage of the inherent reliability and availability of commercially available computing systems.

  13. Advancements for continuous miners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiscor, S.

    2007-06-15

    Design changes and new technology make the modern continuous miner more user friendly. Two of the major manufacturers, Joy Mining Machinery and DBT, both based near Pittsburgh, PA, USA, have recently acquired other OEMs to offer a greater product line. Joy's biggest development in terms of improving cutting time is the FACEBOSS Control System which has an operator assistance element and Joy Surface Reporting Software (JSRP). Joy's WetHead continuous miners have excellent performance. DBT is researching ways to make the machines more reliable with new drive systems. It has also been experimenting with water sprays to improve dust suppression. 4 photos.

  14. Warning system against locomotive driving wheel flaccidity

    NASA Astrophysics Data System (ADS)

    Luo, Peng

    2014-09-01

    Causes of locomotive driving wheel flaccidity are discussed. An alarm system against locomotive driving wheel flaccidity is designed by means of infrared temperature measurement and Hall sensor measurement techniques. The design scheme of the system and the principle of detecting driving wheel flaccidity with temperature and Hall sensors are introduced, and the threshold temperature for the infrared alarm is determined. The circuit system is designed using microcontroller technology, and the software is written in assembly language. An experiment measuring the flaccid displacement with the Hall sensor is simulated. The results show that the system runs well with high reliability and low cost, giving it a wide prospect of application and popularization.
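
    A minimal sketch of the dual-criterion alarm logic the abstract describes, with hypothetical thresholds (the paper's actual limits and its assembly-language implementation are not reproduced here):

      TEMP_ALARM_C = 70.0     # hypothetical hub temperature threshold, deg C
      DISP_ALARM_MM = 0.5     # hypothetical flaccid-displacement threshold, mm

      def wheel_alarm(hub_temp_c, hall_displacement_mm):
          # Alarm if either sensing channel indicates a loosening drive wheel.
          return hub_temp_c > TEMP_ALARM_C or hall_displacement_mm > DISP_ALARM_MM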

  15. Lightweight UDP Pervasive Protocol in Smart Home Environment Based on Labview

    NASA Astrophysics Data System (ADS)

    Kurniawan, Wijaya; Hannats Hanafi Ichsan, Mochammad; Rizqika Akbar, Sabriansyah; Arwani, Issa

    2017-04-01

    TCP (Transmission Control Protocol) technology works well in a reliable environment, but is less suited to an environment where the entire Smart Home network is connected locally. Pervasive protocols that employ TCP transmit data more slowly because they must perform a handshaking process in advance, and they cannot broadcast the data. A smart home environment does not need the large, complex data transmissions between monitoring site and monitoring center that a smart home strain monitoring system would otherwise require. UDP (User Datagram Protocol) technology is quick and simple in its data transmission process: UDP can broadcast messages because it requires no handshaking, and it uses memory more efficiently. LabVIEW is a programming environment for processing and visualizing data in the field of data acquisition. This paper proposes to examine pervasive UDP protocol implementations in a smart home environment based on LabVIEW. The UDP protocol was coded in LabVIEW, and experiments performed on a PC showed that it works properly.
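
    The TCP-versus-UDP contrast above is easy to see with the standard socket API. A minimal Python sketch (illustrative only; the paper's implementation is in LabVIEW, and the port number is arbitrary) of connectionless broadcast send and receive:

      import socket

      PORT = 5005   # arbitrary port chosen for illustration

      def broadcast_status(payload: bytes):
          # Sender: one connectionless datagram to every node on the local
          # network; no handshake precedes the send.
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
          s.sendto(payload, ("255.255.255.255", PORT))
          s.close()

      def receive_status():
          # Receiver: any listening node picks the datagram up directly.
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.bind(("", PORT))
          data, sender = s.recvfrom(1024)
          s.close()
          return data, sender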

  16. Planning and Management of Real-Time Geospatialuas Missions Within a Virtual Globe Environment

    NASA Astrophysics Data System (ADS)

    Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.

    2011-09-01

    This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the-art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.

  17. Does This Really Work? The Keys to Implementing New Technology while Providing Evidence that Technology Is Successful

    ERIC Educational Resources Information Center

    Sawtelle, Sara

    2008-01-01

    Proving that technology works is not as simple as proving that a new vendor for art supplies is more cost effective. Technology effectiveness requires both the right software and the right implementation. Just having the software is not enough. Proper planning, training, leadership, support, pedagogy, and software use--along with many other…

  18. 15 CFR Supplement No. 1 to Part 734 - Questions and Answers-Technology and Software Subject to the EAR

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Questions and Answers-Technology and... Supplement No. 1 to Part 734—Questions and Answers—Technology and Software Subject to the EAR This Supplement No. 1 contains explanatory questions and answers relating to technology and software that is subject...

  19. 15 CFR Supplement No. 2 to Part 730 - Technical Advisory Committees

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., materials, or supplies, including technology, software, and other information, that are subject to export... to a clearly defined grouping of articles, materials, or supplies, including technology, software, or..., including technology, software, and other information, that are subject to export controls because of their...

  20. 15 CFR Supplement No. 2 to Part 730 - Technical Advisory Committees

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., materials, or supplies, including technology, software, and other information, that are subject to export... to a clearly defined grouping of articles, materials, or supplies, including technology, software, or..., including technology, software, and other information, that are subject to export controls because of their...

  1. 15 CFR Supplement No. 2 to Part 730 - Technical Advisory Committees

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., materials, or supplies, including technology, software, and other information, that are subject to export... to a clearly defined grouping of articles, materials, or supplies, including technology, software, or..., including technology, software, and other information, that are subject to export controls because of their...

  2. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, reuse software, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  3. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Current and advanced act control system definition study, volume 1

    NASA Technical Reports Server (NTRS)

    Hanks, G. W.; Shomber, H. A.; Dethman, H. A.; Gratzer, L. B.; Maeshiro, A.; Gangsaas, D.; Blight, J. D.; Buchan, S. M.; Crumb, C. B.; Dorwart, R. J.

    1981-01-01

    An active controls technology (ACT) system architecture was selected based on current technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The system selected employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of probability of crucial function failure of less than 1 × 10^-9 per flight of 1 hr can be met with current technology system components, if the software is assumed fault free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.

  4. Big Software for SmallSats: Adapting cFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats is driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship satellite level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.

  5. Development of an Environment for Software Reliability Model Selection

    DTIC Science & Technology

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... multiversion programming, 7. Hardware can be repaired by spare modules, which is not the case for software, ... Preventive maintenance is very important

  6. A Course in Real-Time Embedded Software

    ERIC Educational Resources Information Center

    Archibald, J. K.; Fife, W. S.

    2007-01-01

    Embedded systems are increasingly pervasive, and the creation of reliable controlling software offers unique challenges. Embedded software must interact directly with hardware, it must respond to events in a time-critical fashion, and it typically employs concurrency to meet response time requirements. This paper describes an innovative course…

  7. Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures

    DTIC Science & Technology

    2017-12-01

    This report describes software that provides a probabilistic estimate of time-to-failure for a corroding anchor strand system. These anchor...stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and...Reliability (CAS_Stab-R) produces probabilistic Remaining Anchor Lifetime estimates for anchor cables based upon the direct corrosion rate for the

  8. Modular preoperative planning software for computer-aided oral implantology and the application of a novel stereolithographic template: a pilot study.

    PubMed

    Chen, Xiaojun; Yuan, Jianbing; Wang, Chengtao; Huang, Yuanliang; Kang, Lu

    2010-09-01

    In the field of oral implantology, there is a trend toward computer-aided implant surgery, especially the application of computerized tomography (CT)-derived surgical templates. However, because of the relatively unsatisfactory match between the templates and receptor sites, conventional surgical templates may not be accurate enough for the severely resorbed edentulous cases during the procedure of transferring the preoperative plan to the actual surgery. The purpose of this study is to introduce a novel bone-tooth-combined-supported surgical guide, which is designed by utilizing a special modular software and fabricated via a stereolithography technique using both laser scanning and CT imaging, thus improving the fit accuracy and reliability. A modular preoperative planning software was developed for computer-aided oral implantology. With the introduction of dynamic link libraries and some well-known free, open-source software libraries such as Visualization Toolkit (Kitware, Inc., New York, USA) and Insight Toolkit (Kitware, Inc.), a plug-in evolutive software architecture was established, allowing for expandability, accessibility, and maintainability in our system. To provide a link between the preoperative plan and the actual surgery, a novel bone-tooth-combined-supported surgical template was fabricated, utilizing laser scanning, image registration, and rapid prototyping. Clinical studies were conducted on four partially edentulous cases to make a comparison with the conventional bone-supported templates. The fixation was more stable than tooth-supported templates because laser scanning technology obtained detailed dentition information, which brought about the unique topography between the match surface of the templates and the adjacent teeth. The average distance deviations at the coronal and apical point of the implant were 0.66 mm (range: 0.3-1.2) and 0.86 mm (range: 0.4-1.2), and the average angle deviation was 1.84 degrees (range: 0.6-2.8 degrees). This pilot study proves that the novel combined-supported templates are superior to the conventional ones. However, more clinical cases will be conducted to demonstrate their feasibility and reliability.

  9. 15 CFR 732.2 - Steps regarding scope of the EAR.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) of this section. (b) Step 2: Publicly available technology and software. This step is relevant for both exports and reexports. Determine if your technology or software is publicly available as defined... practical examples describing publicly available technology and software that are outside the scope of the...

  10. 15 CFR 732.2 - Steps regarding scope of the EAR.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) of this section. (b) Step 2: Publicly available technology and software. This step is relevant for both exports and reexports. Determine if your technology or software is publicly available as defined... practical examples describing publicly available technology and software that are outside the scope of the...

  11. 15 CFR 732.2 - Steps regarding scope of the EAR.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) of this section. (b) Step 2: Publicly available technology and software. This step is relevant for both exports and reexports. Determine if your technology or software is publicly available as defined... practical examples describing publicly available technology and software that are outside the scope of the...

  12. 15 CFR 732.2 - Steps regarding scope of the EAR.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) of this section. (b) Step 2: Publicly available technology and software. This step is relevant for both exports and reexports. Determine if your technology or software is publicly available as defined... practical examples describing publicly available technology and software that are outside the scope of the...

  13. 15 CFR 732.2 - Steps regarding scope of the EAR.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) of this section. (b) Step 2: Publicly available technology and software. This step is relevant for both exports and reexports. Determine if your technology or software is publicly available as defined... practical examples describing publicly available technology and software that are outside the scope of the...

  14. 31 CFR 545.408 - Offshore transactions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) With respect to goods, software, technology, or services which the U.S. person knows, or has reason to... States of goods, software, technology or services owned or controlled by the Taliban or persons whose... dealing in such blocked goods, software, technology, or services. (c) Example. A U.S. person may not...

  15. SWIFT2: Software for continuous ensemble short-term streamflow forecasting for use in research and operations

    NASA Astrophysics Data System (ADS)

    Perraud, Jean-Michel; Bennett, James C.; Bridgart, Robert; Robertson, David E.

    2016-04-01

    Research undertaken through the Water Information Research and Development Alliance (WIRADA) has laid the foundations for continuous deterministic and ensemble short-term forecasting services. One output of this research is the software Short-term Water Information Forecasting Tools version 2 (SWIFT2). SWIFT2 is developed for use in research on short-term streamflow forecasting techniques as well as operational forecasting services at the Australian Bureau of Meteorology. The variety of uses in research and operations requires a modular software system whose components can be arranged in applications that are fit for each particular purpose, without unnecessary software duplication. SWIFT2 modelling structures consist of sub-areas of hydrologic models, nodes and links with in-stream routing and reservoirs. While this modelling structure is customary, SWIFT2 is built from the ground up for computational and data intensive applications such as ensemble forecasts necessary for the estimation of the uncertainty in forecasts. Support for parallel computation on multiple processors or on a compute cluster is a primary use case. A convention is defined to store large multi-dimensional forecasting data and its metadata using the netCDF library. SWIFT2 is written in modern C++ with state-of-the-art software engineering techniques and practices. A salient technical feature is a well-defined application programming interface (API) to facilitate access from different applications and technologies. SWIFT2 is already seamlessly accessible on Windows and Linux via packages in R, Python, Matlab and .NET languages such as C# and F#. Command line or graphical front-end applications are also feasible. This poster gives an overview of the technology stack, and illustrates the resulting features of SWIFT2 for users. Research and operational uses share the same common core C++ modelling shell for consistency, but augmented by different software modules suitable for each context. The accessibility via interactive modelling languages is particularly amenable to using SWIFT2 in exploratory research, with a dynamic and versatile experimental modelling workflow. This does not come at the expense of the stability and reliability required for use in operations, where only mature and stable components are used.
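
    As a hedged sketch of the kind of netCDF convention described (the file, dimension, and variable names below are assumptions for illustration, not SWIFT2's actual convention), using the Python netCDF4 package:

      import numpy as np
      from netCDF4 import Dataset

      with Dataset("ensemble_forecast.nc", "w") as ds:
          ds.createDimension("station", 10)
          ds.createDimension("ens_member", 100)
          ds.createDimension("lead_time", 48)          # hours ahead
          q = ds.createVariable("q_fcast", "f8",
                                ("station", "ens_member", "lead_time"))
          q.units = "m3 s-1"                           # metadata travels with the data
          q.long_name = "ensemble streamflow forecast"
          q[:] = np.zeros((10, 100, 48))               # placeholder values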

  16. The Future of Statistical Software. Proceedings of a Forum--Panel on Guidelines for Statistical Software (Washington, D.C., February 22, 1991).

    ERIC Educational Resources Information Center

    National Academy of Sciences - National Research Council, Washington, DC.

    The Panel on Guidelines for Statistical Software was organized in 1990 to document, assess, and prioritize problem areas regarding quality and reliability of statistical software; present prototype guidelines in high priority areas; and make recommendations for further research and discussion. This document provides the following papers presented…

  17. Research of real-time communication software

    NASA Astrophysics Data System (ADS)

    Li, Maotang; Guo, Jingbo; Liu, Yuzhong; Li, Jiahong

    2003-11-01

    Real-time communication plays an increasingly important role in our work, our lives, and ocean monitoring. With the rapid progress of computer and communication techniques and the miniaturization of communication systems, adaptable and reliable real-time communication software is needed in ocean monitoring systems. This paper presents research on real-time communication software based on a point-to-point satellite intercommunication system. An object-oriented design method is adopted that can transmit and receive video, audio, and engineering data over a satellite channel. Within the real-time communication software, several modules are developed that realize point-to-point satellite intercommunication in the ocean monitoring system. The software offers three advantages. First, it increases the reliability of the point-to-point satellite intercommunication system. Second, optional parameters are built in, which greatly increases the flexibility of system operation. Third, some hardware is replaced by the real-time communication software, which not only decreases the cost of the system and promotes the miniaturization of the communication system, but also increases its agility.

  18. Extracting data from figures with software was faster, with higher interrater reliability than manual extraction.

    PubMed

    Jelicic Kadic, Antonia; Vucic, Katarina; Dosenovic, Svjetlana; Sapunar, Damir; Puljak, Livia

    2016-06-01

    To compare speed and accuracy of graphical data extraction using manual estimation and open source software. Data points from eligible graphs/figures published in randomized controlled trials (RCTs) from 2009 to 2014 were extracted by two authors independently, both by manual estimation and with Plot Digitizer, an open source software tool. Corresponding authors of each RCT were contacted up to four times via e-mail to obtain the exact numbers that were used to create the graphs. Accuracy of each method was compared against the source data from which the original graphs were produced. Software data extraction was significantly faster, reducing extraction time by 47%. Percent agreement between the two raters was 51% for manual and 53.5% for software data extraction. Percent agreement between the raters and original data was 66% vs. 75% for the first rater and 69% vs. 73% for the second rater, for manual and software extraction, respectively. Data extraction from figures should be conducted using software, whereas manual estimation should be avoided. Using software for extraction of data presented only in figures is faster and enables higher interrater reliability. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. What does voice-processing technology support today?

    PubMed Central

    Nakatsu, R; Suzuki, Y

    1995-01-01

    This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed. The software development environment, which is a key technology in developing applications software ranging from DSP software to support software, is also described. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions.

  20. SAGA: A project to automate the management of software production systems

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Beckman-Davies, C. S.; Benzinger, L.; Beshers, G.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.

    1986-01-01

    Research into software development is required to reduce its production cost and to improve its quality. Modern software systems, such as the embedded software required for NASA's space station initiative, stretch current software engineering techniques. The requirements to build large, reliable, and maintainable software systems increase with time. Much theoretical and practical research is in progress to improve software engineering techniques. One such technique is to build a software system or environment which directly supports the software engineering process, i.e., the SAGA project, comprising the research necessary to design and build a software development environment which automates the software engineering process. Progress under SAGA is described.

  1. Collected software engineering papers, volume 7

    NASA Technical Reports Server (NTRS)

    1989-01-01

    A collection is presented of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) during the period Dec. 1988 to Oct. 1989. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. For the convenience of this presentation, the seven papers contained here are grouped into three major categories: (1) Software Measurement and Technology Studies; (2) Measurement Environment Studies; and (3) Ada Technology Studies. The first category presents experimental research and evaluation of software measurement and technology; the second presents studies on software environments pertaining to measurement. The last category represents Ada technology and includes research, development, and measurement studies.

  2. Collected software engineering papers, volume 6

    NASA Technical Reports Server (NTRS)

    1988-01-01

    A collection is presented of technical papers produced by participants in the Software Engineering Laboratory (SEL) during the period 1 Jun. 1987 to 1 Jan. 1989. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. For the convenience of this presentation, the twelve papers contained here are grouped into three major categories: (1) Software Measurement and Technology Studies; (2) Measurement Environment Studies; and (3) Ada Technology Studies. The first category presents experimental research and evaluation of software measurement and technology; the second presents studies on software environments pertaining to measurement. The last category represents Ada technology and includes research, development, and measurement studies.

  3. Four applications of a software data collection and analysis methodology

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.

    1985-01-01

    The evaluation of software technologies suffers because of the lack of quantitative assessment of their effect on software development and modification. A seven-step data collection and analysis methodology couples software technology evaluation with software measurement. Four in-depth applications of the methodology are presented. The four studies represent each of the general categories of analyses on the software product and development process: blocked subject-project studies, replicated project studies, multi-project variation studies, and single project strategies. The four applications are in the areas of, respectively, software testing, cleanroom software development, characteristic software metric sets, and software error analysis.

  4. Model Driven Engineering with Ontology Technologies

    NASA Astrophysics Data System (ADS)

    Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva

    Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet, most often these software models have limited formal semantics, or the underlying (often graphical) software language varies from case to case in a way that makes it hard if not impossible to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over advantages from ontology technologies to the software modeling domain. It will turn out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate varying needs of software modelers.

  5. Infusing Software Engineering Technology into Practice at NASA

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas; Feather, Martin S.; Hinchey, Michael; Markosia, Lawrence

    2006-01-01

    We present an ongoing effort of the NASA Software Engineering Initiative to encourage the use of advanced software engineering technology on NASA projects. Technology infusion is in general a difficult process yet this effort seems to have found a modest approach that is successful for some types of technologies. We outline the process and describe the experience of the technology infusions that occurred over a two year period. We also present some lessons from the experiences.

  6. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: (P/T) × 100% = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) for the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
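
    The equation reduces to simple pixel arithmetic; a one-line Python rendering (the function and argument names are invented for illustration):

      def perforation_percent(p_pixels2, t_pixels2):
          # (P/T) x 100: perforation area as a share of the whole membrane area.
          return 100.0 * p_pixels2 / t_pixels2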

  7. Development of a calibrated software reliability model for flight and supporting ground software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1991-01-01

    The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. The first run, modeling the cumulative number of failures versus execution time, showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.

  8. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    PubMed

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly facilitates this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi-speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using the Microsoft Visual C++ environment under Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using the intraclass correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.

  9. Final Report to the National Energy Technology Laboratory on FY14- FY15 Cooperative Research with the Consortium for Electric Reliability Technology Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vittal, Vijay; Lampis, Anna Rosa

    The Power System Engineering Research Center (PSERC) engages in technological, market, and policy research for an efficient, secure, resilient, adaptable, and economic U.S. electric power system. PSERC, as a founding partner of the Consortium for Electric Reliability Technology Solutions (CERTS), conducted a multi-year program of research for U.S. Department of Energy (DOE) Office of Electricity Delivery and Energy Reliability (OE) to develop new methods, tools, and technologies to protect and enhance the reliability and efficiency of the U.S. electric power system as competitive electricity market structures evolve, and as the grid moves toward wide-scale use of decentralized generation (such as renewable energy sources) and demand-response programs. Phase I of OE's funding for PSERC, under cooperative agreement DE-FC26-09NT43321, started in fiscal year (FY) 2009 and ended in FY2013. It was administered by DOE's National Energy Technology Laboratory (NETL) through a cooperative agreement with Arizona State University (ASU). ASU provided sub-awards to the participating PSERC universities. This document is PSERC's final report to NETL on the activities for OE, conducted through CERTS, from September 2015 through September 2017 utilizing FY 2014 to FY 2015 funding under cooperative agreement DE-OE0000670. PSERC is a thirteen-university consortium with over 30 industry members. Since 1996, PSERC has been engaged in research and education efforts with the mission of "empowering minds to engineer the future electric energy system." Its work is focused on achieving: • An efficient, secure, resilient, adaptable, and economic electric power infrastructure serving society • A new generation of educated technical professionals in electric power • Knowledgeable decision-makers on critical energy policy issues • Sustained, quality university programs in electric power engineering. PSERC core research is funded by industry, with a budget supporting approximately 30 principal investigators and some 70 graduate students and other researchers. Its researchers are multi-disciplinary, conducting research in three principal areas: power systems, power markets and policy, and transmission and distribution technologies. The research is collaborative; each project involves researchers typically at two universities working with industry advisors who have expressed interest in the project. Examples of topics for recent PSERC research projects include grid integration of renewables and energy storage, new tools for taking advantage of increased penetration of real-time system measurements, advanced system protection methods to maintain grid reliability, and risk and reliability assessment of increasingly complex cyber-enabled power systems. PSERC's objective is to proactively address the technical and policy challenges of U.S. electric power systems. To achieve this objective, PSERC works with CERTS to conduct technical research on advanced applications and investigate the design of fair and transparent electricity markets; these research topics align with CERTS research areas 1 and 2: Real-time Grid Reliability Management (Area 1), and Reliability and Markets (Area 2). The CERTS research areas overlap with the PSERC research stems: Power Systems, Power Markets, and Transmission and Distribution Technologies, as described on the PSERC website (see http://www.pserc.org/research/research_program.aspx).
The performing institutions were Arizona State University (ASU), Cornell University (CU), the University of California at Berkeley (UCB), and the University of Illinois at Urbana-Champaign (UIUC). PSERC research activities in the area of reliability and markets focused on electric market and power policy analyses. The resulting studies suggest ways to frame best practices for using organized markets to manage U.S. grid assets reliably and identify the highest-priority areas for improvement. PSERC research activities in the area of advanced applications focused on mid- to long-term software research and development, with anticipated outcomes that move innovative ideas toward real-world application. Under the CERTS research area of Real-time Grid Reliability Management, PSERC focused on Advanced Applications Research and Development (AARD), a subgroup of activities that works to develop advanced applications and tools to operate the electricity delivery system more effectively by enabling advanced analysis, visualization, monitoring and alarming, and decision support capabilities for grid operators.

  10. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology that draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, the Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce C source code that implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.
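
    The code generation step lends itself to a short illustration. Below is a minimal sketch of the kind of standalone C a network code generator might emit for a small fully connected feedforward network with sigmoid activations; the layer sizes, placeholder weights, and function names are ours for illustration and are not actual NETS output.

        #include <math.h>
        #include <stdio.h>

        #define N_IN  3
        #define N_HID 4
        #define N_OUT 1

        /* Trained weights and biases would be emitted here by the
           generator; zeros are placeholders. */
        static const double w_ih[N_HID][N_IN]  = {{0}};
        static const double b_h[N_HID]         = {0};
        static const double w_ho[N_OUT][N_HID] = {{0}};
        static const double b_o[N_OUT]         = {0};

        static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

        /* Forward pass: input layer -> hidden layer -> output layer. */
        static void net_forward(const double in[N_IN], double out[N_OUT])
        {
            double hid[N_HID];
            for (int j = 0; j < N_HID; j++) {
                double s = b_h[j];
                for (int i = 0; i < N_IN; i++) s += w_ih[j][i] * in[i];
                hid[j] = sigmoid(s);
            }
            for (int k = 0; k < N_OUT; k++) {
                double s = b_o[k];
                for (int j = 0; j < N_HID; j++) s += w_ho[k][j] * hid[j];
                out[k] = sigmoid(s);
            }
        }

        int main(void)
        {
            double in[N_IN] = {0.2, 0.7, 0.1}, out[N_OUT];
            net_forward(in, out);
            printf("network output: %f\n", out[0]);
            return 0;
        }

    An embedding application would simply compile this file and call net_forward, which is the integration path the abstract describes.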

  11. Reliability and accuracy analysis of a new semiautomatic radiographic measurement software in adult scoliosis.

    PubMed

    Aubin, Carl-Eric; Bellefleur, Christian; Joncas, Julie; de Lanauze, Dominic; Kadoury, Samuel; Blanke, Kathy; Parent, Stefan; Labelle, Hubert

    2011-05-20

    Radiographic software measurement analysis in adult scoliosis. To assess the accuracy as well as the intra- and interobserver reliability of measuring different indices on preoperative adult scoliosis radiographs using novel measurement software that includes a calibration procedure and semiautomatic features to facilitate the measurement process. Scoliosis requires a careful radiographic evaluation to assess the deformity. Manual and computer-based radiographic measurements have been studied extensively to determine their reliability and reproducibility in adolescent idiopathic scoliosis. Most studies rely on comparing given measurements repeated by the same user or by an expert user. A measure with a small intra- or interobserver error may be deemed to have good repeatability, yet its measurements may still be inaccurate because the ground-truth value is often unknown. Thorough accuracy assessment of radiographic measures is necessary to assess scoliotic deformities, to compare measures at different stages, and to permit valid multicenter studies. Thirty-four sets of adult scoliosis digital radiographs were measured twice by three independent observers using the novel radiographic measurement software, which includes semiautomatic features to facilitate the measurement process. Twenty different measures taken from the Spinal Deformity Study Group radiographic measurement manual were performed on the coronal and sagittal images. Intra- and intermeasurer reliability was assessed for each measure. The accuracy of the measurement software was also assessed using a physical spine model in six different scoliotic configurations as a true reference. The majority of the measures demonstrated good to excellent intra- and intermeasurer reliability, except for sacral obliquity. The standard deviation of all the measures was very small: ≤ 4.2° for the Cobb angles, ≤ 4.2° for the kyphosis, ≤ 5.7° for the lordosis, ≤ 3.9° for the pelvic angles, and ≤ 5.3° for the sacral angles. The variability in the linear measurements (distances) was < 4 mm. The intermeasurer variance was 1.7 and 2.6 times greater than the intrameasurer variance for the angular and linear measures, respectively. Image quality positively influenced intermeasurer reliability, especially for the proximal thoracic Cobb angle, T10-L2 lordosis, sacral slope, and L5 seating. The accuracy study revealed that, on average, the difference in the angular measures was < 2° for the Cobb angles and < 4° for the other angles, except the T2-T12 kyphosis (5.3°). The linear measures all differed by < 3.5 mm on average. The majority of the measures analyzed in this study demonstrated good to excellent reliability and accuracy. The novel semiautomatic measurement software can be recommended for clinical, research, or multicenter study purposes.
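
    The repeatability figures above are standard deviations of repeated measurements. As a minimal illustration of how such an intra-measurer spread could be computed for one index, the following C sketch takes repeated Cobb angle readings (the values are invented, not study data) and reports their sample standard deviation.

        #include <math.h>
        #include <stdio.h>

        /* Sample standard deviation of n repeated measurements. */
        static double std_dev(const double *x, int n)
        {
            double mean = 0.0, ss = 0.0;
            for (int i = 0; i < n; i++) mean += x[i];
            mean /= n;
            for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
            return sqrt(ss / (n - 1));
        }

        int main(void)
        {
            /* Six repeated Cobb angle readings on the same radiograph
               (degrees, hypothetical values). */
            double cobb[] = {41.5, 42.3, 40.8, 41.9, 42.6, 41.1};
            printf("Cobb angle SD: %.2f deg\n", std_dev(cobb, 6));
            return 0;
        }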

  12. A Survey of Hardware and Software Technologies for the Rapid Development of Multimedia Instructional Modules

    ERIC Educational Resources Information Center

    Ganesan, Nanda

    2008-01-01

    A survey of hardware and software technologies was conducted to identify suitable technologies for the development of instructional modules representing various instructional approaches. The approaches modeled were short PowerPoint presentations, chalk-and-talk lectures, and software tutorials. The survey focused on identifying application…

  13. Temporary Shell Proof-of-Concept Technique: Digital-Assisted Workflow to Enable Customized Immediate Function in Two Visits in Partially Edentulous Patients

    PubMed

    Pozzi, Alessandro; Arcuri, Lorenzo; Moy, Peter K

    2018-03-01

    The growing interest in minimally invasive implant placement and delivery of a prefabricated provisional prosthesis immediately, thus minimizing "time to teeth," has led to the development of numerous 3-dimensional (3D) planning software programs. Given the enhancements associated with fully digital workflows, such as better 3D soft-tissue visualization and virtual tooth rendering, computer-guided implant surgery and immediate function has become an effective and reliable procedure. This article describes how modern implant planning software programs provide a comprehensive digital platform that enables efficient interplay between the surgical and restorative aspects of implant treatment. These new technologies that streamline the overall digital workflow allow transformation of the digital wax-up into a personalized, CAD/CAM-milled provisional restoration. Thus, collaborative digital workflows provide a novel approach for time-efficient delivery of a customized, screw-retained provisional restoration on the day of implant surgery, resulting in improved predictability for immediate function in the partially edentate patient.

  14. Engineering Software Suite Validates System Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

    EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.
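
    The abstract does not describe VectorGen's internals, but specification-based test generation can be illustrated generically. In the C sketch below, an executable specification of a toy 4-bit saturating adder is used both to enumerate test vectors and to check an implementation against them; the adder and all names are hypothetical and are not part of the EDAstar tool suite.

        #include <stdio.h>

        /* Executable specification: 4-bit add, saturating at 15. */
        static unsigned spec_sat_add(unsigned a, unsigned b)
        {
            unsigned s = a + b;
            return s > 15 ? 15 : s;
        }

        /* Implementation under test (here deliberately equivalent). */
        static unsigned impl_sat_add(unsigned a, unsigned b)
        {
            unsigned s = (a + b) & 0x1F;
            return s > 15 ? 15 : s;
        }

        int main(void)
        {
            int failures = 0;
            /* Generate every input vector and compare implementation
               output against the specification's expected output. */
            for (unsigned a = 0; a < 16; a++)
                for (unsigned b = 0; b < 16; b++)
                    if (impl_sat_add(a, b) != spec_sat_add(a, b)) {
                        printf("FAIL: a=%u b=%u\n", a, b);
                        failures++;
                    }
            printf("%d failure(s) over 256 test vectors\n", failures);
            return failures != 0;
        }

    For realistic input spaces the exhaustive loops would be replaced by sampled or constraint-driven vector generation, but the check-against-specification structure stays the same.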

  15. Validation of the Mobile Information Software Evaluation Tool (MISET) With Nursing Students.

    PubMed

    Secco, M Loretta; Furlong, Karen E; Doyle, Glynda; Bailey, Judy

    2016-07-01

    This study evaluated the Mobile Information Software Evaluation Tool (MISET) with a sample of Canadian undergraduate nursing students (N = 240). Psychometric analyses determined how well the MISET assesses the extent to which nursing students find mobile device-based information resources useful and supportive of learning in the clinical and classroom settings. The MISET has a valid three-factor structure with high explained variance (74.7%). Internal consistency reliabilities were high for the MISET total (.90) and the three subscales: Usefulness/Helpfulness, Information Literacy Support, and Use of Evidence-Based Sources (.87 to .94). Construct validity evidence included significantly higher mean total MISET, Helpfulness/Usefulness, and Information Literacy Support scores for senior students and those with higher computer competence. The MISET is a promising tool to evaluate mobile information technologies and information literacy support; however, longitudinal assessment of changes in scores over time would determine scale sensitivity and responsiveness.
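
    The internal-consistency reliabilities quoted above (.87 to .94) are Cronbach's alpha values. As a minimal sketch of how alpha is computed from an item-response matrix, the following C program applies the standard formula α = k/(k−1) · (1 − Σsᵢ²/s_T²) to invented Likert-scale data; the matrix and its values are hypothetical, not MISET responses.

        #include <stdio.h>

        #define N_RESP 5   /* respondents */
        #define N_ITEM 4   /* scale items */

        /* Sample variance of n values. */
        static double variance(const double *x, int n)
        {
            double mean = 0.0, ss = 0.0;
            for (int i = 0; i < n; i++) mean += x[i];
            mean /= n;
            for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
            return ss / (n - 1);
        }

        int main(void)
        {
            /* scores[respondent][item]: hypothetical 1-5 ratings. */
            double scores[N_RESP][N_ITEM] = {
                {4,5,4,4}, {3,3,4,3}, {5,5,5,4}, {2,3,2,3}, {4,4,5,4}
            };
            double item[N_RESP], total[N_RESP], sum_item_var = 0.0;

            /* Sum of per-item variances. */
            for (int j = 0; j < N_ITEM; j++) {
                for (int i = 0; i < N_RESP; i++) item[i] = scores[i][j];
                sum_item_var += variance(item, N_RESP);
            }
            /* Variance of each respondent's total score. */
            for (int i = 0; i < N_RESP; i++) {
                total[i] = 0.0;
                for (int j = 0; j < N_ITEM; j++) total[i] += scores[i][j];
            }
            double alpha = ((double)N_ITEM / (N_ITEM - 1))
                         * (1.0 - sum_item_var / variance(total, N_RESP));
            printf("Cronbach's alpha: %.3f\n", alpha);
            return 0;
        }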

  16. [Portable Epileptic Seizure Monitoring Intelligent System Based on Android System].

    PubMed

    Liang, Zhenhu; Wu, Shufeng; Yang, Chunlin; Jiang, Zhenzhou; Yu, Tao; Lu, Chengbiao; Li, Xiaoli

    2016-02-01

    Clinical electroencephalogram (EEG) monitoring systems based on personal computers cannot meet the requirements of portability and home usage. Epilepsy patients have to be monitored in the hospital for extended periods, which imposes a heavy burden on hospitals. In the present study, we designed a portable 16-lead networked monitoring system based on an Android smartphone. The system uses technologies including active electrodes, WiFi wireless transmission, the multi-scale permutation entropy (MPE) algorithm, and the back-propagation (BP) neural network algorithm. Moreover, the Android mobile application software realizes EEG data processing and analysis, EEG waveform display, and epileptic seizure alarms. The system was tested on mobile phones running Android 2.3 or higher, and the results showed that the software ran accurately and stably in detecting epileptic seizures. In conclusion, this paper provides a portable and reliable solution for epileptic seizure monitoring in clinical and home applications.
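
    Multi-scale permutation entropy, one of the detection features named above, reduces at each time scale to ordinary permutation entropy over ordinal patterns. The following single-scale C sketch (embedding dimension 3, invented sample values; this is our illustration, not the authors' code) shows the core computation. MPE would repeat it over progressively coarse-grained copies of the signal.

        #include <math.h>
        #include <stdio.h>

        #define M 3          /* embedding dimension (3! = 6 patterns) */
        #define N_PAT 6

        /* Lehmer code: rank of the ordinal pattern of w[0..M-1]
           among the M! possible permutations. */
        static int pattern_index(const double *w)
        {
            int idx = 0;
            for (int i = 0; i < M; i++) {
                int rank = 0;
                for (int j = i + 1; j < M; j++)
                    if (w[j] < w[i]) rank++;
                idx = idx * (M - i) + rank;
            }
            return idx;
        }

        int main(void)
        {
            /* Hypothetical EEG samples. */
            double x[] = {4.1, 7.2, 5.3, 6.8, 9.0, 2.4, 3.5, 6.1, 8.2, 5.0};
            int n = sizeof x / sizeof x[0];
            int count[N_PAT] = {0};
            int windows = n - M + 1;

            /* Histogram of ordinal patterns over sliding windows. */
            for (int t = 0; t < windows; t++)
                count[pattern_index(&x[t])]++;

            /* Shannon entropy of the pattern distribution. */
            double h = 0.0;
            for (int p = 0; p < N_PAT; p++) {
                if (count[p] == 0) continue;
                double pr = (double)count[p] / windows;
                h -= pr * log(pr);
            }
            /* Normalize by log(M!) so 0 <= h <= 1. */
            printf("normalized permutation entropy: %.3f\n", h / log(6.0));
            return 0;
        }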

  17. Federated software defined network operations for LHC experiments

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon

    2013-09-01

    The Large Hadron Collider (LHC), home to the best-known high-energy physics collaborations and grounded in e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being addressed by adopting an advanced Internet technology called software defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may yield enhanced data delivery performance through data traffic offloading based on delay variation. The evaluation results indicate that overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
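
    The abstract names data traffic offloading based on delay variation as the mechanism but does not specify it, so the following C sketch is only a toy illustration of one plausible rule: steer a flow onto the candidate path whose recent delay samples show the smallest variance. All paths and measurements are invented; a real SDN controller would gather such metrics through its monitoring plane.

        #include <stdio.h>

        #define N_PATHS   3
        #define N_SAMPLES 5

        int main(void)
        {
            /* Recent one-way delay samples per candidate path (ms). */
            double delay[N_PATHS][N_SAMPLES] = {
                {12.1, 12.4, 11.9, 12.2, 12.0},
                { 9.8, 14.2,  8.9, 13.5, 10.1},
                {15.0, 15.2, 14.9, 15.1, 15.0},
            };
            int best = 0;
            double best_var = 1e30;

            for (int p = 0; p < N_PATHS; p++) {
                double mean = 0.0, var = 0.0;
                for (int s = 0; s < N_SAMPLES; s++) mean += delay[p][s];
                mean /= N_SAMPLES;
                for (int s = 0; s < N_SAMPLES; s++)
                    var += (delay[p][s] - mean) * (delay[p][s] - mean);
                var /= N_SAMPLES - 1;   /* sample variance = jitter proxy */
                printf("path %d: delay variance %.3f ms^2\n", p, var);
                if (var < best_var) { best_var = var; best = p; }
            }
            printf("offload flow to path %d\n", best);
            return 0;
        }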

  18. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (also known as knowledge-based systems, KBS), including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involves a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromising system reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures, including the testbed currently being used to support the ongoing evaluations for the evolutionary Space Station Freedom Data Management System, is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high-performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  19. Video streaming technologies using ActiveX and LabVIEW

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Rat, C. L.; Panoiu, C.

    2015-06-01

    The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1][2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, capabilities for video streaming, and the methods it does offer are usually not high performance; however, it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming alongside LabVIEW and to capture the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.

  20. Evaluation of the adaptation of zirconia-based fixed partial dentures using micro-CT technology.

    PubMed

    Borba, Márcia; Miranda, Walter Gomes; Cesar, Paulo Francisco; Griggs, Jason Allan; Bona, Alvaro Della

    2013-01-01

    The objective of the study was to measure the marginal and internal fit of zirconia-based all-ceramic three-unit fixed partial dentures (FPDs) (Y-TZP; LAVA, 3M ESPE) using a novel methodology based on micro-computed tomography (micro-CT) technology. Stainless steel models of prepared abutments were fabricated to design the FPDs. Ten frameworks were produced with 9 mm² connector cross-sections using a LAVA™ CAD/CAM system. All FPDs were veneered with a compatible porcelain. Each FPD was seated on the original model and scanned using micro-CT. Files were processed using NRecon and CTAn software. Adobe Photoshop and ImageJ software were used to analyze the cross-sectional images. Five measuring points were selected, as follows: MG, marginal gap; CA, chamfer area; AW, axial wall; AOT, axio-occlusal transition area; OA, occlusal area. Results were statistically analyzed by the Kruskal-Wallis and Tukey's post hoc tests (α = 0.05). There were significant differences in gap width among the measurement points evaluated. MG showed the smallest median gap width (42 µm). OA had the largest median gap (125 µm), followed by AOT (105 µm). CA and AW gap widths were statistically similar, 66 and 65 µm respectively. Thus, it was possible to conclude that different levels of adaptation were observed within the FPDs at the different measuring points. In addition, micro-CT technology appears to be a reliable tool for evaluating the fit of dental restorations.
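
    The reported gap values are medians over many cross-sectional measurements. As a minimal sketch of that summary step, the following C program computes the median of a set of gap widths at one measuring point; the micrometer values are invented, not study data.

        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_double(const void *a, const void *b)
        {
            double d = *(const double *)a - *(const double *)b;
            return (d > 0) - (d < 0);
        }

        /* Median: middle value after sorting (mean of the two
           middle values when n is even). */
        static double median(double *x, int n)
        {
            qsort(x, n, sizeof *x, cmp_double);
            return n % 2 ? x[n / 2] : 0.5 * (x[n / 2 - 1] + x[n / 2]);
        }

        int main(void)
        {
            /* Hypothetical marginal gap measurements (micrometers). */
            double mg[] = {38.0, 45.5, 41.2, 44.0, 39.8, 47.1, 42.6};
            int n = sizeof mg / sizeof mg[0];
            printf("median marginal gap: %.1f um\n", median(mg, n));
            return 0;
        }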
