Sample records for software acceptance test

  1. The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance

    DTIC Science & Technology

    2010-05-01

Test Environment: A Process for Achieving Software Test Acceptance ... The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance, Patrick V...was awarded the Bronze Star. Introduction: The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office

  2. Proceedings of the Joint Logistics Commanders Joint Policy Coordinating Group on Computer Resource Management; Computer Software Management Software Workshop, 2-5 April 1979.

    DTIC Science & Technology

    1979-08-21

Appendix 8 - Outline and Draft Material for Proposed Triservice Interim Guideline on Application of Software Acceptance Criteria....... 269 Appendix 9...AND DRAFT MATERIAL FOR PROPOSED TRISERVICE INTERIM GUIDELINE ON APPLICATION OF SOFTWARE ACCEPTANCE CRITERIA I INTRODUCTION The purpose of this guide...contract item (CPCI) (code) 5. CPCI test plan 6. CPCI test procedures 7. CPCI test report 8. Handbooks and manuals. Although additional material does

  3. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    PubMed

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. 
Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
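The table-driven acceptance testing this record describes can be sketched in miniature. The Python sketch below is hypothetical: the rule function, its parameters, and the table rows are invented stand-ins for the study's clinician-reviewed spreadsheet tables, and the real FitNesse suite queried the EHR database rather than a local function.

```python
# Hypothetical sketch of table-driven acceptance testing in the style the
# abstract describes: each row of a clinician-reviewed table specifies one
# expected behavior of the CDS advisory; the harness checks every row.
# The rule logic and rows below are illustrative, not the study's configuration.

def swallow_screen_alert_fires(unit, order_route, stroke_suspected, screen_done):
    """Alert when an oral med is ordered in the ED for a suspected-stroke
    patient who has not yet had a swallowing assessment."""
    return (unit == "ED" and order_route == "oral"
            and stroke_suspected and not screen_done)

# Rows mirror spreadsheet columns: inputs..., expected alert?
TEST_TABLE = [
    ("ED",  "oral", True,  False, True),   # classic triggering case
    ("ED",  "oral", True,  True,  False),  # screen already done -> no alert
    ("ED",  "IV",   True,  False, False),  # non-oral route -> no alert
    ("ICU", "oral", True,  False, False),  # outside applicable setting
]

def run_table(table):
    """Return the rows whose actual behavior differs from the expected column."""
    failures = []
    for *inputs, expected in table:
        actual = swallow_screen_alert_fires(*inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

if __name__ == "__main__":
    print(run_table(TEST_TABLE))  # empty list when every row passes
```

As in the study's workflow, the same table serves as requirements before the build and as a regression suite after it: rows are added or vetted with clinicians, then re-run automatically.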

  4. Agile Acceptance Test–Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software

    PubMed Central

    Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-01-01

    Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test–driven development and automated regression testing promotes reliability. Test–driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test–driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test–driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. 
Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test–driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test–driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test–driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922

  5. Detection and avoidance of errors in computer software

    NASA Technical Reports Server (NTRS)

    Kinsler, Les

    1989-01-01

The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors, of which 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were then broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
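The categorization step this abstract describes, assigning each acceptance-test error a prescribed avoidance method and then summarizing, can be sketched as follows. The error records and counts here are invented for illustration and are not the GROAGSS data.

```python
# Minimal sketch (with hypothetical data) of the categorization described:
# each acceptance-test error is assigned one avoidance method, then the
# errors are tallied per method to see where earlier phases could help most.
from collections import Counter

AVOIDANCE_METHODS = {
    "clearer requirements", "detailed review", "code reading",
    "structural unit testing", "functional integration testing",
}

# (error id, assigned avoidance method) pairs -- illustrative only
errors = [
    (1, "code reading"), (2, "clearer requirements"),
    (3, "code reading"), (4, "structural unit testing"),
]

def summarize(errors):
    """Count errors per avoidance method, rejecting unknown categories."""
    counts = Counter(method for _, method in errors)
    assert set(counts) <= AVOIDANCE_METHODS, "unknown avoidance method"
    return counts

if __name__ == "__main__":
    print(summarize(errors))
```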

  6. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved; it is only approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  7. PDSS/IMC qualification test software acceptance procedures

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Tests to be performed for qualifying the payload development support system image motion compensator (IMC) are identified. The performance of these tests will verify the IMC interfaces and thereby verify the qualification test software.

  8. Cargo Movement Operations System (CMOS). Software Test Description

    DTIC Science & Technology

    1990-10-28


  9. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to prove an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
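The Acceptance Voting scheme named in this abstract is commonly described as filtering each version's output through an acceptance test before voting. A minimal sketch of that idea follows; the toy "versions," the tolerance, and the exact-match vote are assumptions for illustration, not the algorithms as specified in the paper.

```python
# Sketch of Acceptance Voting as commonly described in the software
# fault-tolerance literature: run all versions, discard outputs that fail
# an acceptance test, then take a majority vote among the survivors.
# The toy versions and acceptance test below are illustrative assumptions.

def acceptance_voting(versions, x, acceptance_test):
    accepted = []
    for version in versions:
        out = version(x)
        if acceptance_test(out):      # filter before voting
            accepted.append(out)
    if not accepted:
        raise RuntimeError("no version produced an acceptable output")
    # exact-match majority vote among the accepted results
    return max(set(accepted), key=accepted.count)

# Three toy 'versions' of a square-root computation; one is faulty.
v1 = lambda x: x ** 0.5
v2 = lambda x: x ** 0.5
v_bad = lambda x: -1.0                # faulty version
non_negative = lambda y: y >= 0       # acceptance test

if __name__ == "__main__":
    print(acceptance_voting([v1, v2, v_bad], 9.0, non_negative))  # 3.0
```

The safety property the abstract highlights is visible even in this toy: the faulty output is rejected by the acceptance test before it can influence the vote.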

  10. The Alignment of Software Testing Skills of IS Students with Industry Practices--A South African Perspective

    ERIC Educational Resources Information Center

    Scott, Elsje; Zadirov, Alexander; Feinberg, Sean; Jayakody, Ruwanga

    2004-01-01

    Software testing is a crucial component in the development of good quality systems in industry. For this reason it was considered important to investigate the extent to which the Information Systems (IS) syllabus at the University of Cape Town (UCT) was aligned with accepted software testing practices in South Africa. For students to be effective…

  11. Contracting for Computer Software in Standardized Computer Languages

    PubMed Central

    Brannigan, Vincent M.; Dayhoff, Ruth E.

    1982-01-01

    The interaction between standardized computer languages and contracts for programs which use these languages is important to the buyer or seller of software. The rationale for standardization, the problems in standardizing computer languages, and the difficulties of determining whether the product conforms to the standard are issues which must be understood. The contract law processes of delivery, acceptance testing, acceptance, rejection, and revocation of acceptance are applicable to the contracting process for standard language software. Appropriate contract language is suggested for requiring strict compliance with a standard, and an overview of remedies is given for failure to comply.

  12. TPADANA 2.0: draft user's manual of TPAD data analysis software.

    DOT National Transportation Integrated Search

    2016-08-01

The Total Pavement Acceptance Device (TPAD) is a continuous pavement deflection test device. Since the device is designed for total acceptance of pavements, the researchers have combined the deflection testing with Ground Penetrating Radar (GPR),...

  13. Standard practices for the implementation of computer software

    NASA Technical Reports Server (NTRS)

    Irvine, A. P. (Editor)

    1978-01-01

A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.

  14. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
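The error-detection idea this record describes, an acceptance test based solely on empirical data about a program's internal states, can be sketched simply: record the range of an internal state over trusted runs, then flag any later run whose state falls outside it. The class name, the chosen state, and the sample values below are illustrative assumptions, not the experiment's actual instrumentation.

```python
# Sketch of an empirical internal-state acceptance test: learn the observed
# range of an internal program state from trusted runs, then accept or
# reject later runs by whether the state stays within that range.

class StateRangeCheck:
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def learn(self, value):
        """Widen the accepted range from a trusted (training) run."""
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def accept(self, value):
        """Acceptance test: is this state within previously observed behavior?"""
        return self.lo <= value <= self.hi

check = StateRangeCheck()
for v in [0.2, 0.9, 0.5]:          # internal-state samples from good runs
    check.learn(v)

if __name__ == "__main__":
    print(check.accept(0.7))       # True: within observed behavior
    print(check.accept(3.0))       # False: flagged as a likely error
```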

  15. 76 FR 54800 - International Business Machines (IBM), Software Group Business Unit, Quality Assurance Group, San...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... Machines (IBM), Software Group Business Unit, Quality Assurance Group, San Jose, California; Notice of... workers of International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA... February 2, 2011 (76 FR 5832). The subject worker group supplies acceptance testing services, design...

  16. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
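The two fault-tolerant structures this abstract compares can be sketched side by side. The components and the acceptance check below are toy stand-ins, not the experiment's 20 redundant programs; note how the recovery block depends on an acceptance check that fails independently of the components, which is exactly the condition the authors identify for its reliability gain.

```python
# Sketch contrasting the two structures compared in the abstract:
# N-version programming (majority vote over all outputs) versus a
# recovery block (try alternates in order until one passes an
# independent acceptance check). Components here are toy stand-ins.

def n_version(versions, x):
    """Majority vote over all version outputs (exact-match vote)."""
    outs = [v(x) for v in versions]
    return max(set(outs), key=outs.count)

def recovery_block(versions, x, acceptance_check):
    """Return the first alternate's output that passes the check."""
    for v in versions:
        out = v(x)
        if acceptance_check(out):
            return out
    raise RuntimeError("all alternates failed the acceptance check")

double_ok = lambda x: 2 * x
double_bad = lambda x: 2 * x + 1      # faulty alternate
is_even = lambda y: y % 2 == 0        # independent acceptance check

if __name__ == "__main__":
    print(n_version([double_ok, double_bad, double_ok], 3))     # 6
    print(recovery_block([double_bad, double_ok], 3, is_even))  # 6
```

With coincident failures, a majority of versions can out-vote the correct minority; the recovery block survives as long as the acceptance check itself does not fail along with the components.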

  17. An assessment of space shuttle flight software development processes

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.

  18. Tank Monitor and Control System (TMACS) Rev 11.0 Acceptance Test Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HOLM, M.J.

The purpose of this document is to describe tests performed to validate Revision 11 of the Tank Monitor and Control System (TMACS) and verify that the software functions as intended by design. This document is intended to test the software portion of TMACS. The tests will be performed on the development system. The software to be tested is the TMACS knowledge bases (KB) and the I/O driver/services. The development system will not be talking to field equipment; instead, the field equipment is simulated using emulators or multiplexers in the lab.

  19. Product assurance policies and procedures for flight dynamics software development

    NASA Technical Reports Server (NTRS)

    Perry, Sandra; Jordan, Leon; Decker, William; Page, Gerald; Mcgarry, Frank E.; Valett, Jon

    1987-01-01

    The product assurance policies and procedures necessary to support flight dynamics software development projects for Goddard Space Flight Center are presented. The quality assurance and configuration management methods and tools for each phase of the software development life cycles are described, from requirements analysis through acceptance testing; maintenance and operation are not addressed.

  20. 242A Distributed Control System Year 2000 Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEATS, M.C.

    1999-08-31

This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. The test results were obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure will document the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic such that it may be performed on any D/3 (GSE Process Solutions, Inc.) distributed control system that runs with the VMS (Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.
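The calendar behavior this procedure targets, the year-2000 boundary and the leap-year adjustment, can be checked with a few date-arithmetic assertions. This sketch uses Python's standard library and is, of course, not the HNF-2695 procedure itself, which exercised the DCS operating system rather than a language runtime.

```python
# Illustrative sketch of the calendar checks the test procedure targets:
# roll across the year-2000 boundary and confirm leap-day arithmetic.
from datetime import date, timedelta

def next_day(d):
    """Advance a calendar date by one day."""
    return d + timedelta(days=1)

# Year-2000 boundary: 1999-12-31 -> 2000-01-01
assert next_day(date(1999, 12, 31)) == date(2000, 1, 1)
# 2000 is a leap year (divisible by 400): Feb 28 -> Feb 29 -> Mar 1
assert next_day(date(2000, 2, 28)) == date(2000, 2, 29)
assert next_day(date(2000, 2, 29)) == date(2000, 3, 1)

if __name__ == "__main__":
    print("calendar checks passed")
```

The 2000 leap year was a common trap in Y2K testing: a system applying only the "divisible by 100 is not a leap year" rule, without the divisible-by-400 exception, would skip February 29, 2000.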

  1. Software Quality Assurance and Verification for the MPACT Library Generation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Williams, Mark L.; Wiarda, Dorothea

This report fulfills the requirements for the Consortium for the Advanced Simulation of Light-Water Reactors (CASL) milestone L2:RTM.P14.02, “SQA and Verification for MPACT Library Generation,” by documenting the current status of the software quality, verification, and acceptance testing of nuclear data libraries for MPACT. It provides a brief overview of the library generation process, from general-purpose evaluated nuclear data files (ENDF/B) to a problem-dependent cross section library for modeling of light-water reactors (LWRs). The software quality assurance (SQA) programs associated with each software package used to generate the nuclear data libraries are discussed; specific tests within the SCALE/AMPX and VERA/XSTools repositories are described. The methods and associated tests to verify the quality of the library during the generation process are described in detail. The library generation process has been automated to a degree, to ensure (1) that it can be run without user intervention and (2) that the library can be reproduced. Finally, the acceptance testing process that will be performed by representatives from the Radiation Transport Methods (RTM) Focus Area prior to the production library’s release is described in detail.

  2. 48 CFR 1845.7101-3 - Unit acquisition cost.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... services for designs, plans, specifications, and surveys. (6) Acquisition and preparation costs of... acquisition cost is under $100,000, it shall be reported as under $100,000. (g) Software acquisition costs include software costs incurred up through acceptance testing and material internal costs incurred to...

  3. MoniQA: a general approach to monitor quality assurance

    NASA Astrophysics Data System (ADS)

    Jacobs, J.; Deprez, T.; Marchal, G.; Bosmans, H.

    2006-03-01

MoniQA ("Monitor Quality Assurance") is a new, non-commercial, independent quality assurance software application developed in our medical physics team. It is a complete Java-based modular environment for the evaluation of radiological viewing devices, and it thus fits in the global quality assurance network of our (filmless) radiology department. The purpose of the software tool is to guide the medical physicist through an acceptance protocol and the radiologist through a constancy check protocol by presentation of the necessary test patterns and by automated data collection. Data are then sent to a central management system for further analysis. At the moment more than 55 patterns have been implemented, which can be grouped in schemes to implement protocols (e.g., AAPM TG18, DIN and EUREF). Some test patterns are dynamically created and 'drawn' on the viewing device with random parameters, as is the case in a recently proposed new pattern for constancy testing. The software is installed on 35 diagnostic stations (70 monitors) in a filmless radiology department. Learning time was very limited. A constancy check with the new pattern, which assesses luminance decrease, resolution problems and geometric distortion, takes only 2 minutes and 28 seconds per monitor. The modular approach of the software allows the evaluation of new or emerging test patterns. We will report on the software and its usability: the practicality of the constancy check tests in our hospital, and the results from acceptance tests of viewing stations for digital mammography.

  4. NCCS Regression Test Harness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tharrington, Arnold N.

    2015-09-09

    The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.
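This record notes the harness is written in Python. A minimal harness of the same general shape might run each registered test command and compare its output against a stored baseline; the test definitions below are invented for illustration and are not part of the NCCS package.

```python
# Minimal sketch of a baseline-comparison regression harness: run each
# test's command, compare its stdout to the stored expected output,
# and report pass/fail per test. Test entries here are illustrative.
import subprocess

TESTS = [
    # (name, command, expected stdout)
    ("echo", ["echo", "hello"], "hello\n"),
]

def run_regression(tests):
    """Execute every test command and compare output to its baseline."""
    results = {}
    for name, cmd, expected in tests:
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        results[name] = (out == expected)
    return results

if __name__ == "__main__":
    print(run_regression(TESTS))
```

A real acceptance-testing harness on an HPC system would additionally submit jobs through the batch scheduler and tolerate queue latency; this sketch shows only the compare-to-baseline core.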

  5. Recommended approach to software development, revision 3

    NASA Technical Reports Server (NTRS)

    Landis, Linda; Waligora, Sharon; Mcgarry, Frank; Pajerski, Rose; Stark, Mike; Johnson, Kevin Orlin; Cover, Donna

    1992-01-01

Guidelines for an organized, disciplined approach to software development that is based on studies conducted by the Software Engineering Laboratory (SEL) since 1976 are presented. The document describes methods and practices for each phase of a software development life cycle that starts with requirements definition and ends with acceptance testing. For each defined life cycle phase, guidelines are presented for the development process and its management, and for the products produced and their reviews.

  6. SigmaPlot 2000, Version 6.00, SPSS Inc. Computer Software Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HURLBUT, S.T.

    2000-10-24

SigmaPlot is a vendor software product used in conjunction with the supercritical fluid extraction Fourier transform infrared spectrometer (SFE-FTIR) system. This product converts the raw spectral data to useful area numbers. SigmaPlot will be used in conjunction with procedure ZA-565-301, "Determination of Moisture by Supercritical Fluid Extraction and Infrared Detection." This test plan will be performed in conjunction with or prior to HNF-6936, "HA-53 Supercritical Fluid Extraction System Acceptance Test Plan", to perform analyses for water. The test will ensure that the software can be installed properly and will manipulate the analytical data correctly.

  7. Cargo Movement Operations System (CMOS) Preliminary Software Test Description, Increment II

    DTIC Science & Technology

    1991-06-26


  8. Wall adjustment strategy software for use with the NASA Langley 0.3-meter transonic cryogenic tunnel adaptive wall test section

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user-friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type, so this entire software package could be used in different control systems if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software, and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, the operating environment, user options, and what to expect at execution.

  9. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.
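    As an illustration of the consensus voting scheme evaluated above, a minimal sketch follows; the function name and strict-majority rule are illustrative assumptions, not the authors' implementation, and a real fault-tolerant system would fall back to an acceptance test or recovery block when no consensus exists.

```python
from collections import Counter

def consensus_vote(outputs):
    """Majority (consensus) voting over the outputs of N independently
    developed software versions. Returns the agreed result if a strict
    majority of versions agree, otherwise None (no consensus)."""
    winner, votes = Counter(outputs).most_common(1)[0]
    return winner if votes > len(outputs) / 2 else None

# Three-version example: two versions agree, one is faulty.
result = consensus_vote([42, 42, 41])        # consensus: 42
no_consensus = consensus_vote([1, 2, 3])     # all disagree: None
```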

  10. Development and application of an acceptance testing model

    NASA Technical Reports Server (NTRS)

    Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.

    1992-01-01

    The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.
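    The kind of empirical planning model described can be sketched as follows; the constant per-tester productivity and the function name are assumptions for illustration only, not the paper's actual model.

```python
def plan_acceptance_test(total_tests, tests_per_staff_week, staff):
    """Toy planning model: given historical productivity (acceptance
    tests executed per staff-week), predict the weekly pass rate and
    the calendar weeks needed to complete testing."""
    weekly_rate = tests_per_staff_week * staff   # tests executed per week
    weeks = total_tests / weekly_rate            # calendar time to finish
    return {"weekly_rate": weekly_rate, "weeks": weeks}

# 600 tests, 15 tests per staff-week, 5 testers -> 75 tests/week, 8 weeks.
plan = plan_acceptance_test(total_tests=600, tests_per_staff_week=15, staff=5)
```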

  11. Cargo Movement Operations System (CMOS) Draft Software Test Plan. Increment II

    DTIC Science & Technology

    1991-02-14

    The retrieved excerpt consists only of comment disposition form entries (accept/reject checkboxes, open/closed comment status, and originator and program office control numbers such as STP-0003 and STP-0004).

  12. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
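    The red-green alternation described above can be sketched with a minimal example; the conversion function and test names are hypothetical, and Python's `unittest` stands in for the unit-testing frameworks the abstract mentions.

```python
import unittest

# Red step: the tests below are written first and initially fail,
# because celsius_to_kelvin does not yet exist.
def celsius_to_kelvin(c):
    # Green step: the simplest implementation that makes the tests pass.
    return c + 273.15

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertAlmostEqual(celsius_to_kelvin(0.0), 273.15)

    def test_absolute_zero(self):
        self.assertAlmostEqual(celsius_to_kelvin(-273.15), 0.0)

# Run with:  python -m unittest <module>
```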

  13. Guidelines for testing and release procedures

    NASA Technical Reports Server (NTRS)

    Molari, R.; Conway, M.

    1984-01-01

    Guidelines and procedures are recommended for the testing and release of the types of computer software efforts commonly performed at NASA/Ames Research Center. All recommendations are based on the premise that testing and release activities must be specifically selected for the environment, size, and purpose of each individual software project. Guidelines are presented for building a Test Plan and using formal Test Plan and Test Case Inspections on it. Frequent references are made to NASA/Ames Guidelines for Software Inspections. Guidelines are presented for selecting an Overall Test Approach and for each of the four main phases of testing: (1) Unit Testing of Components, (2) Integration Testing of Components, (3) System Integration Testing, and (4) Acceptance Testing. Tools used for testing are listed, including those available from operating systems used at Ames, specialized tools which can be developed, unit test drivers, stub module generators, and the use of formal test reporting schemes.

  14. WRAP low level waste restricted waste management (LLW RWM) glovebox acceptance test report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leist, K.J.

    1997-11-24

    On April 22, 1997, the Low Level Waste Restricted Waste Management (LLW RWM) glovebox was tested using acceptance test procedure 13027A-87. Mr. Robert L. Warmenhoven served as test director, Mr. Kendrick Leist acted as test operator and test witness, and Michael Lane provided miscellaneous software support. The primary focus of the glovebox acceptance test was to examine glovebox control system interlocks, Operator Interface Unit (OIU) menus, alarms, and messages. Basic drum port and lift table control sequences were demonstrated. OIU menus, messages, and alarm sequences were examined, with few exceptions noted. Bar code testing was bypassed, due to the lack of installed equipment as well as the switch from basic reliance on fixed bar code readers to the enhanced use of portable bar code readers. Bar code testing was completed during performance of the LLW RWM OTP. Mechanical and control deficiencies were documented as Test Exceptions during performance of this Acceptance Test. These items are attached as Appendix A to this report.

  15. Recommended approach to software development

    NASA Technical Reports Server (NTRS)

    Mcgarry, F. E.; Page, J.; Eslinger, S.; Church, V.; Merwarth, P.

    1983-01-01

    A set of guidelines is presented for an organized, disciplined approach to software development, based on data collected and studied for 46 flight dynamics software development projects. Methods and practices for each phase of a software development life cycle that starts with requirements analysis and ends with acceptance testing are described; maintenance and operation are not addressed. For each defined life cycle phase, guidelines for the development process and its management, and for the products produced and their reviews, are presented.

  16. Simplified three microphone acoustic test method

    USDA-ARS?s Scientific Manuscript database

    Accepted acoustic testing standards are available; however, they require specialized hardware and software that are typically out of reach economically to the occasional practitioner. What is needed is a simple and inexpensive screening method that could provide a quick comparison for rapid identifi...

  17. Acceptance test procedure bldg. 271-U remote monitoring of project W-059 B-Plant canyon exhaust system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MCDANIEL, K.S.

    1999-09-01

    The test procedure provides for verifying indications and alarms associated with the B Plant Canyon Ventilation System as they are displayed on a remote monitoring workstation located in building 271-U. The system application software was installed by PLCS Plus under contract from B&W Hanford Company. The application software was installed on an existing operator workstation in building 271-U, which is owned and operated by Bechtel Hanford Inc.

  18. Test/score/report: Simulation techniques for automating the test process

    NASA Technical Reports Server (NTRS)

    Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.

    1994-01-01

    A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, the task of acceptance testing, regression testing, and repeatability of specific test procedures of a ground data system can be a simpler task. Ideally, the goal for complete automation would be to plug the operational deliverable into the simulator, press the start button, execute the test procedure, accumulate and analyze the data, score the results, and report them to the test team along with a go/no-go recommendation. In practice, this may not be possible because of inadequate test tools, pressures of schedules, limited resources, etc. Most tests are accomplished using a certain degree of automation and test procedures that are labor intensive. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and currently being used to test some POCC/MOC software deliveries.
When this capability is fully operational it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
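    The execute/score/report loop described above can be sketched as follows; the function name, step names, and the all-or-nothing go/no-go rule are illustrative assumptions, not TASS internals.

```python
def run_and_score(steps):
    """Execute a sequence of (name, check) test steps, accumulate
    pass/fail results, compute a score, and report a go/no-go
    recommendation (illustrative sketch of a test harness)."""
    results = [(name, bool(check())) for name, check in steps]
    passed = sum(ok for _, ok in results)
    score = 100.0 * passed / len(results)
    return {
        "results": results,
        "score": score,
        "recommendation": "go" if score == 100.0 else "no-go",
    }

report = run_and_score([
    ("telemetry decommutation", lambda: 2 + 2 == 4),
    ("command processing", lambda: "CMD".lower() == "cmd"),
    ("memory load/dump", lambda: len(b"\x00" * 8) == 8),
])
```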

  19. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  20. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  1. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  2. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  3. 48 CFR 227.7203-14 - Conformity, acceptance, and warranty of computer software and computer software documentation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., and warranty of computer software and computer software documentation. 227.7203-14 Section 227.7203-14... GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Rights in Computer Software and Computer Software Documentation 227.7203-14 Conformity, acceptance, and warranty of computer software and computer...

  4. When is Testing Sufficient

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda H.; Arthur, James D.; Stapko, Ruth K.; Davani, Darush

    1999-01-01

    The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center has been investigating how projects can determine when sufficient testing has been completed. For most projects, schedules are underestimated, and testing, the last phase of software development, must be compressed. Two questions are frequently asked: "To what extent is the software error-free?" and "How much time and effort is required to detect and remove the remaining errors?" Clearly, neither question can be answered with absolute certainty. Nonetheless, the ability to answer these questions with some acceptable level of confidence is highly desirable. First, knowing the extent to which a product is error-free, we can judge when it is time to terminate testing. Secondly, if errors are judged to be present, we can perform a cost/benefit trade-off analysis to estimate when the software will be ready for use and at what cost. This paper explains the efforts of the SATC to help projects determine what is sufficient testing and when is the most cost-effective time to stop testing.
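    One widely used way to estimate remaining errors from test data (not necessarily the SATC's method) is the capture-recapture estimator: two independent inspections or test activities find overlapping defect sets, and the overlap bounds the total population. A minimal sketch, with illustrative numbers:

```python
def lincoln_petersen(found_by_a, found_by_b, found_by_both):
    """Capture-recapture (Lincoln-Petersen) estimate of the total
    defect population, from two independent defect-finding activities."""
    if found_by_both == 0:
        raise ValueError("estimator undefined with no overlap")
    return (found_by_a * found_by_b) / found_by_both

# Team A found 20 defects, team B found 15, and 10 were found by both.
total = lincoln_petersen(found_by_a=20, found_by_b=15, found_by_both=10)
remaining = total - (20 + 15 - 10)   # estimated defects not yet found
# total = 30.0, remaining = 5.0
```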

  5. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
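    One respect in which scientific software differs, as noted above, is that numerical results rarely match expectations exactly, so unit tests must assert agreement within a tolerance rather than equality. A minimal sketch (the trapezoid-rule routine and the chosen tolerance are illustrative assumptions):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule -- a typical small numerical unit
    that a test-driven scientific developer might write."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

# Numerical tests assert closeness within a tolerance, not equality:
# the integral of x^2 on [0, 1] is exactly 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
assert math.isclose(approx, 1.0 / 3.0, rel_tol=1e-5)
```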

  6. Analyzing the test process using structural coverage

    NASA Technical Reports Server (NTRS)

    Ramsey, James; Basili, Victor R.

    1985-01-01

    A large, commercially developed FORTRAN program was modified to produce structural coverage metrics. The modified program was executed on a set of functionally generated acceptance tests and a large sample of operational usage cases. The resulting structural coverage metrics are combined with fault and error data to evaluate structural coverage. It was shown that in this software environment the functionally generated tests seem to be a good approximation of operational use. The relative proportions of the exercised statement subclasses change as the structural coverage of the program increases. A method was also proposed for evaluating whether two sets of input data exercise a program in a similar manner. Evidence was provided that implies that in this environment, faults revealed in a procedure are independent of the number of times the procedure is executed and that it may be reasonable to use procedure coverage in software models that use statement coverage. Finally, the evidence suggests that it may be possible to use structural coverage to aid in the management of the acceptance test process.
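    The idea of comparing how similarly two input sets exercise a program can be sketched with a set-similarity measure over executed-statement identifiers; the Jaccard index used here is an assumed stand-in, not the paper's actual comparison method.

```python
def coverage_similarity(cov_a, cov_b):
    """Jaccard similarity of two executed-statement sets: 1.0 means the
    input sets exercised exactly the same statements, 0.0 means none
    in common (an assumed stand-in for the paper's method)."""
    a, b = set(cov_a), set(cov_b)
    return len(a & b) / len(a | b)

# Statement ids executed by acceptance tests vs. operational usage.
acceptance = [1, 2, 3, 5, 8]
operational = [1, 2, 3, 5, 13]
sim = coverage_similarity(acceptance, operational)   # 4 shared of 6 total
```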

  7. Acceptance test report for portable exhauster POR-007/Skid E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriskovich, J.R.

    1998-07-24

    This document describes Acceptance Testing performed on Portable Exhauster POR-007/Skid E. It includes measurements of bearing vibration levels, pressure decay testing, programmable logic controller interlocks, high vacuum, flow and pressure control functional testing. The purpose of Acceptance Testing documented by this report was to demonstrate compliance of the exhausters with the performance criteria established within HNF-0490, Rev. 1 following a repair and upgrade effort at Hanford. In addition, data obtained during this testing is required for the resolution of outstanding Non-conformance Reports (NCR), and finally, to demonstrate the functionality of the associated software for the pressure control and high vacuum exhauster operating modes provided for by W-320. Additional testing not required by the ATP was also performed to assist in the disposition and close out of receiving inspection report and for application design information (system curve). Results of this testing are also captured within this document.

  8. Application of QC_DR software for acceptance testing and routine quality control of direct digital radiography systems: initial experiences using the Italian Association of Physicist in Medicine quality control protocol.

    PubMed

    Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa

    2009-12-01

    Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment from different DDR manufacturers are reported. The protocol was developed to highlight even small discrepancies in standard operating performance.

  9. General Mission Analysis Tool (GMAT) Acceptance Test Plan [Draft

    NASA Technical Reports Server (NTRS)

    Dove, Edwin; Hughes, Steve

    2007-01-01

    The information presented in this Acceptance Test Plan document shows the current status of the General Mission Analysis Tool (GMAT). GMAT is a software system developed by NASA Goddard Space Flight Center (GSFC) in collaboration with the private sector. The GMAT development team continuously performs acceptance tests in order to verify that the software continues to operate properly after updates are made. The GMAT Development team consists of NASA/GSFC Code 583 software developers, NASA/GSFC Code 595 analysts, and contractors of varying professions. GMAT was developed to provide a development approach that maintains involvement from the private sector and academia, encourages collaborative funding from multiple government agencies and the private sector, and promotes the transfer of technology from government funded research to the private sector. GMAT contains many capabilities, such as integrated formation flying modeling and MATLAB compatibility. The propagation capabilities in GMAT allow for fully coupled dynamics modeling of multiple spacecraft, in any flight regime. Other capabilities in GMAT include: user definable coordinate systems, 3-D graphics in any coordinate system GMAT can calculate, 2-D plots, branch commands, solvers, optimizers, GMAT functions, planetary ephemeris sources including DE405, DE200, SLP and analytic models, script events, impulsive and finite maneuver models, and many more. GMAT runs on Windows, Mac, and Linux platforms. Both the Graphical User Interface (GUI) and the GMAT engine were built and tested on all of the mentioned platforms. GMAT was designed for intuitive use from both the GUI and with an importable script language similar to that of MATLAB.

  10. Simplified through-transmission test method for determination of a material's acoustic properties

    USDA-ARS?s Scientific Manuscript database

    Accepted acoustic testing standards are available; however, they require specialized hardware and software that are typically out of reach economically to the occasional practitioner. What is needed is a simple and inexpensive screening method that can provide a quick comparison for rapid identifica...

  11. 40 CFR 86.1830-01 - Acceptance of vehicles for emission testing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... good engineering judgment. (3) Test vehicles must have air conditioning installed and operational if... whole-vehicle cycle, all emission-related hardware and software must be installed and operational during.... Manufacturers shall use good engineering judgment in making such determinations. (c) Special provisions for...

  12. 40 CFR 86.1830-01 - Acceptance of vehicles for emission testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... good engineering judgment. (3) Test vehicles must have air conditioning installed and operational if... whole-vehicle cycle, all emission-related hardware and software must be installed and operational during.... Manufacturers shall use good engineering judgment in making such determinations. (c) Special provisions for...

  13. 40 CFR 86.1830-01 - Acceptance of vehicles for emission testing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... good engineering judgment. (3) Test vehicles must have air conditioning installed and operational if... whole-vehicle cycle, all emission-related hardware and software must be installed and operational during.... Manufacturers shall use good engineering judgment in making such determinations. (c) Special provisions for...

  14. Automation Hooks Architecture Trade Study for Flexible Test Orchestration

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.

    2010-01-01

    We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.

  15. Please Reduce Cycle Time

    DTIC Science & Technology

    2014-12-01

    observed an ERP system implementation that encountered this exact model. The modified COTS software worked and passed the acceptance tests but never... software-intensive program. We decided to create a very detailed master schedule with multiple supporting subschedules that linked and Implementing ...processes in place as part of the COTS implementation. For hardware, COTS can also present some risks. Many programs use COTS computers and servers

  16. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
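    The classification task described above can be sketched with a deliberately naive stand-in; the threshold rule, function name, and metrics below are illustrative assumptions only, since the actual Optimized Set Reduction approach learns predictive metric patterns from historical data rather than applying fixed cutoffs.

```python
def classify_components(components, loc_threshold=300, churn_threshold=5):
    """Naive stand-in for a fault-proneness classifier: flag a component
    as high-risk when simple size or change-history metrics exceed fixed
    thresholds, so testing effort can be concentrated on the flagged set."""
    risk = {}
    for name, metrics in components.items():
        risky = (metrics["loc"] > loc_threshold
                 or metrics["changes"] > churn_threshold)
        risk[name] = "high-risk" if risky else "low-risk"
    return risk

# Hypothetical Ada components with size and churn metrics.
risk = classify_components({
    "telemetry.adb": {"loc": 1200, "changes": 9},
    "utils.adb": {"loc": 150, "changes": 1},
})
```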

  17. Optimal Sample Size Determinations for the Heteroscedastic Two One-Sided Tests of Mean Equivalence: Design Schemes and Software Implementations

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2017-01-01

    Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…
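    The heteroscedastic TOST statistics can be sketched as follows: two one-sided Welch t statistics against the equivalence margins, with Welch-Satterthwaite degrees of freedom. This is a generic sketch of the standard procedure, not the authors' software; each statistic would be compared against a critical t value at the chosen alpha to declare equivalence.

```python
import math

def welch_tost(sample1, sample2, margin):
    """Two one-sided (Welch) t statistics for mean equivalence under
    unequal variances, plus the Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)
    t_lower = (m1 - m2 + margin) / se     # H0: diff <= -margin
    t_upper = (m1 - m2 - margin) / se     # H0: diff >= +margin
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
    )
    return t_lower, t_upper, df

t_lo, t_hi, df = welch_tost([1, 2, 3, 4, 5],
                            [1.5, 2.5, 3.5, 4.5, 5.5], margin=1.0)
```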

  18. Stand Alone Pressure Measurement Device (SAPMD) for the space shuttle Orbiter, part 2

    NASA Technical Reports Server (NTRS)

    Tomlinson, Bill

    1989-01-01

    The Stand Alone Pressure Measurement Device (SAPMD) specifications are examined. The HP.SAPMD GSE software is listed; the HP/SGA readme program is presented; and the SAPMD acceptance test procedure is described.

  19. Statewide test of construction quality index for pavement software : final report, October 2008.

    DOT National Transportation Integrated Search

    2008-10-01

    All Florida Department of Transportation (FDOT) pavement projects are accepted in accordance with : one or more construction specifications. The purposes of these specifications are to provide guidance : and establish minimum requirements that enable...

  20. Performance evaluation of the RITG148+ set of TomoTherapy quality assurance tools using RTQA2 radiochromic film.

    PubMed

    Lobb, Eric C

    2016-07-08

    Version 6.3 of the RITG148+ software package offers eight automated analysis routines for quality assurance of the TomoTherapy platform. A performance evaluation of each routine was performed in order to compare RITG148+ results with traditionally accepted analysis techniques and verify that simulated changes in machine parameters are correctly identified by the software. Reference films were exposed according to AAPM TG-148 methodology for each routine and the RITG148+ results were compared with either alternative software analysis techniques or manual analysis techniques in order to assess baseline agreement. Changes in machine performance were simulated through translational and rotational adjustments to subsequently irradiated films, and these films were analyzed to verify that the applied changes were accurately detected by each of the RITG148+ routines. For the Hounsfield unit routine, an assessment of the "Frame Averaging" functionality and the effects of phantom roll on the routine results are presented. All RITG148+ routines reported acceptable baseline results consistent with alternative analysis techniques, with 9 of the 11 baseline test results showing agreement of 0.1 mm/0.1° or better. Simulated changes were correctly identified by the RITG148+ routines within approximately 0.2 mm/0.2° with the exception of the Field Center vs. Jaw Setting routine, which was found to have limited accuracy in cases where field centers were not aligned for all jaw settings due to inaccurate autorotation of the film during analysis. The performance of the RITG148+ software package was found to be acceptable for introduction into our clinical environment as an automated alternative to traditional analysis techniques for routine TomoTherapy quality assurance testing.

  1. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  2. Fault Tree Analysis Application for Safety and Reliability

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    Many commercial software tools exist for fault tree analysis (FTA), an accepted method for mitigating risk in systems. The method embedded in the tools identifies root causes in system components, but when software is identified as a root cause, the tools do not build trees into the software component. No commercial software tools have been built specifically for development and analysis of software fault trees. Research indicates that the methods of FTA could be applied to software, but the method is not practical without automated tool support. With appropriate automated tool support, software fault tree analysis (SFTA) may be a practical technique for identifying the underlying cause of software faults that may lead to critical system failures. We strive to demonstrate that existing commercial tools for FTA can be adapted for use with SFTA, and that applied to a safety-critical system, SFTA can be used to identify serious potential problems long before integration and system testing.
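    The quantitative core of FTA can be sketched in a few lines: basic-event probabilities are combined upward through AND and OR gates to give the top-event probability. The gate functions and the example tree below are illustrative assumptions (independence of basic events is assumed throughout).

```python
def ft_and(probs):
    """AND gate: all input events must occur (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def ft_or(probs):
    """OR gate: at least one input event occurs (independent events)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# Hypothetical top event: (sensor fault AND voter fault) OR software fault.
top = ft_or([ft_and([0.01, 0.02]), 0.001])
```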

  3. Test Driven Development: Lessons from a Simple Scientific Model

    NASA Astrophysics Data System (ADS)

    Clune, T. L.; Kuo, K.

    2010-12-01

    In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.

  4. AZ-101 Mixer Pump Test Qualification Test Procedures (QTP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    THOMAS, W.K.

    2000-01-10

    Describes the qualification test procedure for the AZ-101 Mixer Pump Data Acquisition System (DAS). The purpose of this Qualification Test Procedure (QTP) is to confirm that the AZ-101 Mixer Pump System has been properly programmed and its hardware configured correctly. This QTP will test the software setpoints for the alarms and also check the wiring configuration from the SIMcart to the HMI. An Acceptance Test Procedure (ATP), similar to this QTP, will be performed to test field devices and connections from the field.

  5. Characterization of a Two-Stage Pulse Tube Cooler for Space Applications

    NASA Astrophysics Data System (ADS)

    Orsini, R.; Nguyen, T.; Colbert, R.; Raab, J.

    2010-04-01

    A two-stage long-life, low-mass and efficient pulse tube cooler for space applications has been developed and acceptance tested for flight applications. This paper presents the data collected on four flight coolers during acceptance testing. Flight acceptance testing of these cryocoolers includes thermal performance mapping over a range of reject temperatures, launch vibration testing and thermal cycling testing. Designed conservatively for a 10-year life, the coolers are required to provide simultaneous cooling powers at 95 K and 180 K while rejecting to 300 K with less than 187 W input power to the electronics. The total mass of each cooler and electronics system is 8.7 kg. The radiation-hardened and software-driven control electronics provides cooler control functions which are fully re-configurable in orbit. These functions include precision temperature control to better than 100 mK p-p. This two-stage cooler has heritage to the 12 Northrop Grumman Aerospace Systems (NGAS) coolers currently on orbit, 2 of which have been operating for more than 11.5 years.

  6. Systems aspects of COBE science data compression

    NASA Technical Reports Server (NTRS)

    Freedman, I.; Boggess, E.; Seiler, E.

    1993-01-01

    A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.

  7. Cooperative GN&C development in a rapid prototyping environment. [flight software design for space vehicles

    NASA Technical Reports Server (NTRS)

    Bordano, Aldo; Uhde-Lacovara, JO; Devall, Ray; Partin, Charles; Sugano, Jeff; Doane, Kent; Compton, Jim

    1993-01-01

    The Navigation, Control and Aeronautics Division (NCAD) at NASA-JSC is exploring ways of producing Guidance, Navigation and Control (GN&C) flight software faster, better, and cheaper. To achieve these goals NCAD established two hardware/software facilities that take an avionics design project from initial inception through high fidelity real-time hardware-in-the-loop testing. Commercially available software products are used to develop the GN&C algorithms in block diagram form and then automatically generate source code from these diagrams. A high fidelity real-time hardware-in-the-loop laboratory provides users with the capability to analyze mass memory usage within the targeted flight computer, verify hardware interfaces, conduct system level verification, performance, acceptance testing, as well as mission verification using reconfigurable and mission unique data. To evaluate these concepts and tools, NCAD embarked on a project to build a real-time 6 DOF simulation of the Soyuz Assured Crew Return Vehicle flight software. To date, a productivity increase of 185 percent has been seen over traditional NASA methods for developing flight software.

  8. Analysis of Source Selection Methods and Performance Outcomes: Lowest Price Technically Acceptable vs. Tradeoff in Air Force Acquisitions

    DTIC Science & Technology

    2015-12-01

    issues. A weighted mean can be used in place of the grand mean3 and the STATA software automatically handles the assignment of the sums of squares. Thus...between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and

  9. Process Acceptance and Adoption by IT Software Project Practitioners

    ERIC Educational Resources Information Center

    Guardado, Deana R.

    2012-01-01

    This study addresses the question of what factors determine acceptance and adoption of processes in the context of Information Technology (IT) software development projects. This specific context was selected because processes required for managing software development projects are less prescriptive than in other, more straightforward, IT…

  10. Assessing contextual factors that influence acceptance of pedestrian alerts by a night vision system.

    PubMed

    Källhammer, Jan-Erik; Smith, Kip

    2012-08-01

    We investigated five contextual variables that we hypothesized would influence driver acceptance of alerts to pedestrians issued by a night vision active safety system to inform the specification of the system's alerting strategies. Driver acceptance of automotive active safety systems is a key factor to promote their use and implies a need to assess factors influencing driver acceptance. In a field operational test, 10 drivers drove instrumented vehicles equipped with a preproduction night vision system with pedestrian detection software. In a follow-up experiment, the 10 drivers and 25 additional volunteers without experience with the system watched 57 clips with pedestrian encounters gathered during the field operational test. They rated the acceptance of an alert to each pedestrian encounter. Levels of rating concordance were significant between drivers who experienced the encounters and participants who did not. Two contextual variables, pedestrian location and motion, were found to influence ratings. Alerts were more accepted when pedestrians were close to or moving toward the vehicle's path. The study demonstrates the utility of using subjective driver acceptance ratings to inform the design of active safety systems and to leverage expensive field operational test data within the confines of the laboratory. The design of alerting strategies for active safety systems needs to heed the driver's contextual sensitivity to issued alerts.

  11. Computer output microfilm (FR80) systems software documentation, volume 2

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The system consists of a series of programs which convert digital data from magnetic tapes into alpha-numeric characters, graphic plots, and imagery that is recorded on the face of a cathode ray tube. A special camera photographs the face of the tube on microfilm for subsequent display on a film reader. The applicable documents which apply to this system are delineated. The functional relationship between the system software, the standard insert routines, and the applications programs is described; all the applications programs are described in detail. Instructions for locating those documents are presented along with test preparations sheets for all baseline and/or program modification acceptance tests.

  12. Design Your Own Instructional Software: It's Easy.

    ERIC Educational Resources Information Center

    Pauline, Ronald F.

    Computer Assisted Instruction (CAI) is, quite simply, an instance in which instructional content activities are delivered via a computer. Many commercially-available software programs, although excellent programs, may not be acceptable for each individual teacher's classroom. One way to insure that software is not only acceptable but also targets…

  13. Acceptance test report for portable exhauster POR-008/Skid F

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriskovich, J.R.

    1998-07-24

    Portable Exhauster POR-008 was procured via HNF-0490, Specification for a Portable Exhauster System for Waste Tank Ventilation. Prior to taking ownership, acceptance testing was performed at the vendor's facility. However, at the conclusion of testing, a number of issues remained that required resolution before the exhausters could be used by Project W-320. The purpose of the acceptance testing documented by this report was to demonstrate compliance of the exhausters with the performance criteria established within HNF-0490, Rev. 1, following a repair and upgrade effort at Hanford. In addition, data obtained during this testing is required for the resolution of outstanding Non-Conformance Reports (NCRs) and, finally, to demonstrate the functionality of the associated software for the pressure control and high-vacuum exhauster operating modes provided for by W-320. Additional testing not required by the ATP was also performed to assist in the disposition and close-out of the receiving inspection report and to provide application design information (system curve). Results of this testing are also captured within this document.

  14. Study of fault tolerant software technology for dynamic systems

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Zacharias, G. L.

    1985-01-01

    The major aim of this study is to investigate the feasibility of using systems-based failure detection isolation and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. Finally, the feasibility of using fault-tolerant software in flight software is investigated. In particular, possible system and version instabilities, and functional performance degradation that may occur in N-Version programming applications to flight software are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.

  15. Further development of the dynamic gas temperature measurement system. Volume 2: Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Stocks, Dana R.

    1986-01-01

    The Dynamic Gas Temperature Measurement System compensation software accepts digitized data from two different diameter thermocouples and computes a compensated frequency response spectrum for one of the thermocouples. Detailed discussions of the physical system, analytical model, and computer software are presented in this volume and in Volume 1 of this report under Task 3. Computer program software restrictions and test cases are also presented. Compensated and uncompensated data may be presented in either the time or frequency domain. Time domain data are presented as instantaneous temperature vs time. Frequency domain data may be presented in several forms such as power spectral density vs frequency.

  16. Software for roof defects recognition on aerial photographs

    NASA Astrophysics Data System (ADS)

    Yudin, D.; Naumov, A.; Dolzhenko, A.; Patrakova, E.

    2018-05-01

    The article presents information on software for roof defect recognition on aerial photographs made with drones. An aerial image segmentation mechanism is described. It allows detecting roof defects: unsmooth areas that cause water stagnation after rain. It is shown that the HSV-transformation approach allows quick detection of stagnation areas, their size and perimeters, but is sensitive to shadows and changes of roofing type. A deep fully convolutional network (FCN) software solution eliminates this drawback. The test data set consists of roofing photos with defects and binary masks for them. The FCN approach gave acceptable image segmentation results in terms of average Dice metric value. This software can be used to automate the inspection of roof conditions in the production sector and in housing and utilities infrastructure.
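
    The HSV step can be illustrated with a minimal sketch (the 0.35 brightness threshold and the pixel format are assumptions for illustration; the paper's actual criteria are not given in the record): convert each RGB pixel to HSV and flag low-value (dark) pixels as candidate stagnation areas.

```python
import colorsys

def stagnation_mask(pixels, v_max=0.35):
    """Flag pixels whose HSV value (brightness) falls below v_max.

    pixels: iterable of (r, g, b) tuples with components in 0..255.
    The 0.35 threshold is illustrative only.
    """
    mask = []
    for r, g, b in pixels:
        _, _, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(v < v_max)
    return mask

# A dark puddle-like pixel is flagged; a bright roof pixel is not.
print(stagnation_mask([(30, 30, 40), (220, 210, 200)]))  # [True, False]
```

    Such a per-pixel rule is fast but, as the abstract notes, shadows are also dark in HSV value, which is exactly the failure mode the FCN is brought in to fix.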

  17. Software Manages Documentation in a Large Test Facility

    NASA Technical Reports Server (NTRS)

    Gurneck, Joseph M.

    2001-01-01

    The 3MCS computer program assists an instrumentation engineer in performing the three essential functions of design, documentation, and configuration management of measurement and control systems in a large test facility. Services provided by 3MCS are acceptance of input from multiple engineers and technicians working at multiple locations; standardization of drawings; automated cross-referencing; identification of errors; listing of components and resources; downloading of test settings; and provision of information to customers.

  18. China SLAT Plan Template

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietrich, Richard E.

    2016-07-01

    This document serves as the System-Level Acceptance Test (SLAT) Plan for Site Name, City, Country. This test plan is to provide independent testing of the Radiation Detection System (RDS) installed at Site Name to verify that Customs has been delivered a fully-functioning system as required by all contractual commitments. The system includes all installed hardware and software components. The SLAT plan will verify that separate components are working individually and collectively from a system perspective.

  19. Software for Optimizing Quality Assurance of Other Software

    NASA Technical Reports Server (NTRS)

    Feather, Martin; Cornford, Steven; Menzies, Tim

    2004-01-01

    Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software.
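
    The allocation problem described above can be approximated by a simple greedy heuristic. The sketch below is an assumption-laden illustration, not the actual tool from the record: activities are ranked by risk reduction per unit cost and selected until the budget is exhausted.

```python
def plan_assurance(activities, budget):
    """Greedy sketch of assurance-resource allocation (illustrative only).

    activities: list of (name, cost, risk_reduced) tuples; cost and
    risk_reduced are in arbitrary consistent units.
    Returns the names chosen, best risk-per-cost first, within budget.
    """
    chosen, spent = [], 0.0
    for name, cost, risk in sorted(activities,
                                   key=lambda a: a[2] / a[1],
                                   reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical costs/benefits for three of the activities the abstract lists.
acts = [("code inspection", 3, 9), ("unit tests", 4, 8), ("design review", 2, 3)]
print(plan_assurance(acts, budget=7))  # ['code inspection', 'unit tests']
```

    A greedy ratio rule is only a first cut at this knapsack-style problem; the work reported in the record applies a genuine optimization over risk models rather than this heuristic.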

  20. NEXT GENERATION ANALYSIS SOFTWARE FOR COMPONENT EVALUATION - Results of Rotational Seismometer Evaluation

    NASA Astrophysics Data System (ADS)

    Hart, D. M.; Merchant, B. J.; Abbott, R. E.

    2012-12-01

    The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool that is used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing everything about a test in a relational database that is required to reproduce the results of that test. TALENT provides a simple, yet powerful, user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle, or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data was collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.

  1. 40 CFR 86.1830-01 - Acceptance of vehicles for emission testing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... tolerance range. The manufacturer will determine which components affect emissions using good engineering... hardware and software must be installed and operational during all mileage accumulation after the 5000-mile... representativeness of the emission results will not be affected. Manufacturers shall use good engineering judgment in...

  2. Effectiveness comparison of partially executed t-way test suites generated by existing strategies

    NASA Astrophysics Data System (ADS)

    Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur

    2015-05-01

    Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors as well as strict time-to-market deadlines are amongst the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e. one based on t-way parameter interaction, called t-way testing) can be effective in reducing the number of test cases without affecting the fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Here, there is a need for test engineers to measure the effectiveness of a partially executed test suite in order to assess the risk they have to take. Motivated by the abovementioned problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies using the tuples coverage method. With it, test engineers can predict the effectiveness of the testing process if only part of the original test cases is executed.
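
    The tuples coverage idea for t = 2 can be sketched directly (an illustrative reconstruction, not the paper's implementation): enumerate every 2-way parameter-value pair the full system requires, then report what fraction the executed subset actually covers.

```python
from itertools import combinations, product

def pair_coverage(executed, parameters):
    """Fraction of required 2-way tuples covered by the executed tests.

    executed: list of test cases, each a tuple with one value per parameter.
    parameters: list of value lists, one per parameter position.
    """
    required = set()
    for (i, vi), (j, vj) in combinations(enumerate(parameters), 2):
        for a, b in product(vi, vj):
            required.add((i, a, j, b))
    covered = set()
    for test in executed:
        for (i, a), (j, b) in combinations(enumerate(test), 2):
            covered.add((i, a, j, b))
    return len(covered & required) / len(required)

# Two boolean parameters: running only 2 of the 4 tests covers half the pairs.
params = [[0, 1], [0, 1]]
print(pair_coverage([(0, 0), (1, 1)], params))  # 0.5
```

    The same enumeration generalizes to higher interaction strengths by replacing the pairwise combinations with t-element ones, at a corresponding cost in tuple count.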

  3. Promon's participation in the Brasilsat program: first & second generations

    NASA Astrophysics Data System (ADS)

    Depaiva, Ricardo N.

    This paper presents an overview of the Brasilsat program, space and ground segments, developed by Hughes and Promon. Promon is a Brazilian engineering company that has been actively participating in the Brasilsat Satellite Telecommunications Program since its beginning. During the first generation, as subcontractor of the Spar/Hughes/SED consortium, Promon had a significant participation in the site installation of the Ground Segment, including the antennas. During the second generation, as partner of a consortium with Hughes, Promon participated in the upgrade of Brasilsat's Ground Segment systems: the TT&C (TCR1, TCR2, and SCC) and the COCC (Communications and Operations Control Center). This upgrade consisted of the design and development of hardware and software to support the second generation requirements, followed by integration and tests, factory acceptance tests, transport to site, site installation, site acceptance tests and warranty support. The upgraded systems are distributed over four sites with remote access to the main ground station. The solutions adopted provide a high level of automation, and easy operator interaction. The hardware and software technologies were selected to provide the flexibility to incorporate new technologies and services from the demanding satellite telecommunications market.

  4. SDO FlatSat Facility

    NASA Technical Reports Server (NTRS)

    Amason, David L.

    2008-01-01

    The goal of the Solar Dynamics Observatory (SDO) is to understand and, ideally, predict the solar variations that influence life and society. Its instruments will measure the properties of the Sun and will take high-definition images of the Sun every few seconds, all day every day. The FlatSat is a high-fidelity electrical and functional representation of the SDO spacecraft bus. It is a high-fidelity test bed for Integration & Test (I&T), flight software, and flight operations. For I&T purposes, FlatSat will be used to develop and dry-run electrical integration procedures, STOL test procedures, page displays, and the command and telemetry database. FlatSat will also serve as a platform for flight software acceptance and systems testing of the flight software system components, including the spacecraft main processors, power supply electronics, attitude control electronics, gimbal control electronics, and the S-band communications card. FlatSat will also benefit the flight operations team through post-launch flight software code and table update development and verification, and verification of new and updated flight operations products. This document highlights the benefits of FlatSat; describes the building of FlatSat; provides FlatSat facility requirements, access roles, and responsibilities; and discusses FlatSat mechanical and electrical integration and functional testing.

  5. Software for Analyzing Laminar-to-Turbulent Flow Transitions

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01


  6. CoRoTlog

    NASA Astrophysics Data System (ADS)

    Plasson, Ph.

    2006-11-01

    LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit, which is made up of the BEX and DPU assembly. The main part of the work has consisted in validating the DPU software and in testing the BEX/DPU coupling. This work took more than two years due to the central role of the software tested and its technical complexity. The first task in the validation process was to carry out the acceptance tests of the DPU software. These tests consisted in checking each of the 325 requirements identified in the URD (User Requirements Document) and were run in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, like the TC/TM management, the state machine management, the BEX driving, the system monitoring or the maintenance functionalities, were checked in depth. The functionalities associated with the seismology and exoplanetology processing, like the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After having validated the DPU software against the user requirements using a BEX simulator, the next step consisted in coupling the DPU and the BEX in order to check that the formed unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first one was devoted to the functional aspects and the interface tests, the second one to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on the use of full images representative of a realistic sky as inputs.
These tests were also based on the use of a reference set of windows and parameters, which was provided by the scientific team and was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were run in a configuration using either a BCC simulator or a real BCC coupled to a video simulator, to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel to the phase of the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software were in good conformity with those specified in the URD and that the obtained numerical precision corresponded to that expected. Forty test cases were defined covering the fine and coarse angular error measurement processing, the rejection of bright pixels, the subtraction of the offset and the sky background, the photometry algorithms, the SAA handling and reference image management. For each test case, the LESIA scientific team produced, by simulation, using the model instrument, the dynamic data files and the parameter sets used to feed the DPU on the one hand and, on the other, a model of the onboard software. These data files correspond to FITS images (black windows, star windows, offset windows) containing more or less disturbances and making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools was developed by LESIA (Software Ground Support Equipment, hereafter "SGSE"). Thanks to their versatility and modularity, these software testing tools were actually used during all the activities of integration, tests and validation of the instrument and its subsystems CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA.
The objective was to have a software system allowing the users (validation team of the onboard software, instrument integration team, etc.) to remotely control and monitor the whole instrument, or only one of its subsystems, such as the DPU coupled to a BEX simulator or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to run test procedures implemented as scripts organized into libraries, to record the telemetries and housekeeping data in a database, and to be able to carry out post-mortem analyses.

  7. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. 
A small sample from a prototype tabular spec application is shown.

  8. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study.

    PubMed

    Gambacorta, Maria A; Boldrini, Luca; Valentini, Chiara; Dinapoli, Nicola; Mattiucci, Gian C; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D; Valentini, Vincenzo

    2016-07-05

    To validate autocontouring software (AS) in clinical practice, including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer, and the overlap and time criteria that have to be verified to allow the use of AS, were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distance (MSHD) and Total Time saving (TT) were analyzed. Two expert radiation oncologists reviewed the CT scans of 44 patients and agreed the reference CTV: the first 14 consecutive cases were used to populate the software atlas and 30 were used as tests. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations, measured with MDSC and MSHD, and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT sparing ≥ 50%. At least 2 criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, MSHD 2.00 mm and the TT saving 55.5% between group A and group B. MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met the acceptability criteria with the present version.
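
    The Dice overlap criterion used above has a short closed form, 2|A∩B| / (|A| + |B|); a minimal sketch (illustrative, operating on flat binary masks rather than the study's CT contours):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    a, b: equal-length flat sequences of 0/1 voxels.
    Returns 2|A∩B| / (|A| + |B|); defined as 1.0 for two empty masks.
    """
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2 * inter / size if size else 1.0

# Manual vs. automatic contour agreeing on 1 of 2 labeled voxels each.
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

    Against the study's threshold, a contour pair would pass this criterion when the value is at least 0.75; the Hausdorff criterion additionally bounds the worst-case boundary distance, which Dice alone does not capture.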

  9. Human-rated Safety Certification of a High Voltage Robonaut Lithium-ion Battery

    NASA Technical Reports Server (NTRS)

    Jeevarajan, Judith; Yayathi, S.; Johnson, M.; Waligora, T.; Verdeyen, W.

    2013-01-01

    NASA's rigorous certification process is being followed for the R2 high-voltage battery program for use of R2 on the International Space Station (ISS). Rigorous development testing at appropriate levels, out to credible off-nominal conditions, and review of test data led to design improvements for safety at the virtual-cell, cartridge, and battery levels. Tests were carried out at all levels to confirm that both hardware and software controls work. Stringent flight acceptance testing of the flight battery will be completed before launch for mission use on the ISS.

  10. Brief Report: A Mobile Application to Treat Prosodic Deficits in Autism Spectrum Disorder and Other Communication Impairments: A Pilot Study.

    PubMed

    Simmons, Elizabeth Schoen; Paul, Rhea; Shic, Frederick

    2016-01-01

    This study examined the acceptability of a mobile application, SpeechPrompts, designed to treat prosodic disorders in children with ASD and other communication impairments. Ten speech-language pathologists (SLPs) in public schools and 40 of their students, aged 5-19 years with prosody deficits, participated. Students received treatment with the software over eight weeks. Pre- and post-treatment speech samples and student engagement data were collected. Feedback on the utility of the software was also obtained. SLPs implemented the software with their students in an authentic education setting. Student engagement ratings indicated students' attention to the software was maintained during treatment. Although more testing is warranted, post-treatment prosody ratings suggest that SpeechPrompts has the potential to be a useful tool in the treatment of prosodic disorders.

  11. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RIECK, C.A.

    1999-02-23

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C-1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.

  12. An evaluation of the documented requirements of the SSP UIL and a review of commercial software packages for the development and testing of UIL prototypes

    NASA Technical Reports Server (NTRS)

    Gill, Esther Naomi

    1986-01-01

    A review was conducted of software packages currently on the market that might be integrated with the interface language and aid in reaching the objectives of customization, standardization, transparency, reliability, maintainability, language substitution, expandability, portability, and flexibility. Recommendations are given for the best choices in hardware and software acquisition for in-house testing of these possible integrations. Recommended acquisitions included tools to aid expert-system development and/or novice program development, artificial-intelligence voice technology, touch-screen, joystick, or mouse utilization, and networking. Other recommendations concerned using the language Ada for the user interface language shell, because of its high level of standardization, its structure, its ability to accept and execute programs written in other programming languages, and its DOD ownership and control, and keeping the user interface language simple so that a multitude of users will find the commercialization of space within their realm of possibility, which is, after all, the purpose of the Space Station.

  13. Gas turbine engines and transmissions for bus demonstration program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigro, D.N.

    1981-11-01

    This final report fulfills the contractual requirements of Contract DE-AC02-78CS54867, which required the delivery of 11 Allison GT 404-4 Industrial Gas Turbine Engines and five HT740CT and six V730CT Allison Automatic Transmissions for the Greyhound and Transit Coaches, respectively. In addition, software items such as cost reports, technical reports, installation drawings, acceptance test data, and parts lists were required. Engine and transmission deliveries were completed with shipment of the last power package on 11 April 1980. Software items were submitted as required during the performance period of this contract.

  14. Matching software practitioner needs to researcher activities

    NASA Technical Reports Server (NTRS)

    Feather, M. S.; Menzies, T.; Connelly, J. R.

    2003-01-01

    We present an approach to matching software practitioners' needs to software researchers' activities. It uses an accepted taxonomical software classification scheme as an intermediary, in terms of which practitioners express needs and researchers express activities.

  15. Iteratively Developing an mHealth HIV Prevention Program for Sexual Minority Adolescent Men

    PubMed Central

    Prescott, Tonya L.; Philips, Gregory L.; Bull, Sheana S.; Parsons, Jeffrey T.; Mustanski, Brian

    2015-01-01

    Five activities were implemented between November 2012 and June 2014 to develop an mHealth HIV prevention program for adolescent gay, bisexual, and queer men (AGBM): (1) focus groups to gather acceptability of the program components; (2) ongoing development of content; (3) Content Advisory Teams to confirm the tone, flow, and understandability of program content; (4) an internal team test to alpha test software functionality; and (5) a beta test to test the protocol and intervention messages. Findings suggest that AGBM preferred positive and friendly content that, at the same time, did not try to sound like a peer. They deemed the number of daily text messages (i.e., 8–15 per day) to be acceptable. The Text Buddy component was well received, but youth needed concrete direction about appropriate discussion topics. AGBM also found the self-safety assessment acceptable. Its feasible implementation in the beta test suggests that AGBM can actively self-determine their potential danger when participating in sexual health programs. Partnering with the target population in intervention development is critical to ensure that a salient final product and feasible protocol are created. PMID:26238038

  16. Electric power system test and verification program

    NASA Technical Reports Server (NTRS)

    Rylicki, Daniel S.; Robinson, Frank, Jr.

    1994-01-01

    Space Station Freedom's (SSF's) electric power system (EPS) hardware and software verification is performed at all levels of integration, from components to assembly and system level tests. Careful planning is essential to ensure the EPS is tested properly on the ground prior to launch. The results of the test performed on breadboard model hardware and analyses completed to date have been evaluated and used to plan for design qualification and flight acceptance test phases. These results and plans indicate the verification program for SSF's 75-kW EPS would have been successful and completed in time to support the scheduled first element launch.

  17. SPE (trademark) Oxygen Generator Assembly (OGA). (Refurbishment of the technology demonstrator LFSPE oxygen generation subsystem)

    NASA Technical Reports Server (NTRS)

    Roy, Robert J.

    1995-01-01

    The SPE Oxygen Generator Assembly (OGA) has been modified to correct operational deficiencies present in the original system, and to effect changes to the system hardware and software such that its operating conditions are consistent with the latest configuration requirements for the International Space Station Alpha (ISSA). The effectiveness of these changes has recently been verified through a comprehensive test program which saw the SPE OGA operate for over 740 hours at various test conditions, including over 690 hours, or approximately 460 cycles, simulating the orbit of the space station. This report documents the changes made to the SPE OGA, presents and discusses the test results from the acceptance test program, and provides recommendations for additional development activities pertinent to evolution of the SPE OGA to a flight configuration. Copies of the test data from the acceptance test program are provided with this report on 3.5 inch diskettes in self-extracting archive files.

  18. Quality-control issues on high-resolution diagnostic monitors.

    PubMed

    Parr, L F; Anderson, A L; Glennon, B K; Fetherston, P

    2001-06-01

    Previous literature indicates a need for more data collection in the area of quality control of high-resolution diagnostic monitors. Throughout acceptance testing, which began in June 2000, the stability of monitor calibration was analyzed. Although image quality on all monitors was found to be acceptable upon initial acceptance testing using VeriLUM software by Image Smiths, Inc (Germantown, MD), it was determined to be unacceptable during the clinical phase of acceptance testing. High-resolution monitors were evaluated for quality assurance on a weekly basis from installation through acceptance testing and beyond. During clinical utilization determination (CUD), monitor calibration was identified as a problem, and the manufacturer returned and recalibrated all workstations. From that time through final acceptance testing, high-resolution monitor calibration and monitor failure rate remained a problem. The monitor vendor then returned to the site to address these areas. Monitor defocus was still noticeable, and calibration checks were increased to three times per week. White- and black-level drift on medium-resolution monitors had been attributed to raster size settings. Measurements of white and black level at several different size settings were taken to determine the effect of size on white- and black-level settings. Black level remained steady with size change. White level appeared to increase by 2.0 cd/m² for every 0.1-inch decrease in horizontal raster size. This was determined not to be the cause of the observed brightness drift. Frequency of calibration/testing is an issue in a clinical environment. The increased frequency required at our site cannot be sustained. The medical physics division cannot provide dedicated personnel to conduct the quality-assurance testing on all monitors at this interval due to other physics commitments throughout the hospital. Monitor access is also an issue due to radiologists' need to read images. Some workstations are in use 7 AM to 11 PM daily. An appropriate monitor calibration frequency must be established during acceptance testing to ensure unacceptable drift is not masked by excessive calibration frequency. Standards for acceptable black-level and white-level drift also need to be determined. The monitor vendor and hospital staff agree that, currently, very small printed text is an acceptable method of determining monitor blur; however, a better method is being pursued. Although monitors may show acceptable quality during initial acceptance testing, they need to show sustained quality during the clinical acceptance-testing phase. Defocus, black level, and white level are image-quality concerns that need to be evaluated during the clinical phase of acceptance testing. Image-quality deficiencies can have a negative impact on patient care and raise serious medical-legal concerns. The attention to quality control required of the hospital staff needs to be realistic and not have a significant impact on radiology workflow.

  19. Encouraging Learning of Industry Technology: A Merchandising Example

    ERIC Educational Resources Information Center

    Reilly, Andrew; Huss, Megan; Stoel, Leslie

    2005-01-01

    The application of the technology acceptance model to a merchandising course teaching industry software was evaluated. Based on technology acceptance research, industry software was presented emphasizing ease-of-use and usefulness. The final course project gave students a quasi real-life experience of combining merchandising skills with the…

  20. Autonomous Real Time Requirements Tracing

    NASA Technical Reports Server (NTRS)

    Plattsmier, George I.; Stetson, Howard K.

    2014-01-01

    One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX™ Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739), and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX™ Language for development of autonomous command and control software. The Timeliner-TLX™ system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. This internal reporting of the executing line number unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX™ sequence, as the line number reporting is embedded inside the Timeliner-TLX™ execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference. This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are met during real-time execution of the targeted system. It is envisioned that real-time requirements tracing will greatly assist the movement of auto-procedures to flight software, enhancing the software assurance of auto-procedures and also their acceptance as reliable commanders.
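The tracing idea described above can be sketched very roughly as follows. This is an illustrative monitor, not Timeliner-TLX code; the line numbers, requirement IDs, and log format are all hypothetical.

```python
# Illustrative sketch: a companion monitor maps the currently executing
# line number of a sequence to SRS requirement IDs and logs each
# requirement the first time it is met. All IDs below are invented.

LINE_TO_REQ = {
    12: "SRS-041",   # hypothetical: open transfer valve
    27: "SRS-042",   # hypothetical: verify flow rate within limits
    55: "SRS-050",   # hypothetical: close valve on completion
}

def trace(executed_lines):
    """Return a log of requirements met, given observed line numbers."""
    met, log = set(), []
    for line in executed_lines:
        req = LINE_TO_REQ.get(line)
        if req and req not in met:
            met.add(req)
            log.append(f"line {line}: requirement {req} met")
    return log

for entry in trace([3, 12, 12, 27, 40, 55]):
    print(entry)
```

Because the executing line number is reported by the engine itself, the monitor observes progress without instrumenting the target sequence.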

  2. Nonparametric Statistics Test Software Package.

    DTIC Science & Technology

    1983-09-01

    statistics because of their acceptance in the academic world, the availability of computer support, and flexibility in model building. Nonparametric...

  3. Lessons learned from the usability assessment of home-based telemedicine systems.

    PubMed

    Agnisarman, Sruthy Orozhiyathumana; Chalil Madathil, Kapil; Smith, Kevin; Ashok, Aparna; Welch, Brandon; McElligott, James T

    2017-01-01

    At-home telemedicine visits are quickly becoming an acceptable alternative to in-person patient visits. However, little work has been done to understand the usability of these home-based telemedicine solutions. It is critical for user acceptance and real-world applicability to evaluate available telemedicine solutions within the context-specific needs of the users of this technology. To address this need, this study evaluated the usability of four home-based telemedicine software platforms: Doxy.me, Vidyo, VSee, and Polycom. Using a within-subjects experimental design, twenty participants were asked to complete a telemedicine session involving several tasks using the four platforms. Upon completion of these tasks for each platform, participants completed the IBM Computer System Usability Questionnaire (CSUQ) and the NASA Task Load Index test. Upon completing the tasks on all four platforms, the participants completed a final post-test subjective questionnaire ranking the platforms based on their preference. Of the twenty participants, 19 completed the study. Statistically significant differences among the telemedicine software platforms were found for task completion time, total workload, mental demand, effort, frustration, preference ranking, and computer system usability scores. Usability problems with installation and account creation led to high mental demand and task completion time, suggesting the participants preferred a system without such requirements. The majority of the usability issues were identified at the telemedicine initiation phase. The findings from this study can be used by software developers to develop user-friendly telemedicine systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Test Driven Development of a Parameterized Ice Sheet Component

    NASA Astrophysics Data System (ADS)

    Clune, T.

    2011-12-01

    Test-driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability to scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
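The testing pattern the abstract describes for numerical code can be sketched in miniature: compare an implementation against a simple closed-form case, with a tolerance derived from a realistic error estimate. The integrator and the test below are a generic illustration, not code from the ice sheet component.

```python
# Sketch of a TDD-style unit test for numerical code: the expected value
# comes from a closed-form expression, and the tolerance from the
# method's known error behavior rather than an arbitrary epsilon.

import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for the integral of f over [a, b], n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def test_trapezoid_against_closed_form():
    # Closed form: the integral of sin on [0, pi] is exactly 2.
    approx = trapezoid(math.sin, 0.0, math.pi, 1000)
    # Trapezoid error is O(h^2); with h = pi/1000 the error is ~2.6e-6,
    # so 1e-5 is a realistic (not merely hopeful) bound.
    assert abs(approx - 2.0) < 1e-5

test_trapezoid_against_closed_form()
print("ok")
```

Writing the tolerance from the error estimate is what keeps such a test meaningful: a bound that is too loose passes buggy code, one that is too tight fails correct code.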

  5. The integration of automated knowledge acquisition with computer-aided software engineering for space shuttle expert systems

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1990-01-01

    A prediction was made that the terms expert systems and knowledge acquisition would begin to disappear over the next several years. This is not because they are falling into disuse; it is rather that practitioners are realizing that they are valuable adjuncts to software engineering, in terms of problem domains addressed, user acceptance, and in development methodologies. A specific problem was discussed, that of constructing an automated test analysis system for the Space Shuttle Main Engine. In this domain, knowledge acquisition was part of requirements systems analysis, and was performed with the aid of a powerful inductive ESBT in conjunction with a computer aided software engineering (CASE) tool. The original prediction is not a very risky one -- it has already been accomplished.

  6. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.

  7. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  8. Certification of highly complex safety-related systems.

    PubMed

    Reinert, D; Schaefer, M

    1999-01-01

    The BIA has now 15 years of experience with the certification of complex electronic systems for safety-related applications in the machinery sector. Using the example of machining centres this presentation will show the systematic procedure for verifying and validating control systems using Application Specific Integrated Circuits (ASICs) and microcomputers for safety functions. One section will describe the control structure of machining centres with control systems using "integrated safety." A diverse redundant architecture combined with crossmonitoring and forced dynamization is explained. In the main section the steps of the systematic certification procedure are explained showing some results of the certification of drilling machines. Specification reviews, design reviews with test case specification, statistical analysis, and walk-throughs are the analytical measures in the testing process. Systematic tests based on the test case specification, Electro Magnetic Interference (EMI), and environmental testing, and site acceptance tests on the machines are the testing measures for validation. A complex software driven system is always undergoing modification. Most of the changes are not safety-relevant but this has to be proven. A systematic procedure for certifying software modifications is presented in the last section of the paper.

  9. ICT Teachers' Acceptance of "Scratch" as Algorithm Visualization Software

    ERIC Educational Resources Information Center

    Saltan, Fatih; Kara, Mehmet

    2016-01-01

    This study aims to investigate the acceptance of ICT teachers pertaining to the use of Scratch as an Algorithm Visualization (AV) software in terms of perceived ease of use and perceived usefulness. An embedded mixed method research design was used in the study, in which qualitative data were embedded in quantitative ones and used to explain the…

  10. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    PubMed

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to the highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA-epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory tests results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software, which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pairs selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standard ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk; likewise, the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. 
The article presents R code throughout.
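The paper's first point can be sketched with only the standard library (the article itself presents R code; Python is used here for consistency with the other sketches in this collection). The plan parameters below are illustrative, not taken from the paper.

```python
# Exact hypergeometric acceptance probability for a single-sampling plan
# (lot size N, D defectives, sample size n, acceptance number c), versus
# the binomial approximation the paper argues is unnecessary.

from math import comb

def accept_prob_hypergeom(N, D, n, c):
    """P(accept) = P(X <= c), X ~ Hypergeometric(N, D, n)."""
    return sum(comb(D, x) * comb(N - D, n - x) for x in range(c + 1)) / comb(N, n)

def accept_prob_binomial(p, n, c):
    """Binomial approximation with defect rate p = D/N (sampling with replacement)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

# Illustrative plan: lot of 100, 10 defectives, sample 20, accept if <= 1 defective.
N, D, n, c = 100, 10, 20, 1
exact = accept_prob_hypergeom(N, D, n, c)
approx = accept_prob_binomial(D / N, n, c)
print(f"exact = {exact:.4f}, binomial approx = {approx:.4f}")
```

With a small lot relative to the sample, the two probabilities differ noticeably, which is the paper's argument for using the exact distribution directly.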

  12. Exploring User Acceptance of FOSS: The Role of the Age of the Users

    NASA Astrophysics Data System (ADS)

    Gallego, M. Dolores; Bueno, Salvador

    The free and open source software (FOSS) movement arose essentially as an answer to an evolution in the software market characterized by the closing of source code. Furthermore, certain FOSS characteristics, such as (1) the advance of the movement and (2) the attractiveness of voluntary and cooperative work, have increased users' interest in free software. Traditionally, research in FOSS has focused on identifying individuals' personal motives for participating in the development of a FOSS project, on analyzing specific FOSS solutions, or on the FOSS movement itself. Nevertheless, the advantages of FOSS for users and the effect of demographic dimensions on user acceptance of FOSS have been two research topics that have received little attention. Specifically, this paper focuses on the influence of users' age on FOSS acceptance. Based on the literature, users' age is an essential demographic dimension for explaining information systems acceptance. With this purpose, the authors have developed a research model based on the Technology Acceptance Model (TAM).

  13. Cargo Movement Operations System (CMOS). Software Requirements Specification

    DTIC Science & Technology

    1990-03-12


  14. Enterprise Architecture Planning in developing A planning Information System: a Case Study of Semarang State University

    NASA Astrophysics Data System (ADS)

    Budiman, Kholiq; Prahasto, Toni; Kusumawardhani, Amie

    2018-02-01

    This research applied an integrated design and development of a planning information system, designed using Enterprise Architecture Planning (EAP). Frequent discrepancies between budget planning and realization, which result in ineffective planning, were one of the motivations for this research. Designing with EAP aims to keep development aligned with the strategic direction of the organization. In practice, EAP is carried out in several stages: planning initiation, identification and definition of business functions, followed by architectural design and a plan for implementing the enterprise architecture that has been built. In addition to the enterprise architecture design, this research carried out the implementation, which was tested with black-box and white-box methods. Black-box testing was used to test the fundamental behavior of the software through two kinds of testing: user acceptance testing and software functionality testing. White-box testing was used to test the effectiveness of the code, by means of unit testing. The black-box and white-box tests conducted on the integrated planning information system were declared successful. However, success in software testing alone cannot establish that anything changed relative to the situation before the system was developed. To assess the success of the system's implementation, the authors therefore tested the consistency between planning data and budget realization both before and after use of the information system, by computing the difference between planned and realized times.
From the tabulated data, the planning information system reduces the difference between planning time and realization time, which indicates that it can motivate planning units to realize the budgets they have designed. It also shows that the value chain of the planning information system has implications for budget realization.
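The black-box/white-box distinction the study applies can be sketched in miniature. The function and checks below are hypothetical, not part of the Semarang system:

```python
def remaining_budget(planned, realized):
    """Hypothetical planning-system function: budget left after realization."""
    if planned < 0 or realized < 0:
        raise ValueError("amounts must be non-negative")
    return planned - realized

def black_box_check():
    # Black-box (functional / user acceptance) view: exercise only the
    # observable behavior, with no knowledge of the implementation.
    assert remaining_budget(100, 40) == 60
    assert remaining_budget(0, 0) == 0

def white_box_check():
    # White-box view: target a specific code path, here the
    # input-validation branch.
    try:
        remaining_budget(100, -1)
    except ValueError:
        return True
    raise AssertionError("validation branch not taken")

black_box_check()
white_box_check()
print("all checks passed")
```

The same function is thus covered twice, from the user's perspective and from the code-path perspective, mirroring the paper's pairing of acceptance testing with unit testing.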

  15. International Space Station alpha remote manipulator system workstation controls test report

    NASA Astrophysics Data System (ADS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-05-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off', or on-screen software, controls not be considered. Generally, the operators preferred hardware controls although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  16. International Space Station alpha remote manipulator system workstation controls test report

    NASA Technical Reports Server (NTRS)

    Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick

    1994-01-01

    Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off', or on-screen software, controls not be considered. Generally, the operators preferred hardware controls although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.

  17. The Implications of Using Integrated Software Support Environment for Design of Guidance and Control Systems Software

    DTIC Science & Technology

    1990-02-01

    inspections are performed before each formal review of each software life cycle phase. * Required software audits are performed. * The software is acceptable... Audits: Software audits are performed by SQA consistent with the general audit rules, and an audit report is prepared. Software Quality Inspection (SQI)...DSD Software Development Method... DEFINITION OF ACRONYMS: MACH, Méthode d'Analyse et de Conception Hiérarchisée

  18. Retinopathy of Prematurity-assist: Novel Software for Detecting Plus Disease

    PubMed Central

    Pour, Elias Khalili; Pourreza, Hamidreza; Zamani, Kambiz Ameli; Mahmoudi, Alireza; Sadeghi, Arash Mir Mohammad; Shadravan, Mahla; Karkhaneh, Reza; Pour, Ramak Rouhi

    2017-01-01

    Purpose: To design software with a novel algorithm that analyzes tortuosity and vascular dilatation in fundal images of retinopathy of prematurity (ROP) patients with acceptable accuracy for detecting plus disease. Methods: Eighty-seven well-focused fundal images taken with RetCam were classified into three groups of plus, non-plus, and pre-plus by agreement between three ROP experts. The automated algorithms in this study were designed based on two methods, a curvature measure and a distance transform, for assessment of tortuosity and vascular dilatation, respectively, as the two major parameters of plus disease detection. Results: Thirty-eight plus, 12 pre-plus, and 37 non-plus images, as classified by the three experts, were tested by the automated algorithm, and the software's grouping was evaluated against the expert vote using three different classifiers: k-nearest neighbor, support vector machine, and multilayer perceptron network. The plus, pre-plus, and non-plus images were analyzed with 72.3%, 83.7%, and 84.4% accuracy, respectively. Conclusions: The new automated algorithm used in this pilot scheme for diagnosis and screening of patients with plus ROP has acceptable accuracy. With further improvements, it may become particularly useful, especially in centers without a person skilled in the ROP field. PMID:29022295
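The distance transform behind the vascular-dilatation measure can be sketched with a stdlib-only BFS variant (production implementations use exact Euclidean transforms; the mask below is a toy, not a retinal image):

```python
from collections import deque

def distance_transform(mask):
    """Multi-source BFS distance (4-connectivity, in pixels) from each
    foreground pixel to the nearest background pixel. Inside a binary
    vessel mask, twice this distance at the centerline approximates the
    local vessel width -- the dilation cue used in plus-disease detection."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:          # background pixels are the BFS sources
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

# A 3-pixel-tall horizontal "vessel" band: centerline distance is 2 pixels.
mask = [[0]*7, [0,1,1,1,1,1,0], [0,1,1,1,1,1,0], [0,1,1,1,1,1,0], [0]*7]
d = distance_transform(mask)
print(d[2][3])  # distance at the band's centerline
```
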

  19. A Quantitative Analysis of Open Source Software's Acceptability as Production-Quality Code

    ERIC Educational Resources Information Center

    Fischer, Michael

    2011-01-01

    The difficulty in writing defect-free software has been long acknowledged both by academia and industry. A constant battle occurs as developers seek to craft software that works within aggressive business schedules and deadlines. Many tools and techniques are used in attempt to manage these software projects. Software metrics are a tool that has…

  20. Efficacy of a Newly Designed Cephalometric Analysis Software for McNamara Analysis in Comparison with Dolphin Software.

    PubMed

    Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad

    2015-01-01

    Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly facilitates this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English would greatly help Farsi-speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this descriptive diagnostic study, 150 lateral cephalograms of individuals with normal occlusion were selected in Mashhad and Qazvin, two major cities of Iran mainly populated by the Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured with both Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ on Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. Validity and reliability were tested using the intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin (ICC 0.570-1.0), confirming the validity and optimal efficacy of the newly designed software. According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning, and assessment of treatment outcome.
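Agreement between two measurement tools, as scored above with the intra-class correlation coefficient, can be sketched as follows. This uses the simpler one-way ICC(1,1) form and invented paired measurements; it is not the paper's exact computation (two-way ICC variants are more common in method-comparison studies):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table:
    (MSB - MSW) / (MSB + (k-1)*MSW), from between- and within-subject
    mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, row_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical paired measurements (new software vs. Dolphin) of one
# angular McNamara variable on five cephalograms:
pairs = [(80.1, 80.3), (77.5, 77.4), (82.0, 81.8), (79.2, 79.5), (84.6, 84.4)]
print(round(icc_oneway(pairs), 3))
```

Near-identical pairs drive the within-subject mean square toward zero, so the ICC approaches 1, the pattern the study reports for most variables.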

  1. Reliability of Single-Leg Balance and Landing Tests in Rugby Union; Prospect of Using Postural Control to Monitor Fatigue.

    PubMed

    Troester, Jordan C; Jasmin, Jason G; Duffield, Rob

    2018-06-01

    The present study examined the inter-trial (within-test) and inter-test (between-test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test-retest measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks, following 48 h of rest and similar weekly training loads. Two 20 s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1 m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for the sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results demonstrated acceptable inter-trial reliability only for the force-based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicated improved reliability when three trials were averaged, with force-based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union.
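The coefficient of variation reported alongside the ICCs above can be sketched as a within-subject CV averaged across athletes. This is one common formulation, using invented sway-velocity values; the paper's exact computation may differ:

```python
from statistics import mean, stdev

def typical_cv_percent(trial_pairs):
    """Within-subject coefficient of variation (%): for each athlete, the SD
    of the two trials divided by their mean, averaged across athletes."""
    cvs = [stdev(p) / mean(p) * 100 for p in trial_pairs]
    return mean(cvs)

# Hypothetical sway-velocity scores (mm/s), two trials per athlete:
pairs = [(32.0, 35.1), (28.4, 30.2), (41.0, 38.5), (25.3, 26.0)]
print(round(typical_cv_percent(pairs), 1))
```

A CV in the single digits, as here, would sit at the more reliable end of the 7-15% range the study reports.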

  2. Cargo Movement Operations System (CMOS). Software User’s Manual

    DTIC Science & Technology

    1990-06-27


  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mundy, D; Tryggestad, E; Beltran, C

    Purpose: To develop daily and monthly quality assurance (QA) programs in support of a new spot-scanning proton treatment facility using a combination of commercial and custom equipment and software. Emphasis was placed on efficiency and evaluation of key quality parameters. Methods: The daily QA program was developed to test output, spot size and position, proton beam energy, and image guidance using the Sun Nuclear Corporation rf-DQA™3 device and Atlas QA software. The program utilizes standard Atlas linear accelerator tests repurposed for proton measurements and a custom jig for indexing the device to the treatment couch. The monthly QA program was designed to test mechanical performance, image quality, radiation quality, isocenter coincidence, and safety features. Many of these tests are similar to linear accelerator QA counterparts, but many require customized test design and equipment. Coincidence of imaging, laser marker, mechanical, and radiation isocenters, for instance, is verified using a custom film-based device devised and manufactured at our facility. Proton spot size and position as a function of energy are verified using a custom spot pattern incident on film and analysis software developed in-house. More details concerning the equipment and software developed for monthly QA are included in the supporting document. Thresholds for daily and monthly tests were established via perturbation analysis, early experience, and/or proton system specifications and associated acceptance test results. Results: The periodic QA program described here has been in effect for approximately 9 months and has proven efficient and sensitive to sub-clinical variations in treatment delivery characteristics. Conclusion: Tools and professional guidelines for periodic proton system QA are not as well developed as their photon and electron counterparts. 
The program described here efficiently evaluates key quality parameters and, while specific to the needs of our facility, could be readily adapted to other proton centers.

  4. Software Engineering Laboratory (SEL) cleanroom process model

    NASA Technical Reports Server (NTRS)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where ICs are assembled in dust-free 'clean rooms' to prevent the destructive effects of dust. When applying the cleanroom methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases of the process model life cycle, from the delivery of requirements to the start of acceptance testing, are described. For each defined phase, a set of specific activities is discussed and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  5. Preprototype SAWD subsystem

    NASA Technical Reports Server (NTRS)

    Nalette, T. A.

    1984-01-01

    A regenerable, three-man preprototype solid amine, water desorbed (SAWD) CO2 removal and concentration subsystem was designed, fabricated, and successfully acceptance tested by Hamilton Standard. The preprototype SAWD incorporates a single solid amine canister to perform the CO2 removal function, an accumulator to provide the CO2 storage and delivery function, and a microprocessor which automatically controls the subsystem's sequential operation and performance. The SAWD subsystem was configured to have a CO2 removal and delivery capability at the rate of 0.12 kg/hr (0.264 lb/hr) over the relative humidity range of 35 to 70%. The controller provides fully automatic control over this relative humidity range via custom software generated specifically for the SAWD subsystem. The preprototype SAWD subsystem demonstrated a total of 281 hours (208 cycles) of operation during ten acceptance tests conducted over the 35 to 70% relative humidity range. This operation comprised 178 hours (128 cycles) in the CO2 overboard mode and 103 hours (80 cycles) in the CO2 reduction mode. The average CO2 removal/delivery rate met or exceeded the design specification rate of 0.12 kg/hr (0.264 lb/hr) for all ten acceptance tests.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkington, T.

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. 
Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  7. Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.; Semmel, Glenn S.

    2008-01-01

    The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
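One way to picture such a translation: rules from a high-level specification rendered as rungs in IEC 61131-3 Instruction List style (LD/AND/ANDN/ST), a textual stand-in for graphical ladder logic. The rule format and signal names are invented for illustration; this is not the KSC tool's notation:

```python
def to_ladder_text(rules):
    """Render each (output, inputs) rule as one rung of Instruction List.
    Hypothetical rule format: the output coil is energized when all listed
    input contacts are true; 'NOT x' denotes a normally-closed contact."""
    rungs = []
    for output, inputs in rules:
        ops = []
        for i, term in enumerate(inputs):
            negated = term.startswith("NOT ")
            name = term[4:] if negated else term
            # First contact loads the rung; later contacts AND into it.
            op = ("LDN" if negated else "LD") if i == 0 else ("ANDN" if negated else "AND")
            ops.append(f"{op} {name}")
        ops.append(f"ST {output}")
        rungs.append("\n".join(ops))
    return "\n\n".join(rungs)

spec = [("ValveOpen", ["TankPressureOK", "NOT EStop"])]
print(to_ladder_text(spec))
```

Keeping the specification declarative and generating the rungs mechanically is what buys the consistency and reviewability the paper attributes to code generation.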

  8. Development of a calibrated software reliability model for flight and supporting ground software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1991-01-01

    The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, the results obtained in modeling the cumulative number of failures versus execution time showed fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If the model agrees with actual historical behavior for a set of data then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
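The kind of fit such models perform on cumulative failures versus time can be sketched with the Goel-Okumoto NHPP form and a crude grid search over invented weekly counts. Real tools such as SMERFS use maximum-likelihood estimation; this is only an illustration of the modeling idea, including the "defects remaining" estimate the abstract mentions:

```python
import math

def goel_okumoto(a, b, t):
    """Expected cumulative failures by time t under the Goel-Okumoto NHPP
    model: mu(t) = a * (1 - exp(-b*t)), where a is the eventual total
    defect count and b the per-defect detection rate."""
    return a * (1.0 - math.exp(-b * t))

def fit_grid(times, failures):
    """Least-squares fit by brute-force grid search (illustration only)."""
    best = None
    for a in range(int(max(failures)), int(max(failures)) * 3):
        for b1000 in range(1, 500):
            b = b1000 / 1000.0
            sse = sum((goel_okumoto(a, b, t) - y) ** 2 for t, y in zip(times, failures))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Hypothetical weekly cumulative failure counts:
weeks = [1, 2, 4, 6, 8, 10, 12]
cum_failures = [8, 15, 26, 33, 38, 41, 43]
a, b = fit_grid(weeks, cum_failures)
print(f"estimated total defects a={a}, detection rate b={b}")
print(f"estimated defects remaining: {a - cum_failures[-1]}")
```
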

  9. Cargo Movement Operations System (CMOS) Final Software User’s Manual

    DTIC Science & Technology

    1990-12-20


  10. "Test" is a Four Letter Word

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G M

    2005-05-03

    For a number of years I had the pleasure of teaching Testing Seminars all over the world and meeting and learning from others in our field. Over a twelve year period, I always asked the following questions to Software Developers, Test Engineers, and Managers who took my two or three day seminar on Software Testing: 'When was the first time you heard the word test'? 'Where were you when you first heard the word test'? 'Who said the word test'? 'How did the word test make you feel'? Most of the thousands of responses were similar to 'It was my third grade teacher at school, and I felt nervous and afraid'. Now there were a few exceptions like 'It was my third grade teacher, and I was happy and excited to show how smart I was'. But by and large, my informal survey found that 'testing' is a word to which most people attach negative meanings, based on its historical context. So why is this important to those of us in the software development business? Because I have found that a preponderance of software developers do not get real excited about hearing that the software they just wrote is going to be 'tested' by the Test Group. Typical reactions I have heard over the years run from: 'I'm sure there is nothing wrong with the software, so go ahead and test it, better you find defects than our customers'. to these extremes: 'There is no need to test my software because there is nothing wrong with it'. 'You are not qualified to test my software because you don't know as much as I do about it'. 'If any Test Engineers come into our office again to test our software we will throw them through the third floor window'. So why is there such a strong negative reaction to testing? It is primitive. It goes back to grade school for many of us. It is a negative word that conjures up negative emotions. In other words, 'test' is a four letter word. How many of us associate 'Joy' with 'Test'? Not many. It is hard for most of us to reprogram associations learned at an early age. 
So what can we do about it (short of hypnotic therapy for software developers)? Well one concept I have used (and still use) is to not call testing 'testing'. Call it something else. Ever wonder why most of the Independent Software Testing groups are called Software Quality Assurance groups? Now you know. Software Quality Assurance is not such a negatively charged phrase, even though Software Quality Assurance is much more than simply testing. It was a real blessing when the concept of Validation and Verification came about for software. Now I define Validation to mean assuring that the product produced does the right thing (usually what the customer wants it to do), and verification means that the product was built the right way (in accordance with some good design principles and practices). So I have deliberately called the System Test Group the Verification and Validation Group, or V&V Group, as a way of avoiding the negative image problem. I remember once having a conversation with a developer colleague who said, in the heat of battle, that it was fine to V&V his code, just don't test it! Once again V&V includes many things besides testing, but it just doesn't sound like an onerous thing to do to software. In my current job, working at a highly regarded national laboratory with world renowned physicists, I have again encountered the negativity about testing software. Except here they don't take kindly to Software Quality Assurance or Software Verification and Validation either. After all, software is just a trivial tool to automate algorithms that implement physics models. Testing, SQA, and V&V take time and get in the way of completing ground breaking science experiments. So I have again had to change the name of software testing to something less negative in the physics world. I found (the hard way) that if I requested more time to do software experimentation, the physicist's resistance melted. 
And so the conversation continues, 'We have time to run more software experiments. Just don't waste any time testing the software'! In case the concept of not calling testing 'testing' appeals to you, and there may be an opportunity for you to take the sting out of the name at your place of employment, I have compiled a table of things that testing could be called besides 'testing'. Of course we can embellish this by adding some good-sounding prefixes and suffixes also. To come up with alternate names for testing, pick a word from columns A, B, and C in the table below. For instance, Unified Acceptance Trials (A2, B7, C3) or Tailored Observational Demonstration (A6, B5, C5) or Agile Criteria Scoring (A3, B8, C8) or Rapid Requirement Proof (A1, B9, C7) or Satisfaction Assurance (B10, C1). You can probably think of some additional combinations appropriate for your industry.

  11. Using software metrics and software reliability models to attain acceptable quality software for flight and ground support software for avionic systems

    NASA Technical Reports Server (NTRS)

    Lawrence, Stella

    1992-01-01

    This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure-free operation of a computer program for a specified time and environment.
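The definition above has a textbook closed form under the simplest possible assumption, a constant failure rate, in which case reliability decays exponentially with mission time. The numbers below are illustrative, not from the paper:

```python
import math

def reliability(failure_rate_per_hour, hours):
    """Probability of failure-free operation over the given period, assuming
    a constant failure rate (exponential model): R(t) = exp(-lambda * t).
    Real software reliability models are considerably more elaborate."""
    return math.exp(-failure_rate_per_hour * hours)

# Hypothetical: 1 failure per 1000 operating hours, 24-hour mission.
print(round(reliability(0.001, 24), 4))
```
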

  12. Data flow modeling techniques

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing, and validating computer systems, digital systems, and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment in which highly parallel complex systems can be defined, evaluated at all levels, and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
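The firing rule that data flow models share, in which a node executes once tokens are present on all of its input arcs, can be sketched minimally. The graph, node names, and scheduler below are invented for illustration:

```python
def simulate(nodes, tokens):
    """nodes: {name: (input_arcs, output_arc, fn)}; tokens: {arc: value}.
    Repeatedly fire any enabled node (all input tokens present, output arc
    empty), consuming its inputs and producing one output token, until no
    node can fire."""
    fired = True
    while fired:
        fired = False
        for name, (inputs, output, fn) in nodes.items():
            if all(arc in tokens for arc in inputs) and output not in tokens:
                args = [tokens.pop(arc) for arc in inputs]
                tokens[output] = fn(*args)
                fired = True
    return tokens

graph = {
    "add": (["a", "b"], "s", lambda x, y: x + y),
    "square": (["s"], "out", lambda x: x * x),
}
print(simulate(graph, {"a": 3, "b": 4}))  # tokens flow through add, then square
```

Because firing depends only on token availability, the same description serves as a software simulation or a blueprint for parallel hardware, which is the uniformity the abstract emphasizes.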

  13. Cargo Movement Operations System (CMOS). Software Design Document

    DTIC Science & Technology

    1990-04-29

    order. RATIONALE: N/A CMOS PMO ACCEPTS COMMENT: YES [ ] NO [ ] ERCI ACCEPTS COMMENT: YES [ ] NO [ ] COMMENT DISPOSITION: ACCEPT [ ] REJECT [ ] COMMENT...inadvertently omitted from the table. CMOS PMO ACCEPTS COMMENT: YES [ ] NO [ ] ERCI ACCEPTS COMMENT: YES [ ] NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN...YES [ ] NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN [ ] CLOSED [ ] ORIGINATOR CONTROL NUMBER: SDDI-0005 PROGRAM OFFICE CONTROL NUMBER: DATA ITEM

  14. Software for the EVLA

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; van Moorsel, Gustaaf; Tody, Doug

    2004-09-01

    The Expanded Very Large Array (EVLA) project is the next-generation instrument for high resolution long-millimeter to short-meter wavelength radio astronomy. It is currently funded by NSF, with completion scheduled for 2012. The EVLA will upgrade the VLA with new feeds, receivers, data transmission hardware, correlator, and a new software system to enable the instrument to achieve its full potential. This software includes both that required for controlling and monitoring the instrument and that involved with the scientific dataflow. We concentrate here on a portion of the dataflow software, including: proposal preparation, submission, and handling; observation preparation, scheduling, and remote monitoring; data archiving; and data post-processing, including both automated (pipeline) and manual processing. The primary goals of the software are: to maximize the scientific return of the EVLA; to provide ease of use for both novices and experts; and to exploit commonality amongst all NRAO telescopes where possible. This last point is both a bane and a blessing: we are not at liberty to do whatever we want in the software, but on the other hand we may borrow from other projects (notably ALMA and GBT) where appropriate. The software design methodology includes detailed initial use-cases and requirements from the scientists, intimate interaction between the scientists and the programmers during design and implementation, and a thorough testing and acceptance plan.

  15. GTest: a software tool for graphical assessment of empirical distributions' Gaussianity.

    PubMed

    Barca, E; Bruno, E; Bruno, D E; Passarella, G

    2016-03-01

    In the present paper, the novel software GTest is introduced, designed for testing the normality of a user-specified empirical distribution. It has been implemented with two unusual characteristics: the first is the user option of selecting four different versions of the normality test, each of them suited to a specific dataset or goal, and the second is the inferential paradigm that informs the output of such tests: it is basically graphical and intrinsically self-explanatory. The concept of inference-by-eye is an emerging inferential approach which will find successful application in the near future due to the growing need to widen the audience of users of statistical methods to people with informal statistical skills. For instance, the latest European regulation concerning environmental issues introduced strict protocols for data handling (data quality assurance, outlier detection, etc.) and information exchange (areal statistics, trend detection, etc.) between regional and central environmental agencies. Therefore, more and more frequently, laboratory and field technicians will be asked to use complex software applications to subject data coming from monitoring, surveying or laboratory activities to specific statistical analyses. Unfortunately, inferential statistics, which actually influence the decision processes for the correct management of environmental resources, are often implemented in a way that expresses their outcomes in numerical form with brief comments in strict statistical jargon (degrees of freedom, level of significance, accepted/rejected H0, etc.). The interpretation of such outcomes is therefore often difficult for people with limited statistical knowledge. In this framework, the paradigm of visual inference can help fill this gap, providing outcomes in self-explanatory graphical forms with a brief comment in plain language.
Indeed, the difficulties experienced by colleagues, and their requests for an effective tool to address them, motivated us to adopt the inference-by-eye paradigm and to implement an easy-to-use, quick and reliable statistical tool. GTest visualizes its outcomes as a modified version of the Q-Q plot. The application has been developed in Visual Basic for Applications (VBA) within MS Excel 2010, which proved to have all the robustness and reliability needed. GTest provides true graphical normality tests which are as reliable as any quantitative statistical approach but much easier to understand. The Q-Q plots have been integrated with the outlining of an acceptance region around the representation of the theoretical distribution, defined in accordance with the alpha level of significance and the data sample size. The test decision rule is the following: if the empirical scatterplot falls completely within the acceptance region, then it can be concluded that the empirical distribution fits the theoretical one at the given alpha level. A comprehensive case study has been carried out with simulated and real-world data in order to check the robustness and reliability of the software.
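The acceptance-region decision rule described above can be sketched as follows. GTest itself is VBA and the abstract does not give its exact band construction, so this Python sketch uses one common construction: a pointwise band from the large-sample variance of normal order statistics. Function names are hypothetical.

```python
import math
from statistics import NormalDist

def qq_with_band(sample, alpha=0.05):
    """Q-Q points against a fitted normal, plus a pointwise acceptance band
    from the normal approximation to the order-statistic variance.
    Returns (theoretical, empirical, band_low, band_high) per point."""
    n = len(sample)
    xs = sorted(sample)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    points = []
    for i, x in enumerate(xs, start=1):
        p = (i - 0.5) / n                    # plotting position
        q = nd.inv_cdf(p)                    # standard-normal quantile
        theo = mean + sd * q                 # fitted theoretical quantile
        se = sd * math.sqrt(p * (1 - p) / n) / nd.pdf(q)
        points.append((theo, x, theo - z_crit * se, theo + z_crit * se))
    return points

def inside_band(points):
    """The decision rule from the abstract: accept normality iff every
    empirical quantile falls inside its acceptance band."""
    return all(lo <= emp <= hi for _, emp, lo, hi in points)
```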

  16. Comparison of methods for quantitative evaluation of endoscopic distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua

    2015-03-01

    Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and with the commercial software, and the results were used to obtain corrected images. Corrected images based on the ML approach and the commercial software were then compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality assurance, and assurance of performance during clinical use of endoscopic technologies.
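The local-magnification idea can be illustrated with a toy radial model. The paper's actual ML algorithm operates on imaged grid targets and is not specified in this abstract; the finite-difference estimate and the barrel-distortion coefficient below are assumptions for demonstration only.

```python
def local_magnifications(radii_true, radii_measured):
    """Estimate local magnification ML between adjacent grid points as the
    ratio of measured to true radial increments (a finite-difference
    approximation to d r_distorted / d r_undistorted)."""
    return [(radii_measured[i + 1] - radii_measured[i]) /
            (radii_true[i + 1] - radii_true[i])
            for i in range(len(radii_true) - 1)]

# Toy barrel distortion r_d = r_u * (1 + k * r_u**2), with an assumed k.
k = -0.01
radii_true = [0.0, 1.0, 2.0, 3.0]            # grid-dot radii, arbitrary units
radii_measured = [r * (1 + k * r ** 2) for r in radii_true]
ml = local_magnifications(radii_true, radii_measured)
# ML falls off toward the image edge, the signature of barrel distortion.
```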

  17. 10 CFR 603.550 - Acceptability of intellectual property.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... property (e.g., copyrighted material, including software) as cost sharing because: (1) It is difficult to... the contribution. For example, a for-profit firm may offer the use of commercially available software... the software would not be a reasonable basis for valuing its use. ...

  18. 10 CFR 603.550 - Acceptability of intellectual property.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... property (e.g., copyrighted material, including software) as cost sharing because: (1) It is difficult to... the contribution. For example, a for-profit firm may offer the use of commercially available software... the software would not be a reasonable basis for valuing its use. ...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, K.; Tsai, H.; Decision and Information Sciences

    The technical basis for extending the Model 9977 shipping package periodic maintenance beyond the one-year interval to a maximum of five years is based on the performance of the O-ring seals and the environmental conditions. The DOE Packaging Certification Program (PCP) has tasked Argonne National Laboratory to develop a Radio-Frequency Identification (RFID) temperature monitoring system for use by the facility personnel at DAF/NTS. The RFID temperature monitoring system, depicted in the figure below, consists of the Mk-1 RFID tags, a reader, and a control computer mounted on a mobile platform that can operate as a stand-alone system, or it can be connected to the local IT network. As part of the Conditions of Approval of the CoC, the user must complete the prescribed training to become qualified and be certified for operation of the RFID temperature monitoring system. The training course will be administered by Argonne National Laboratory on behalf of the Headquarters Certifying Official. This is a complete documentation package for the RFID temperature monitoring system of the Model 9977 packagings at NTS. The documentation package will be used for training and certification. 
The table of contents is: Acceptance Testing Procedure of MK-1 RFID Tags for DOE/EM Nuclear Materials Management Applications; Acceptance Testing Result of MK-1 RFID Tags for DOE/EM Nuclear Materials Management Applications; Performance Test of the Single Bolt Seal Sensor for the Model 9977 Packaging; Calibration of Built-in Thermistors in RFID Tags for Nevada Test Site; Results of Calibration of Built-in Thermistors in RFID Tags; Results of Thermal Calibration of Second Batch of MK-I RFID Tags; Procedure for Installing and Removing MK-1 RFID Tag on Model 9977 Drum; User Guide for RFID Reader and Software for Temperature Monitoring of Model 9977 Drums at NTS; Software Quality Assurance Plan (SQAP) for the ARG-US System; Quality Category for the RFID Temperature Monitoring System; The Documentation Package for the RFID Temperature Monitoring System; Software Test Plan and Results for ARG-US OnSite; Configuration Management Plan (CMP) for the ARG-US System; Requirements Management Plan for the ARG-US System; and Design Management Plan for ARG-US.

  20. Reliability of Single-Leg Balance and Landing Tests in Rugby Union; Prospect of Using Postural Control to Monitor Fatigue

    PubMed Central

    Troester, Jordan C.; Jasmin, Jason G.; Duffield, Rob

    2018-01-01

    The present study examined the inter-trial (within test) and inter-test (between test) reliability of single-leg balance and single-leg landing measures performed on a force plate in professional rugby union players using commercially available software (SpartaMARS, Menlo Park, USA). Twenty-four players undertook test – re-test measures on two occasions (7 days apart) on the first training day of two respective pre-season weeks following 48h rest and similar weekly training loads. Two 20s single-leg balance trials were performed on a force plate with eyes closed. Three single-leg landing trials were performed by jumping off two feet and landing on one foot in the middle of a force plate 1m from the starting position. Single-leg balance results demonstrated acceptable inter-trial reliability (ICC = 0.60-0.81, CV = 11-13%) for sway velocity, anterior-posterior sway velocity, and mediolateral sway velocity variables. Acceptable inter-test reliability (ICC = 0.61-0.89, CV = 7-13%) was evident for all variables except mediolateral sway velocity on the dominant leg (ICC = 0.41, CV = 15%). Single-leg landing results only demonstrated acceptable inter-trial reliability for force based measures of relative peak landing force and impulse (ICC = 0.54-0.72, CV = 9-15%). Inter-test results indicate improved reliability through the averaging of three trials with force based measures again demonstrating acceptable reliability (ICC = 0.58-0.71, CV = 7-14%). Of the variables investigated here, total sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing performance, respectively. These measures should be considered for monitoring potential changes in postural control in professional rugby union. Key points Single-leg balance demonstrated acceptable inter-trial and inter-test reliability. 
Single-leg landing demonstrated good inter-trial and inter-test reliability for measures of relative peak landing force and relative impulse, but not time to stabilization. Of the variables investigated, sway velocity and relative landing impulse are the most reliable measures of single-leg balance and landing respectively, and should be considered for monitoring changes in postural control. PMID:29769817
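For readers unfamiliar with the reliability statistics quoted above (ICC, CV), here is a compact sketch of one common formulation, ICC(3,1) plus a mean within-subject CV. The study's software may compute a different ICC model, so treat this as illustrative only.

```python
import statistics

def icc_3_1(data):
    """Two-way mixed-effects, consistency, single-measures ICC(3,1).
    data: one row per subject, k repeated trials per row."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    trial_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_trial = n * sum((m - grand) ** 2 for m in trial_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    bms = ss_subj / (n - 1)                                  # between-subjects MS
    ems = (ss_total - ss_subj - ss_trial) / ((n - 1) * (k - 1))  # residual MS
    return (bms - ems) / (bms + (k - 1) * ems)

def mean_cv_percent(data):
    """Mean within-subject coefficient of variation, as a percentage."""
    cvs = [statistics.stdev(row) / statistics.mean(row) * 100 for row in data]
    return sum(cvs) / len(cvs)
```

With perfectly repeatable trials the ICC is 1 and the CV is 0; measurement noise pulls the ICC down and the CV up, which is what the acceptability thresholds in the abstract are screening for.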

  1. Cargo Movement Operations System (CMOS): Revised Preliminary Software Design Document (Applications CSCI), Increment II

    DTIC Science & Technology

    1991-05-23


  2. Cargo Movement Operations System (CMOS). Software Requirements Specification (Applications CSCI) Increment 1, Update

    DTIC Science & Technology

    1990-05-31


  3. A usability evaluation of medical software at an expert conference setting.

    PubMed

    Bond, Raymond Robert; Finlay, Dewar D; Nugent, Chris D; Moore, George; Guldenring, Daniel

    2014-01-01

    A usability test was employed to evaluate two medical software applications at an expert conference setting. One software application is a medical diagnostic tool (electrocardiogram [ECG] viewer) and the other is a medical research tool (electrode misplacement simulator [EMS]). These novel applications have yet to be adopted by the healthcare domain, thus, (1) we wanted to determine the potential user acceptance of these applications and (2) we wanted to determine the feasibility of evaluating medical diagnostic and medical research software at a conference setting as opposed to the conventional laboratory setting. The medical diagnostic tool (ECG viewer) was evaluated using seven delegates and the medical research tool (EMS) was evaluated using 17 delegates who were recruited at the 2010 International Conference on Computing in Cardiology. Each delegate/participant was required to use the software and undertake a set of predefined tasks during the session breaks at the conference. User interactions with the software were recorded using screen-recording software. The 'think-aloud' protocol was also used to elicit verbal feedback from the participants whilst they attempted the predefined tasks. Before and after each session, participants completed a pre-test and a post-test questionnaire respectively. The average duration of a usability session at the conference was 34.69 min (SD=10.28). However, taking into account that 10 min was dedicated to the pre-test and post-test questionnaires, the average time dedicated to user interaction with the medical software was 24.69 min (SD=10.28). Given that we have shown that usability data can be collected at conferences, this paper details the advantages of conference-based usability studies over the laboratory-based approach. For example, given that delegates gather at one geographical location, a conference-based usability evaluation facilitates recruitment of a convenient sample of international subject experts. 
This would otherwise be very expensive to arrange. A conference-based approach also allows for data to be collected over a few days as opposed to months by avoiding the administration duties normally involved in a laboratory-based approach, e.g. mailing invitation letters as part of a recruitment campaign. Following analysis of the user video recordings, 41 (previously unknown) use errors were identified in the advanced ECG viewer and 29 were identified in the EMS application. All use errors were given a consensus severity rating from two independent usability experts. On a rating scale of 1 to 4 (where 1=cosmetic and 4=critical), the average severity rating for the ECG viewer was 2.24 (SD=1.09) and the average severity rating for the EMS application was 2.34 (SD=0.97). We were also able to extract task completion rates and times from the video recordings to determine the effectiveness of the software applications. For example, six out of seven tasks were completed by all participants when using both applications. This statistic alone suggests both applications already have a high degree of usability. As well as extracting data from the video recordings, we were also able to extract data from the questionnaires. Using a semantic differential scale (where 1=poor and 5=excellent), delegates highly rated the 'responsiveness', 'usefulness', 'learnability' and the 'look and feel' of both applications. This study has shown the potential user acceptance and user-friendliness of the novel EMS and ECG viewer applications within the healthcare domain. It has also shown that both medical diagnostic software and medical research software can be evaluated for their usability at an expert conference setting. The primary advantage of a conference-based usability evaluation over a laboratory-based evaluation is the high concentration of experts at one location, which is convenient, less time-consuming and less expensive. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Development and Validation of an Interactive Internet Platform for Older People: The Healthy Ageing Through Internet Counselling in the Elderly Study.

    PubMed

    Jongstra, Susan; Beishuizen, Cathrien; Andrieu, Sandrine; Barbera, Mariagnese; van Dorp, Matthijs; van de Groep, Bram; Guillemont, Juliette; Mangialasche, Francesca; van Middelaar, Tessa; Moll van Charante, Eric; Soininen, Hilkka; Kivipelto, Miia; Richard, Edo

    2017-02-01

    A myriad of Web-based applications on self-management have been developed, but few focus on older people. In the face of global aging, older people form an important target population for cardiovascular prevention. This article describes the full development of an interactive Internet platform for older people, which was designed for the Healthy Ageing Through Internet Counselling in the Elderly (HATICE) study. We provide recommendations to design senior-friendly Web-based applications for a new approach to multicomponent cardiovascular prevention. The development of the platform followed five phases: (1) conceptual framework; (2) platform concept and functional design; (3) platform building (software and content); (4) testing and pilot study; and (5) final product. We performed a meta-analysis, reviewed guidelines for cardiovascular diseases, and consulted end users, experts, and software developers to create the platform concept and content. The software was built in iterative cycles. In the pilot study, 41 people aged ≥65 years used the platform for 8 weeks. Participants used the interactive features of the platform and appreciated the coach support. During all phases adjustments were made to incorporate all improvements from the previous phases. The final platform is a personal, secured, and interactive platform supported by a coach. When carefully designed, an interactive Internet platform is acceptable and feasible for use by older people with basic computer skills. To improve acceptability by older people, we recommend involving the end users in the process of development, to personalize the platform and to combine the application with human support. The interactive HATICE platform will be tested for efficacy in a multinational randomized controlled trial (ISRCTN48151589).

  5. CrossTalk: The Journal of Defense Software Engineering. Volume 24, Number 2, March/April 2011

    DTIC Science & Technology

    2011-04-01

    and insider at- tacks, we plan to conduct experiments and collect concrete and empirical evidence. As we have done in prior research projects [11...subsequent service failure.” Yet, a faulty state can continue to render service; an er- roneous state cannot. Consider a system that receives concrete ...that does not satisfy specifications. The faults in the concrete are not detected during (faulty) acceptance testing. A two-deck bridge is built using

  6. Stepwise Development of a Text Messaging-Based Bullying Prevention Program for Middle School Students (BullyDown)

    PubMed Central

    Prescott, Tonya L; Espelage, Dorothy L

    2016-01-01

    Background Bullying is a significant public health issue among middle school-aged youth. Current prevention programs have only a moderate impact. Cell phone text messaging technology (mHealth) can potentially overcome existing challenges, particularly those that are structural (e.g., limited time that teachers can devote to non-educational topics). To date, descriptions of the development of empirically-based mHealth-delivered bullying prevention programs are lacking in the literature. Objective To describe the development of BullyDown, a text messaging-based bullying prevention program for middle school students, guided by the Social-Emotional Learning model. Methods We implemented five activities over a 12-month period: (1) national focus groups (n=37 youth) to gather acceptability of program components; (2) development of content; (3) a national Content Advisory Team (n=9 youth) to confirm content tone; (4) an internal team test of software functionality; and (5) a beta test (n=22 youth) to confirm the enrollment protocol and the feasibility and acceptability of the program. Results Recruitment experiences suggested that Facebook advertising was less efficient than using a recruitment firm to recruit youth nationally, and recruiting within schools for the pilot test was feasible. Feedback from the Content Advisory Team suggests a preference for 2-4 brief text messages per day. Beta test findings suggest that BullyDown is both feasible and acceptable: 100% of youth completed the follow-up survey, 86% of whom liked the program. Conclusions Text messaging appears to be a feasible and acceptable delivery method for bullying prevention programming delivered to middle school students. PMID:27296471

  7. Stepwise Development of a Text Messaging-Based Bullying Prevention Program for Middle School Students (BullyDown).

    PubMed

    Ybarra, Michele L; Prescott, Tonya L; Espelage, Dorothy L

    2016-06-13

    Bullying is a significant public health issue among middle school-aged youth. Current prevention programs have only a moderate impact. Cell phone text messaging technology (mHealth) can potentially overcome existing challenges, particularly those that are structural (e.g., limited time that teachers can devote to non-educational topics). To date, descriptions of the development of empirically-based mHealth-delivered bullying prevention programs are lacking in the literature. To describe the development of BullyDown, a text messaging-based bullying prevention program for middle school students, guided by the Social-Emotional Learning model. We implemented five activities over a 12-month period: (1) national focus groups (n=37 youth) to gather acceptability of program components; (2) development of content; (3) a national Content Advisory Team (n=9 youth) to confirm content tone; (4) an internal team test of software functionality; and (5) a beta test (n=22 youth) to confirm the enrollment protocol and the feasibility and acceptability of the program. Recruitment experiences suggested that Facebook advertising was less efficient than using a recruitment firm to recruit youth nationally, and recruiting within schools for the pilot test was feasible. Feedback from the Content Advisory Team suggests a preference for 2-4 brief text messages per day. Beta test findings suggest that BullyDown is both feasible and acceptable: 100% of youth completed the follow-up survey, 86% of whom liked the program. Text messaging appears to be a feasible and acceptable delivery method for bullying prevention programming delivered to middle school students.

  8. Cargo Movement Operations System (CMOS). Final Software Design Document. Increment III. (PC Unix - Air Force Configuration)

    DTIC Science & Technology

    1991-07-03


  9. Integration of the instrument control electronics for the ESPRESSO spectrograph at ESO-VLT

    NASA Astrophysics Data System (ADS)

    Baldini, V.; Calderone, G.; Cirami, R.; Coretti, I.; Cristiani, S.; Di Marcantonio, P.; Mégevand, D.; Riva, M.; Santin, P.

    2016-07-01

    ESPRESSO, the Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations at the ESO Very Large Telescope site, is now in its integration phase. The many functions of this complex instrument are fully controlled by a Beckhoff PLC-based control electronics architecture. Four small cabinets and one large cabinet host the main electronic parts to control all the sensors, motorized stages and other analogue and digital functions of ESPRESSO. The Instrument Control Electronics (ICE) is built following the latest ESO standards and requirements. Two main PLC CPUs are used and are programmed through the TwinCAT Beckhoff dedicated software. The assembly, integration and verification phase of ESPRESSO, due to its distributed nature and the different geographical locations of the consortium partners, is quite challenging. After the preliminary assembling and test of the electronic components at the Astronomical Observatory of Trieste and the test of some electronics and software parts at ESO (Garching), the complete system for the control of the four Front End Unit (FEU) arms of ESPRESSO has been fully assembled and tested in Merate (Italy) at the beginning of 2016. After these first tests, the system will be located at the Geneva Observatory (Switzerland) until the Preliminary Acceptance Europe (PAE) and finally shipped to Chile for the commissioning. This paper describes the integration strategy of the ICE workpackage of ESPRESSO, and the hardware and software tests that have been performed, with an overall view of the experience gained during these phases of the project.

  10. Cargo Movement Operations System (CMOS). Draft Software Programmer’s Manual

    DTIC Science & Technology

    1990-07-12


  11. Cargo Movement Operations System (CMOS). Updated Draft Software User’s Manual. Increment I

    DTIC Science & Technology

    1991-03-22


  12. VLBI Analysis with the Multi-Technique Software GEOSAT

    NASA Technical Reports Server (NTRS)

    Kierulf, Halfdan Pascal; Andersen, Per-Helge; Boeckmann, Sarah; Kristiansen, Oddgeir

    2010-01-01

    GEOSAT is a multi-technique geodetic analysis software developed at Forsvarets Forsknings Institutt (Norwegian defense research establishment). The Norwegian Mapping Authority has now installed the software and has, together with Forsvarets Forsknings Institutt, adapted the software to deliver datum-free normal equation systems in SINEX format. The goal is to be accepted as an IVS Associate Analysis Center and to provide contributions to the IVS EOP combination on a routine basis. GEOSAT is based on an upper diagonal factorized Kalman filter which allows estimation of time variable parameters like the troposphere and clocks as stochastic parameters. The tropospheric delays in various directions are mapped to tropospheric zenith delay using ray-tracing. Meteorological data from ECMWF with a resolution of six hours is used to perform the ray-tracing which depends both on elevation and azimuth. Other models are following the IERS and IVS conventions. The Norwegian Mapping Authority has submitted test SINEX files produced with GEOSAT to IVS. The results have been compared with the existing IVS combined products. In this paper the outcome of these comparisons is presented.

  13. Metric analysis and data validation across FORTRAN projects

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Selby, Richard W., Jr.; Phillips, Tsai-Yun

    1983-01-01

    The desire to predict the effort in developing software, or to explain its quality, has led to the proposal of several metrics. As a step toward validating these metrics, the Software Engineering Laboratory (SEL) has analyzed the software science metrics, cyclomatic complexity, and various standard program measures for their relation to effort (including design through acceptance testing), development errors (both discrete and weighted according to the amount of time to locate and fix), and one another. The data investigated are collected from a production FORTRAN environment and examined across several projects at once, within individual projects, and by reporting accuracy checks demonstrating the need to validate a database. When the data come from individual programmers or certain validated projects, the metrics' correlations with actual effort seem to be strongest. For modules developed entirely by individual programmers, the validity ratios induce a statistically significant ordering of several of the metrics' correlations. When comparing the strongest correlations, neither software science's E metric, cyclomatic complexity, nor source lines of code appears to relate convincingly better with effort than the others.
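The correlation analysis this abstract describes can be sketched in a few lines. The per-module numbers below are hypothetical, invented for illustration; they are not SEL data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module measurements (not SEL data):
sloc   = [120, 300, 450, 80, 600]     # source lines of code
cyclo  = [6, 14, 20, 4, 25]           # cyclomatic complexity
effort = [10, 28, 40, 7, 55]          # development effort, hours

r_sloc  = pearson(sloc, effort)
r_cyclo = pearson(cyclo, effort)
```

Comparing `r_sloc` and `r_cyclo` against effort, per metric and per project subset, is the kind of validation the SEL performed; as the abstract notes, on real data no single metric correlated convincingly better than the others.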

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    HUBER, J.H.

    An Enraf Densitometer is installed on tank 241-AY-102. The Densitometer will frequently be tasked to obtain and log density profiles. The activity can be effected in a number of ways. Enraf Incorporated provides a software package called "Logger18" to its customers for the purpose of in-shop testing of their gauges. Logger18 is capable of accepting an input file which can direct the gauge to obtain a density profile for a given tank level and bottom limit. Logger18 is a complex, DOS-based program which will require trained technicians and/or tank farm entries to obtain the data. ALARA considerations have prompted the development of a more user-friendly, computer-based interface to the Enraf densitometers. This document records the plan by which this new Enraf data acquisition software will be developed, reviewed, verified, and released. This plan applies to the development and implementation of a one-time-use software program, which will be called "Enraf Control Panel." The software will be primarily used for remote operation of Enraf Densitometers for the purpose of obtaining and logging tank product density profiles.

  15. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology called extreme programming, which has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. 
However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.
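The "write the test before the code" practice at the heart of extreme programming can be illustrated with a minimal sketch. The class name and behavior below are invented for illustration and are not Maestro code; the point is only the ordering: the test states the expected behavior first, then the simplest implementation that passes it is written.

```python
# Minimal test-first sketch: the test is written before the implementation.
# The Rover class and its behavior are hypothetical, chosen only to
# illustrate the red-green cycle of test-driven development.

def test_rover_position_after_move():
    rover = Rover(x=0, y=0)
    rover.move(dx=2, dy=3)
    assert (rover.x, rover.y) == (2, 3)

# The simplest implementation that makes the test above pass:
class Rover:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

test_rover_position_after_move()  # passes silently
```

Each such test then remains in the suite as regression protection when the code is later rewritten or simplified.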

  16. Evaluate the Usability of the Mobile Instant Messaging Software in the Elderly.

    PubMed

    Wen, Tzu-Ning; Cheng, Po-Liang; Chang, Po-Lun

    2017-01-01

    Instant messaging (IM) is a form of online chat that provides real-time text transmission over the Internet and has become one of the most popular communication tools. Even in the current era of smartphones, it remains a great challenge to teach and encourage the elderly to use smartphones, and the acceptance of IM among the elderly remains unknown. This study describes the usability and evaluates the acceptance of IM among elderly first-time smartphone users. The study used a quasi-experimental design and ran from October 2012 to December 2013. A total of 41 elderly participants were recruited, all of whom were using the LINE app on a smartphone for the first time. Usability was evaluated using the Technology Acceptance Model, which consisted of four constructs: cognitive usefulness, cognitive ease of use, attitude, and willingness to use. Overall, the elderly rated "attitude" toward the LINE communication software highest, averaging 4.07 points across the four constructs, followed by "cognitive usefulness" with an average of 4 points. The scores for "cognitive ease of use" and "willingness to use" were equal, both averaging 3.86. This can be interpreted to mean that (1) the elders regarded the LINE app as an excellent communication tool; (2) they found the software useful; and (3) it was convenient for them to communicate. However, additional assistance and explanation were necessary for certain functions, such as the options, which would play a great role in "willingness to use". The positive acceptance of the LINE app among the elderly suggests a probably similar acceptance of other communication software. Encouraging the elderly to explore more technology products and understanding their behavior will provide the basic knowledge needed to develop further software.
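The construct scores reported above are simple means of Likert-scale responses grouped by Technology Acceptance Model construct. A sketch of that aggregation, with invented response data that only approximates the study's figures:

```python
# Hedged sketch: averaging 5-point Likert responses per Technology
# Acceptance Model construct, as in the study above. The response
# data are invented and only roughly approximate the reported means.

responses = {
    "attitude":             [4, 4, 5, 4, 3, 4, 5, 4, 4, 4],
    "cognitive_usefulness": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
    "ease_of_use":          [4, 4, 4, 3, 4, 4, 4, 4, 4, 4],
    "willingness_to_use":   [4, 3, 4, 4, 4, 4, 4, 4, 4, 4],
}

means = {construct: sum(v) / len(v) for construct, v in responses.items()}
for construct, mean in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{construct}: {mean:.2f}")
```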

  17. DOCLIB: a software library for document processing

    NASA Astrophysics Data System (ADS)

    Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit

    2006-01-01

    Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.
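The "factory concept for support of different image types" mentioned above is a standard design pattern: a registry maps file types to concrete classes, so new image formats can be added without touching the loading code. The class names below are hypothetical, not DOCLIB's actual (C++) API; this is only a sketch of the pattern.

```python
# Illustrative sketch of an image-type factory, hypothetical names only.

class Image:
    def __init__(self, path):
        self.path = path

class TiffImage(Image):
    pass

class PngImage(Image):
    pass

class ImageFactory:
    _registry = {}

    @classmethod
    def register(cls, extension, image_cls):
        # Map a file extension to the concrete class that handles it.
        cls._registry[extension] = image_cls

    @classmethod
    def open(cls, path):
        ext = path.rsplit(".", 1)[-1].lower()
        try:
            return cls._registry[ext](path)
        except KeyError:
            raise ValueError(f"unsupported image type: {ext}")

ImageFactory.register("tif", TiffImage)
ImageFactory.register("png", PngImage)

img = ImageFactory.open("scan_0042.tif")
print(type(img).__name__)  # → TiffImage
```

The benefit for a shared library is that contributors register new format handlers without modifying the factory itself.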

  18. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halama, J.

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. 
Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  19. MO-AB-206-00: Nuclear Medicine Physics and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. 
Be familiar with the tests of a SPECT/CT system that include the CT images for SPECT reconstructions. Become knowledgeable of items to be included in annual acceptance testing reports, including CT dosimetry and PACS monitor measurements. T. Turkington, GE Healthcare.

  20. 32 CFR 37.550 - May I accept intellectual property as cost sharing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... software) as cost sharing, because: (1) It is difficult to assign values to these intangible contributions... offer the use of commercially available software for which there is an established license fee for use of the product. The costs of the development of the software would not be a reasonable basis for...

  1. Experiences in Teaching a Graduate Course on Model-Driven Software Development

    ERIC Educational Resources Information Center

    Tekinerdogan, Bedir

    2011-01-01

    Model-driven software development (MDSD) aims to support the development and evolution of software intensive systems using the basic concepts of model, metamodel, and model transformation. In parallel with the ongoing academic research, MDSD is more and more applied in industrial practices. After being accepted both by a broad community of…

  2. 32 CFR 37.550 - May I accept intellectual property as cost sharing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... offer the use of commercially available software for which there is an established license fee for use of the product. The costs of the development of the software would not be a reasonable basis for... software) as cost sharing, because: (1) It is difficult to assign values to these intangible contributions...

  3. Multimedia software to help caregivers cope.

    PubMed

    Chambers, Mary G; Connor, Samantha L; McGonigle, Mary; Diver, Mike G

    2003-01-01

    This report describes the design and evaluation of a software application to help carers cope when faced with caring problems and emergencies. The design process involved users at each stage to ensure the content of the software application was appropriate, and the research team carefully considered the requirements of disabled and elderly users. Focus group discussions and individual interviews were conducted in five European countries to ascertain the needs of caregivers in this area. The findings were used to design a three-part multimedia software application to help family caregivers prepare to cope with sudden, unexpected, and difficult situations that may arise during their time as a caregiver. This prototype then was evaluated via user trials and usability questionnaires to consider the usability and acceptance of the application and any changes that may be required. User acceptance of the software application was high, and the key features of usability such as content, appearance, and navigation were highly rated. In general, comments were positive and enthusiastic regarding the content of the software application and relevance to the caring situation. The software application has the potential to offer information and support to those who are caring for the elderly and disabled at home and to help them prepare for a crisis.

  4. Cargo Movement Operations System (CMOS) Draft Software User’s Manual Increment II

    DTIC Science & Technology

    1991-06-26


  5. Military Standard: Technical Reviews and Audits for Systems, Equipments, and Computer Software

    DTIC Science & Technology

    1985-06-04

    Concept Exploration or Demonstration and Validation phase. Such reviews may be conducted at any time but normally will be conducted after the... method of resolution shall also be reviewed. All proposed environmental tests shall be reviewed for compatibility with the specified natural... accepted. See Attachment _ for comments. Attached is a list of deficiencies. Signature(s) of FCA Team Member(s), Sub-Team Chairperson.

  6. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit

    PubMed Central

    2014-01-01

    Background According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. 
Conclusions The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers. PMID:24655818

  7. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit.

    PubMed

    Höss, Angelika; Lampe, Christian; Panse, Ralf; Ackermann, Benjamin; Naumann, Jakob; Jäkel, Oliver

    2014-03-21

    According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. 
It has been demonstrated that a standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers.

  8. Command system output bit verification

    NASA Technical Reports Server (NTRS)

    Odd, C. W.; Abbate, S. F.

    1981-01-01

    An automatic test was developed to test the ability of the deep space station (DSS) command subsystem and exciter to generate and radiate, from the exciter, the correct idle bit sequence for a given flight project or to store and radiate received command data elements and files without alteration. This test, called the command system output bit verification test, is an extension of the command system performance test (SPT) and can be selected as an SPT option. The test compares the bit stream radiated from the DSS exciter with reference sequences generated by the SPT software program. The command subsystem and exciter are verified when the bit stream and reference sequences are identical. It is a key element of the acceptance testing conducted on the command processor assembly (CPA) operational program (DMC-0584-OP-G) prior to its transfer from development to operations.
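The core comparison described above, checking a radiated bit stream against an independently generated reference sequence, can be sketched briefly. The bit patterns below are invented; the real test compares the exciter output against sequences generated by the SPT software.

```python
# Hedged sketch of the output-bit-verification idea: compare the bit stream
# radiated by the exciter against a reference sequence generated
# independently. The bit patterns here are invented for illustration.

def first_mismatch(radiated, reference):
    """Return the index of the first differing bit, or -1 if identical."""
    if len(radiated) != len(reference):
        return min(len(radiated), len(reference))
    for i, (a, b) in enumerate(zip(radiated, reference)):
        if a != b:
            return i
    return -1

reference = "10110010" * 4          # expected idle-bit sequence
radiated  = "10110010" * 4          # as captured from the exciter

idx = first_mismatch(radiated, reference)
print("verified" if idx == -1 else f"mismatch at bit {idx}")
```

Reporting the position of the first divergence, rather than a bare pass/fail, is what makes such a test useful for diagnosing where the command chain altered the data.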

  9. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    PubMed

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used in part due to the difficulty in using it in the current Excel format. The automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we have developed a new software called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing the time efficiency and the quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by conventional HLAMatchmaker or by automated EpHLA method. Users testing these two methods were asked to record: (i) time required for completion of the analysis (in minutes); (ii) number of eplets obtained for class I and class II HLA molecules; (iii) categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) determination of AMMs based on eplets' reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster in comparison to the conventional HLAMatchmaker method. In particular the EpHLA software was faster and more reliable but equally accurate as the conventional method to define AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus it may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients as well as in many other applications.
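One step the EpHLA workflow automates can be sketched simply: classify eplets as reactive or non-reactive against an MFI cutoff, then treat alleles carrying no reactive eplet as acceptable mismatches. The eplet names, MFI values, and allele table below are invented for illustration and do not reproduce the actual HLAMatchmaker tables.

```python
# Simplified sketch of MFI-cutoff eplet classification and acceptable-
# mismatch determination. All names and values are invented examples.

MFI_CUTOFF = 1000  # assumed cutoff; the study lets the user set this

eplet_mfi = {"62GE": 3500, "65QIA": 450, "76ED": 1800, "80TLR": 200}
reactive = {e for e, mfi in eplet_mfi.items() if mfi >= MFI_CUTOFF}

allele_eplets = {
    "A*02:01": {"62GE", "76ED"},    # carries reactive eplets -> unacceptable
    "A*68:01": {"65QIA", "80TLR"},  # all eplets below cutoff -> acceptable
}

acceptable = [a for a, eplets in allele_eplets.items()
              if not (eplets & reactive)]
print("acceptable mismatches:", acceptable)
```

Automating exactly this kind of table lookup is where the reported 8-fold speedup over the manual Excel workflow plausibly comes from.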

  10. Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.

    2012-01-01

    This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. 
After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.
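One ingredient named above, decomposing a measured surface figure error into model-derived contributions, amounts to a least-squares fit. The mode shapes and measurement below are synthetic; the real tool works from FEM inputs and uses richer machinery (Kalman filtering, quadratic programming) than plain least squares.

```python
# Hedged sketch: fit a measured surface figure error as a least-squares
# combination of influence modes. Synthetic data stand in for FEM results.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes = 200, 3

modes = rng.normal(size=(n_points, n_modes))       # columns: influence modes
true_coeffs = np.array([0.5, -1.2, 0.3])
measured = modes @ true_coeffs + 0.01 * rng.normal(size=n_points)

# Solve min ||modes @ c - measured||^2 for the mode coefficients c
coeffs, *_ = np.linalg.lstsq(modes, measured, rcond=None)
residual = measured - modes @ coeffs
rms_error = np.sqrt(np.mean(residual ** 2))
print("fitted coefficients:", np.round(coeffs, 2))
print(f"post-fit RMS figure error: {rms_error:.4f}")
```

The post-fit RMS residual plays the role of the "best surface figure error" prediction: whatever the fitted phenomena cannot explain remains as irreducible error.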

  11. Method of experimental and calculation determination of dissipative properties of carbon

    NASA Astrophysics Data System (ADS)

    Kazakova, Olga I.; Smolin, Igor Yu.; Bezmozgiy, Iosif M.

    2017-12-01

    This paper describes the process of definition of relations between the damping ratio and strain/state levels in a material. For these purposes, the experimental-calculation approach was applied. The experimental research was performed on plane composite specimens. The tests were accompanied by finite element modeling using the ANSYS software. Optimization was used as a tool for FEM property setting and for finding the above-mentioned relations. A difference between the calculation and experimental results was accepted as objective functions of this optimization. The optimization cycle was implemented using the pSeven DATADVANCE software platform. The developed approach makes it possible to determine the relations between the damping ratio and strain/state levels in the material, which can be used for computer modeling of the structure response under dynamic loading.
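The experiment-calculation matching loop described above, with the calculation-experiment difference as the objective function, can be sketched with a toy model: choose the damping ratio that minimizes the difference between a simulated response and a measured one. The single-mode response model and the "measured" amplitude are synthetic stand-ins for the ANSYS results; a grid search stands in for the pSeven optimization loop.

```python
# Illustrative sketch of damping-ratio identification by minimizing the
# difference between calculated and "measured" response. All values invented.

def peak_amplitude(zeta):
    """Resonant amplification of a 1-DOF oscillator with damping ratio zeta."""
    return 1.0 / (2.0 * zeta)

measured_peak = 12.5                     # hypothetical measured amplification

def objective(zeta):
    # Squared calculation-experiment difference, as in the paper's approach
    return (peak_amplitude(zeta) - measured_peak) ** 2

# Simple grid search over zeta in [0.01, 0.20]
candidates = [z / 1000.0 for z in range(10, 201)]
best_zeta = min(candidates, key=objective)
print(f"identified damping ratio: {best_zeta:.3f}")
```

Repeating this identification at several strain levels yields exactly the damping-ratio-versus-strain relation the paper is after.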

  12. Factors affecting acceptance of provider-initiated HIV testing and counseling services among outpatient clients in selected health facilities in Harar Town, Eastern Ethiopia.

    PubMed

    Abdurahman, Sami; Seyoum, Berhanu; Oljira, Lemessa; Weldegebreal, Fitsum

    2015-01-01

    To improve the slow uptake of HIV counseling and testing, the World Health Organization (WHO) and the Joint United Nations Programme on HIV/AIDS (UNAIDS) have developed draft guidelines on provider-initiated testing and counseling (PITC). Evidence from both low- and high-income countries, mainly from outpatient clinics and tuberculosis settings, indicates that the direct offer of HIV testing by health providers can result in significant improvements in test uptake. In Ethiopia, only a limited number of studies have been conducted on PITC in outpatient clinics. Therefore, in this study, we assessed the factors affecting the acceptance of PITC among outpatient clients in selected health facilities in Harar, Harari Regional State, Ethiopia. Institutional-based, cross-sectional quantitative and qualitative studies were conducted from February 12-30, 2011 in selected health facilities in Harar town. The study participants were recruited from the selected health facilities of Harar using a systematic random sampling technique. The collected data were double entered into a data entry file using Epi Info version 3.5.1, then transferred to SPSS software version 16 and analyzed according to the different variables. A total of 362 (70.6%) clients accepted PITC, and only 39.4% of clients had heard of PITC in the outpatient department service. Age, occupation, marital status, wanting to check one's HIV status, and the perceived importance of PITC were the variables that showed significant associations with the acceptance of PITC upon bivariate and multivariate analyses. The main reasons given for not accepting the tests were self-trust, not being at risk for HIV, not being ready, needing to consult their partners, and a fear of the results; a shortage of staff, a busy work environment, a lack of private rooms, and a lack of refresher training were identified as the main barriers to PITC. 
There is evidence of the relatively increased acceptability of PITC services by outpatient department clients. A program needs to be strengthened to enhance the use of PITC; the Ministry of Health, Regional Health Bureau, and other responsible bodies - including health facilities - should design and strengthen information education and communication/behavioral change and communication interventions and promote activities related to PITC and HIV counseling and testing in both health facilities and the community at large.

  13. INSPECT: A graphical user interface software package for IDARC-2D

    NASA Astrophysics Data System (ADS)

    AlHamaydeh, Mohammad; Najib, Mohamad; Alawnah, Sameer

    Modern day Performance-Based Earthquake Engineering (PBEE) pivots about nonlinear analysis and its feasibility. IDARC-2D is a widely used and accepted software for nonlinear analysis; it possesses many attractive features and capabilities. However, it is operated from the command prompt in the DOS/Unix systems and requires elaborate text-based input files creation by the user. To complement and facilitate the use of IDARC-2D, a pre-processing GUI software package (INSPECT) is introduced herein. INSPECT is created in the C# environment and utilizes the .NET libraries and SQLite database. Extensive testing and verification demonstrated successful and high-fidelity re-creation of several existing IDARC-2D input files. Its design and built-in features aim at expediting, simplifying and assisting in the modeling process. Moreover, this practical aid enhances the reliability of the results and improves accuracy by reducing and/or eliminating many potential and common input mistakes. Such benefits would be appreciated by novice and veteran IDARC-2D users alike.
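At its core, a pre-processor like INSPECT serializes structured model data into the fixed text-based input format the analysis engine expects, eliminating hand-editing mistakes. The record layout below is invented for illustration and is not the actual IDARC-2D input format.

```python
# Hedged sketch of a GUI pre-processor's core task: turning structured
# model data into a text-based input file. The layout is hypothetical,
# not the real IDARC-2D format.

model = {
    "title": "3-story frame",
    "stories": 3,
    "bays": 2,
    "elements": [
        {"id": 1, "type": "COLUMN", "story": 1},
        {"id": 2, "type": "BEAM", "story": 1},
    ],
}

def write_input(model):
    """Serialize the model into fixed-layout text records."""
    lines = [model["title"], f"{model['stories']} {model['bays']}"]
    for e in model["elements"]:
        lines.append(f"{e['id']:4d} {e['type']:<8s} {e['story']:3d}")
    return "\n".join(lines) + "\n"

text = write_input(model)
print(text)
```

Because the serializer enforces field widths and ordering in one place, whole classes of input mistakes (swapped columns, wrong counts) disappear, which is the reliability benefit the abstract claims.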

  14. ASTEP user's guide and software documentation

    NASA Technical Reports Server (NTRS)

    Gliniewicz, A. S.; Lachowski, H. M.; Pace, W. H., Jr.; Salvato, P., Jr.

    1974-01-01

    The Algorithm Simulation Test and Evaluation Program (ASTEP) is a modular computer program developed for the purpose of testing and evaluating methods of processing remotely sensed multispectral scanner earth resources data. ASTEP is written in FORTRAN V on the UNIVAC 1110 under the EXEC 8 operating system and may be operated in either a batch or interactive mode. The program currently contains over one hundred subroutines consisting of data classification and display algorithms, statistical analysis algorithms, utility support routines, and feature selection capability. The current program can accept data in LARSC1, LARSC2, ERTS, and Universal formats, and can output processed image or data tapes in Universal format.

  15. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is an automated software-development estimation tool. It yields a significant reduction in the risk of cost overruns and failed projects. It accepts a description of the software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of a defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
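The kind of estimate such a tool produces can be sketched with the basic COCOMO effort equation, effort = a * KLOC^b and schedule = c * effort^d. The coefficients below are the published basic-COCOMO "organic mode" values; whether COSTMODL uses exactly this model and these coefficients is an assumption.

```python
# Hedged sketch: basic COCOMO "organic mode" estimate of effort and
# schedule from product size. Coefficients are the published basic-COCOMO
# values; their use in COSTMODL specifically is an assumption.

def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, schedule in months)."""
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

effort, schedule = cocomo_basic(32)   # a hypothetical 32 KLOC product
print(f"effort:   {effort:.1f} person-months")
print(f"schedule: {schedule:.1f} months")
```

Dividing the effort across defined life-cycle phases, as the abstract describes, is then a matter of applying per-phase percentage weights to the total.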

  16. Design and implementation of I2Vote--an interactive image-based voting system using windows mobile devices.

    PubMed

    van Ooijen, P M A; Broekema, A; Oudkerk, M

    2011-08-01

    To develop, implement and test a novel audience response system (ARS) that allows image-based interaction for radiology education. The ARS developed in this project is based on standard Personal Digital Assistants (PDAs) (HP iPAQ 114 classic handheld) running Microsoft® Windows Mobile® 6 Classic with a large 3.5 in. TFT touch screen (320×240 pixel resolution), high luminance and integrated IEEE 802.11b/g wireless. For software development, Visual Studio 2008 Professional (Microsoft) was used and all components were written in C#. Two test sessions were conducted to test the software technically, followed by two real classroom tests in a radiology class for medical students on thoracic radiology. The novel ARS, called I2Vote, was successfully implemented and provided an easy-to-use, stable setup. Acceptance by both students and teachers was very high, and interaction with the students improved because of the possibility of anonymous interaction. An easy-to-use, handheld-based ARS that enables interactive, image-based teaching is achieved. The system effectively adds an extra dimension to the use of an ARS. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Research study demonstrates computer simulation can predict warpage and assist in its elimination

    NASA Astrophysics Data System (ADS)

    Glozer, G.; Post, S.; Ishii, K.

    1994-10-01

    Programs for predicting warpage in injection molded parts are relatively new. Commercial software for simulating the flow and cooling stages of injection molding has steadily gained acceptance; however, warpage software is not yet as readily accepted. This study focused on gaining an understanding of the predictive capabilities of the warpage software. The following aspects of this study were unique: (1) Quantitative results were found using a statistically designed set of experiments. (2) Comparisons between experimental and simulation results were made with parts produced in a well-instrumented and controlled injection molding machine. (3) The experimental parts were accurately measured on a coordinate measuring machine with a non-contact laser probe. (4) The effect of part geometry on warpage was investigated.

  18. Cargo Movement Operations System (CMOS) Updated Draft Software Requirements Specification (Applications CSCI) Increment II

    DTIC Science & Technology

    1990-11-29

    appropriate to combine them into one paragraph. CMOS PMO ACCEPTS COMMENT: YES [ ] NO [ ] ERCI ACCEPTS COMMENT: YES [ ] NO [ ] COMMENT DISPOSITION: COMMENT...COMMENT: YES [ ] NO [ ] ERCI ACCEPTS COMMENT: YES [ ] NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN [ ] CLOSED [ ] ORIGINATOR CONTROL NUMBER: SRS1-0004...ERCI ACCEPTS COMMENT: YES [ ] NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN [ ] CLOSED [ ] ORIGINATOR CONTROL NUMBER: SRS1-0005 PROGRAM OFFICE

  19. The Apple III.

    ERIC Educational Resources Information Center

    Ditlea, Steve

    1982-01-01

    Describes and evaluates the features, performance, peripheral devices, available software, and capabilities of the Apple III microcomputer. The computer's operating system, its hardware, and the commercially produced software it accepts are discussed. Specific applications programs for financial planning, accounting, and word processing are…

  20. Control of Technology Transfer at JPL

    NASA Technical Reports Server (NTRS)

    Oliver, Ronald

    2006-01-01

    Controlled Technology: 1) Design: preliminary or critical design data, schematics, technical flow charts, SNV code/diagnostics, logic flow diagrams, wirelists, ICDs, detailed specifications or requirements. 2) Development: constraints, computations, configurations, technical analyses, acceptance criteria, anomaly resolution, detailed test plans, detailed technical proposals. 3) Production: process or how-to: assemble, operate, repair, maintain, modify. 4) Manufacturing: technical instructions, specific parts, specific materials, specific qualities, specific processes, specific flow. 5) Operations: how to operate, contingency or standard operating plans, Ops handbooks. 6) Repair: repair instructions, troubleshooting schemes, detailed schematics. 7) Test: specific procedures, data, analysis, detailed test and retest plans, detailed anomaly resolutions, detailed failure causes and corrective actions, troubleshooting, trended test data, flight readiness data. 8) Maintenance: maintenance schedules and plans, methods for regular upkeep, overhaul instructions. 9) Modification: modification instructions, upgrade kit parts, including software.

  1. Acceptance testing and commissioning of Kodak Directview CR-850 digital radiography system.

    PubMed

    Bezak, E; Nelligan, R A

    2006-03-01

    This Technical Paper describes Acceptance Testing and Commissioning of the Kodak DirectView CR-850 digital radiography system installed at the Royal Adelaide Hospital. The first of its type installed in Australia, the system is a "dry" image processor, for which no chemicals are required to develop images. Rather, latent radiographic images are stored on photostimulable phosphor screens, which are scanned and displayed by a reader unit. The image can be digitally processed and enhanced before it is forwarded to a storage device, printer or workstation display, thereby alleviating the need to re-expose patients to achieve satisfactory quality images. The phosphor screens are automatically erased, ready for re-use. Results are reported of tests carried out using the optional "Total Quality Tool" quality assurance package installed with the system. This package includes analysis and reporting software which provides for simple testing and reporting of many important characteristics of the system, such as field uniformity, aspect ratio, line and pixel positions, image and system noise, exposure response, scan linearity, modulation transfer function (MTF) and image artefacts. Acceptance Tests were performed for kV and MV exposures. Resolution for MV exposures was at least 0.8 l/mm, and measured phantom dimensions were within 1.05% of expected magnification. Reproducibility between cassettes was within 1.6%. The mean pixel values on the central axis were close to linear for MV exposures from 3 to 10 MU and reached saturation level at around 20 MU for 6 MV and around 30 MU for 23 MV beams. Noise levels were below 0.2%.

  2. An Assessment of Software Safety as Applied to the Department of Defense Software Development Process

    DTIC Science & Technology

    1992-12-01

    provide program managers some level of confidence that their software will operate at an acceptable level of risk. A number of structured safety...safety within the constraints of operational effectiveness, schedule, and cost through timely application of system safety management and engineering...Master of Science in Software Systems Management Peter W. Colan, B.S.E. Robert W. Prouhet, B.S. Captain, USAF Captain, USAF December 1992 Approved for

  3. Software For Clear-Air Doppler-Radar Display

    NASA Technical Reports Server (NTRS)

    Johnston, Bruce W.

    1990-01-01

    System of software developed to present plan-position-indicator scans of clear-air Doppler radar station on color graphical cathode-ray-tube display. Designed to incorporate latest accepted standards for equipment, computer programs, and meteorological data bases. Includes use of Ada programming language, of "Graphical-Kernel-System-like" graphics interface, and of Common Doppler Radar Exchange Format. Features include portability and maintainability. Use of Ada software packages produced number of software modules reused on other related projects.

  4. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

    A framework is defined for a general method of selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
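    The chain of mappings in steps 3 through 5 amounts to successive table lookups. A minimal sketch follows; all profile and mapping contents are hypothetical placeholders, not the paper's actual taxonomies:

    ```python
    # Hypothetical mapping tables standing in for the method's profiles.
    NEEDS_TO_CRITERIA = {
        "high reliability": ["fault tolerance", "error recovery"],
        "ease of change": ["modularity"],
    }
    CRITERIA_TO_PROCESSES = {
        "fault tolerance": ["formal inspection"],
        "error recovery": ["exception-handling review"],
        "modularity": ["design review"],
    }
    PROCESSES_TO_PRODUCTS = {
        "formal inspection": ["inspection report"],
        "exception-handling review": ["error-handling spec"],
        "design review": ["design document"],
    }

    def tailor(quality_needs):
        """Steps 3-5: needs -> criteria -> processes -> information products."""
        criteria = {c for n in quality_needs for c in NEEDS_TO_CRITERIA.get(n, [])}
        processes = {p for c in criteria for p in CRITERIA_TO_PROCESSES.get(c, [])}
        products = {d for p in processes for d in PROCESSES_TO_PRODUCTS.get(p, [])}
        return sorted(products)

    selected = tailor(["high reliability"])
    ```

    The point of the structure is that the intermediate profiles, not the developer's intuition, determine which life-cycle information products are carried forward.
    
    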

  5. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

    A framework is defined for a general method of selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.

  6. Technology to Augment Early Home Visitation for Child Maltreatment Prevention: A Pragmatic Randomized Trial.

    PubMed

    Ondersma, Steven J; Martin, Joanne; Fortson, Beverly; Whitaker, Daniel J; Self-Brown, Shannon; Beatty, Jessica; Loree, Amy; Bard, David; Chaffin, Mark

    2017-11-01

    Early home visitation (EHV) for child maltreatment prevention is widely adopted but has received inconsistent empirical support. Supplementation with interactive software may facilitate attention to major risk factors and use of evidence-based approaches. We developed eight 20-min computer-delivered modules for use by mothers during the course of EHV. These modules were tested in a randomized trial in which 413 mothers were assigned to the software-supplemented e-Parenting Program (ePP), services as usual (SAU), or community referral conditions, with evaluation at 6 and 12 months. Outcomes included satisfaction, working alliance, EHV retention, child maltreatment, and child maltreatment risk factors. The software was well received overall. At the 6-month follow-up, working alliance ratings were higher in the ePP condition relative to the SAU condition (Cohen's d = .36, p < .01), with no differences at 12 months. There were no between-group differences in maltreatment or major risk factors at either time point. Despite good acceptability and feasibility, these findings provide limited support for use of this software within EHV. These findings contribute to the mixed results seen across different models of EHV for child maltreatment prevention.

  7. Review and evaluation of methods for analyzing capacity at signalized intersections.

    DOT National Transportation Integrated Search

    1996-01-01

    VDOT's current policy is to use and accept from others the 1994 Highway Capacity Manual (HCM) as the basis for capacity analysis on Virginia's streets and highways. VDOT uses the latest version of the Highway Capacity Software (HCS). Software program...

  8. Responsible Internet Use.

    ERIC Educational Resources Information Center

    Truett, Carol; And Others

    1997-01-01

    Provides advice for making school Internet-use guidelines. Outlines responsible proactive use of the Internet for educators and librarians, discusses strengths and weaknesses of Internet blocking software and rating systems, and describes acceptable-use policies (AUP). Lists resources for creating your own AUP, Internet filtering software, and…

  9. Evaluation of a New Patient Record System Using the Optical Card

    PubMed Central

    Brown, J.H.U.; Vallbona, Carlos; Shoda, Junji; Albin, Jean

    1989-01-01

    A new form of patient record has been devised in which a laser-imprinted card is coupled to a PC for data input and output. Entry of data is simple, and recall of any datum requires only a keystroke. Any part of the data can be readily accessed through a software system which encompasses a variety of screens and menus to summarize and combine data. The complete system has been under test in a community health clinic and at NASA, and results to date are satisfactory. Preliminary evaluation indicates that the system has no hardware problems, that the software is suitable for the purpose, that patients carry the card and return with it at succeeding visits, and that physicians accept the card as a medical record and are pleased with the speed of access and the organization of the data.

  10. CMIP: a software package capable of reconstructing genome-wide regulatory networks using gene expression data.

    PubMed

    Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang

    2016-12-23

    A gene regulatory network (GRN) represents interactions of genes inside a cell or tissue, in which vertexes and edges stand for genes and their regulatory interactions, respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive; they are usually effective for small-scale tasks (e.g., networks with a few hundred genes) but have difficulty constructing GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by conditional mutual information using a parallel computing framework (hence the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application on a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction to the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/.
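    The conditional mutual information measurement at the heart of PCA-CMI-style inference can, under a Gaussian assumption, be computed from covariance determinants. The sketch below illustrates that estimator only; it is not the CMIP implementation, and the synthetic "regulator" setup is an assumption for demonstration:

    ```python
    import numpy as np

    def gaussian_cmi(x, y, z):
        """CMI(X;Y|Z) under a Gaussian assumption:
        0.5 * log(|C(X,Z)| * |C(Y,Z)| / (|C(Z)| * |C(X,Y,Z)|))."""
        def det_cov(*cols):
            # np.cov treats each stacked row as one variable.
            return np.linalg.det(np.atleast_2d(np.cov(np.vstack(cols))))
        return 0.5 * np.log(
            det_cov(x, z) * det_cov(y, z) / (det_cov(z) * det_cov(x, y, z))
        )

    rng = np.random.default_rng(0)
    z = rng.normal(size=500)                      # common regulator
    x = z + 0.1 * rng.normal(size=500)            # regulated by z
    y_indirect = z + 0.1 * rng.normal(size=500)   # regulated by z only
    y_direct = x + 0.1 * rng.normal(size=500)     # additionally regulated by x

    # Conditioning on z suppresses the indirect x-y dependence but leaves the
    # direct edge visible; an edge is kept only if CMI exceeds a threshold.
    cmi_indirect = gaussian_cmi(x, y_indirect, z)
    cmi_direct = gaussian_cmi(x, y_direct, z)
    ```

    Scaling this pairwise computation over all gene pairs is the part CMIP parallelizes.
    
    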

  11. Unique Challenges Testing SDRs for Space

    NASA Technical Reports Server (NTRS)

    Chelmins, David; Downey, Joseph A.; Johnson, Sandra K.; Nappier, Jennifer M.

    2013-01-01

    This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed-transceiver testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDRs to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. 
For example, test waveforms were developed to measure the gain of the transmit system across the tunable frequency band. These were used during thermal vacuum testing to enable characterization of the integrated system in the wide operational temperature range of space. Receive power indicators were used for Electromagnetic Interference (EMI) tests to understand the platform's susceptibility to external interferers independent of the waveform. Additional approaches and lessons learned during the SCaN Testbed subsystem and system level testing will be discussed that may help future SDR integrators.

  12. Unique Challenges Testing SDRs for Space

    NASA Technical Reports Server (NTRS)

    Johnson, Sandra; Chelmins, David; Downey, Joseph; Nappier, Jennifer

    2013-01-01

    This paper describes the approach used by the Space Communication and Navigation (SCaN) Testbed team to qualify three Software Defined Radios (SDR) for operation in space and the characterization of the platform to enable upgrades on-orbit. The three SDRs represent a significant portion of the new technologies being studied on board the SCaN Testbed, which is operating on an external truss on the International Space Station (ISS). The SCaN Testbed provides experimenters an opportunity to develop and demonstrate experimental waveforms and applications for communication, networking, and navigation concepts and advance the understanding of developing and operating SDRs in space. Qualifying a Software Defined Radio for the space environment requires additional consideration versus a hardware radio. Tests that incorporate characterization of the platform to provide information necessary for future waveforms, which might exercise extended capabilities of the hardware, are needed. The development life cycle for the radio follows the software development life cycle, where changes can be incorporated at various stages of development and test. It also enables flexibility to be added with minor additional effort. Although this provides tremendous advantages, managing the complexity inherent in a software implementation requires testing beyond the traditional hardware radio test plan. Due to schedule and resource limitations and parallel development activities, the subsystem testing of the SDRs at the vendor sites was primarily limited to typical fixed-transceiver testing. NASA's Glenn Research Center (GRC) was responsible for the integration and testing of the SDRs into the SCaN Testbed system and conducting the investigation of the SDRs to advance the technology to be accepted by missions. This paper will describe the unique tests that were conducted at both the subsystem and system level, including environmental testing, and present results. 
For example, test waveforms were developed to measure the gain of the transmit system across the tunable frequency band. These were used during thermal vacuum testing to enable characterization of the integrated system in the wide operational temperature range of space. Receive power indicators were used for Electromagnetic Interference (EMI) tests to understand the platform's susceptibility to external interferers independent of the waveform. Additional approaches and lessons learned during the SCaN Testbed subsystem and system level testing will be discussed that may help future SDR integrators.

  13. Acceptability of physical examination by male doctors in medical care: Taking breast palpation as an example.

    PubMed

    Wang, Yan-jie; Yang, Jie; Kang, Li-xia; Jia, Zhen; Chen, Dong-ming; Zhang, Ping; Feng, Zhan-chun

    2015-10-01

    In this study, we conducted an investigation among medical workers, patients and college students concerning their acceptability of breast palpation performed by male doctors (hereinafter referred to as "acceptability" or "the examination", respectively, if not otherwise indicated), to obtain information about acceptability and the reasons for accepting or declining the examination in these three populations. A questionnaire investigation was conducted in 500 patients with breast diseases, 700 students of medical colleges, and 280 medical workers working in hospitals. The subjects were asked to choose between two options: accept or do not accept (the examination). The subjects were asked to fill out the questionnaire forms on a free and anonymous basis, and the forms were collected on the spot, immediately after completion. The questionnaires collected were coded, sorted and checked. Data from the eligible questionnaires were input into Epidata software and analyzed with SPSS. Upon establishment of the database, the intra-group data were tested using the χ² test. Of 1480 questionnaires, 1293 (90.41%) were retrieved. Our results showed that 56.78% of patients reported that they could accept breast palpation by male doctors. About 59.66% of medical staff expressed their acceptance of the examination, but only 35.03% of students said they could accept it. 
On the basis of this study, we conclude that the examination is not well accepted across these populations, and therefore: (1) medical professionals and administrators should pay attention to gender-related ethics in their practice, and the feelings of patients should be respected when medical examinations involve private or sensitive body parts; (2) to this end, the relevant departments should be properly staffed with doctors of both sexes, especially departments involving the examination or treatment of private or sensitive body parts; (3) health education should, among other things, help female patients overcome fear and anxiety about such examinations. This is of great importance, since some women may otherwise miss the opportunity to get a timely diagnosis.

  14. DEVELOPMENT AND TESTING OF FAULT-DIAGNOSIS ALGORITHMS FOR REACTOR PLANT SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grelle, Austin L.; Park, Young S.; Vilim, Richard B.

    Argonne National Laboratory is further developing fault diagnosis algorithms for use by the operator of a nuclear plant to aid in improved monitoring of overall plant condition and performance. The objective is better management of plant upsets through more timely, informed decisions on control actions, with the ultimate goal of improved plant safety, production, and cost management. Integration of these algorithms with visual aids for operators is taking place through a collaboration under the concept of an operator advisory system. This is a software entity whose purpose is to manage and distill the enormous amount of information an operator must process to understand the plant state, particularly in off-normal situations, and how the state trajectory will unfold in time. The fault diagnosis algorithms were exhaustively tested using computer simulations of twenty different faults introduced into the chemical and volume control system (CVCS) of a pressurized water reactor (PWR). The algorithms are unique in that each new application to a facility requires providing only the piping and instrumentation diagram (PID) and no other plant-specific information; a subject-matter expert is not needed to install and maintain each instance of an application. The testing approach followed accepted procedures for verifying and validating software. It was shown that the code satisfies its functional requirement, which is to accept sensor information, identify process variable trends based on this sensor information, and then return an accurate diagnosis based on chains of rules related to these trends. The validation and verification exercise made use of GPASS, a one-dimensional systems code, for simulating CVCS operation. Plant components were failed and the code generated the resulting plant response. 
Parametric studies with respect to the severity of the fault, the richness of the plant sensor set, and the accuracy of sensors were performed as part of the validation exercise. Background and an overview of the software will be presented to frame the approach, followed by the verification and validation effort using the GPASS code for simulation of plant transients, including a sensitivity study on important parameters.
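    The functional requirement described above — accept sensor readings, identify process-variable trends, then chain rules to a diagnosis — can be illustrated with a toy rule table. The fault names, signatures, and tolerance below are hypothetical stand-ins, not ANL's actual rule base:

    ```python
    def trend(samples, tol=1e-3):
        """Classify a sensor time series as rising, falling, or steady
        from its average slope."""
        slope = (samples[-1] - samples[0]) / (len(samples) - 1)
        if slope > tol:
            return "rising"
        if slope < -tol:
            return "falling"
        return "steady"

    # Hypothetical fault signatures over (charging flow, tank level) trends.
    RULES = {
        ("falling", "falling"): "charging line blockage",
        ("rising", "falling"): "letdown valve failed open",
        ("steady", "steady"): "no fault detected",
    }

    def diagnose(charging_flow, tank_level, rules=RULES):
        """Map sensor trends to a diagnosis via the rule table."""
        signature = (trend(charging_flow), trend(tank_level))
        return rules.get(signature, "no matching rule")

    verdict = diagnose([5.0, 4.5, 4.1], [10.0, 9.6, 9.1])
    ```

    The plant-agnostic property claimed in the abstract corresponds to deriving such rule chains automatically from the PID topology rather than hand-writing a table per facility.
    
    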

  15. Selection of behavioral tasks and development of software for evaluation of rhesus monkey behavior during spaceflight

    NASA Technical Reports Server (NTRS)

    Rumbaugh, Duane M.; Washburn, David A.

    1993-01-01

    The results of several experiments were disseminated professionally during this semiannual period. These peer-reviewed papers that were accepted for publication represent the growth of our research areas, as follow-up experiments to previously published work in cognition and enrichment have been completed and are being published. The presentations not only reflect the latest interesting results that we have obtained, but also serve as a testament to the intense interest that is being expressed for our test system and findings.

  16. The use of real-time, hardware-in-the-loop simulation in the design and development of the new Hughes HS601 spacecraft attitude control system

    NASA Technical Reports Server (NTRS)

    Slafer, Loren I.

    1989-01-01

    Realtime simulation and hardware-in-the-loop testing are being used extensively in all phases of the design, development, and testing of the attitude control system (ACS) for the new Hughes HS601 satellite bus. Realtime, hardware-in-the-loop simulation, integrated with traditional analysis and pure simulation activities, is shown to provide a highly efficient and productive overall development program. Implementation of high fidelity simulations of the satellite dynamics and control system algorithms, capable of real-time execution (using Applied Dynamics International's System 100), provides a tool which is capable of being integrated with the critical flight microprocessor to create a mixed simulation test (MST). The MST creates a highly accurate, detailed simulated on-orbit test environment, capable of open and closed loop ACS testing, in which the ACS design can be validated. The MST is shown to provide a valuable extension of traditional test methods. A description of the MST configuration is presented, including the spacecraft dynamics simulation model, sensor and actuator emulators, and the test support system. Overall system performance parameters are presented. MST applications are discussed: supporting ACS design, developing on-orbit system performance predictions, flight software development and qualification testing (augmenting the traditional software-based testing), mission planning, and a cost-effective subsystem-level acceptance test. The MST is shown to provide an ideal tool in which the ACS designer can fly the spacecraft on the ground.

  17. A neural net-based approach to software metrics

    NASA Technical Reports Server (NTRS)

    Boetticher, G.; Srinivas, Kankanahalli; Eichmann, David A.

    1992-01-01

    Software metrics provide an effective method for characterizing software. Metrics have traditionally been composed through the definition of an equation. This approach is limited by the requirement that all the interrelationships among the parameters be fully understood. This paper explores an alternative, neural network approach to modeling metrics. Experiments performed on two widely accepted metrics, McCabe and Halstead, indicate that the approach is sound, thus serving as the groundwork for further exploration into the analysis and design of software metrics.
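    For reference, the two equation-defined metrics named above compute as follows. These are the standard published definitions, not the paper's neural models; the example counts are made up for illustration:

    ```python
    import math

    def mccabe(edges, nodes, components=1):
        """Cyclomatic complexity V(G) = E - N + 2P over a control-flow graph."""
        return edges - nodes + 2 * components

    def halstead(n1, n2, N1, N2):
        """Halstead volume, difficulty, and effort from distinct/total
        operator (n1/N1) and operand (n2/N2) counts."""
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2) * (N2 / n2)
        effort = difficulty * volume
        return volume, difficulty, effort

    # Hypothetical counts for a small routine:
    v = mccabe(edges=9, nodes=8)
    vol, diff, eff = halstead(n1=4, n2=3, N1=7, N2=5)
    ```

    A neural model would instead learn the mapping from such raw counts to a quality indicator, without fixing the equation's functional form in advance.
    
    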

  18. Gamification as a tool for enhancing graduate medical education

    PubMed Central

    Nevin, Christa R; Westfall, Andrew O; Rodriguez, J Martin; Dempsey, Donald M; Cherrington, Andrea; Roy, Brita; Patel, Mukesh; Willig, James H

    2014-01-01

    Introduction The last decade has seen many changes in graduate medical education training in the USA, most notably the implementation of duty hour standards for residents by the Accreditation Council of Graduate Medical Education. As educators are left to balance more limited time available between patient care and resident education, new methods to augment traditional graduate medical education are needed. Objectives To assess acceptance and use of a novel gamification-based medical knowledge software among internal medicine residents and to determine retention of information presented to participants by this medical knowledge software. Methods We designed and developed software using principles of gamification to deliver a web-based medical knowledge competition among internal medicine residents at the University of Alabama (UA) at Birmingham and UA at Huntsville in 2012–2013. Residents participated individually and in teams. Participants accessed daily questions and tracked their online leaderboard competition scores through any internet-enabled device. We completed focus groups to assess participant acceptance and analysed software use, retention of knowledge and factors associated with loss of participants (attrition). Results Acceptance: In focus groups, residents (n=17) reported leaderboards were the most important motivator of participation. Use: 16 427 questions were completed: 28.8% on Saturdays/Sundays, 53.1% between 17:00 and 08:00. Retention of knowledge: 1046 paired responses (for repeated questions) were collected. Correct responses increased by 11.9% (p<0.0001) on retest. Differences per time since question introduction, trainee level and style of play were observed. Attrition: In ordinal regression analyses, completing more questions (0.80 per 10% increase; 0.70 to 0.93) decreased, while postgraduate year 3 class (4.25; 1.44 to 12.55) and non-daily play (4.51; 1.50 to 13.58) increased odds of attrition. 
Conclusions Our software-enabled, gamification-based educational intervention was well accepted among our millennial learners. Coupling software with gamification and analysis of trainee use and engagement data can be used to develop strategies to augment learning in time-constrained educational settings. PMID:25352673

  19. Gamification as a tool for enhancing graduate medical education.

    PubMed

    Nevin, Christa R; Westfall, Andrew O; Rodriguez, J Martin; Dempsey, Donald M; Cherrington, Andrea; Roy, Brita; Patel, Mukesh; Willig, James H

    2014-12-01

    The last decade has seen many changes in graduate medical education training in the USA, most notably the implementation of duty hour standards for residents by the Accreditation Council of Graduate Medical Education. As educators are left to balance more limited time available between patient care and resident education, new methods to augment traditional graduate medical education are needed. To assess acceptance and use of a novel gamification-based medical knowledge software among internal medicine residents and to determine retention of information presented to participants by this medical knowledge software. We designed and developed software using principles of gamification to deliver a web-based medical knowledge competition among internal medicine residents at the University of Alabama (UA) at Birmingham and UA at Huntsville in 2012-2013. Residents participated individually and in teams. Participants accessed daily questions and tracked their online leaderboard competition scores through any internet-enabled device. We completed focus groups to assess participant acceptance and analysed software use, retention of knowledge and factors associated with loss of participants (attrition). Acceptance: In focus groups, residents (n=17) reported leaderboards were the most important motivator of participation. Use: 16 427 questions were completed: 28.8% on Saturdays/Sundays, 53.1% between 17:00 and 08:00. Retention of knowledge: 1046 paired responses (for repeated questions) were collected. Correct responses increased by 11.9% (p<0.0001) on retest. Differences per time since question introduction, trainee level and style of play were observed. Attrition: In ordinal regression analyses, completing more questions (0.80 per 10% increase; 0.70 to 0.93) decreased, while postgraduate year 3 class (4.25; 1.44 to 12.55) and non-daily play (4.51; 1.50 to 13.58) increased odds of attrition. 
Our software-enabled, gamification-based educational intervention was well accepted among our millennial learners. Coupling software with gamification and analysis of trainee use and engagement data can be used to develop strategies to augment learning in time-constrained educational settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. How Usability Testing Resulted in Improvements to Ground Collision Software for General Aviation: Improved Ground Collision Avoidance System (IGCAS)

    NASA Technical Reports Server (NTRS)

    Lamarr, Michael; Chinske, Chris; Williams, Ethan; Law, Cameron; Skoog, Mark; Sorokowski, Paul

    2016-01-01

    The NASA improved Ground Collision Avoidance System (iGCAS) team conducted an onsite usability study at Experimental Aircraft Association (EAA) AirVenture in Oshkosh, Wisconsin from July 19 through July 26, 2015. EAA AirVenture had approximately 550,000 attendees, from which the sample pool of pilots was selected. The objectives of this study were to assess the overall appropriateness and acceptability of iGCAS as a warning system for General Aviation aircraft and the usability of the iGCAS displays and audio cues; to test terrain avoidance characteristics, performance, functionality, and pilot response time; and to correlate terrain avoidance performance with pilot response time data.

  1. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between ROS-based devices and medical image computing platforms. Performance tests demonstrated that the bridge could successfully stream transforms, strings, points, and images at 30 fps in both directions. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enables cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
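    The OpenIGTLink framing that carries such messages over TCP/IP is simple enough to sketch. The snippet below is a hedged illustration, not the bridge's actual (C++) implementation: the field layout follows the published OpenIGTLink version 1 header (2-byte version, 12-byte type name, 20-byte device name, 8-byte timestamp, 8-byte body size, 8-byte CRC) and should be checked against the specification before reuse.

```python
import struct

# OpenIGTLink-style version 1 message header, big-endian (layout per the
# published spec; treat exact offsets as an assumption to verify).
HEADER_FMT = ">H12s20sQQQ"  # version, type, device name, timestamp, body size, CRC
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 58 bytes

def pack_header(msg_type, device, timestamp, body_size, crc=0, version=1):
    """Serialize one header; name fields are NUL-padded ASCII."""
    return struct.pack(HEADER_FMT, version,
                       msg_type.encode("ascii"), device.encode("ascii"),
                       timestamp, body_size, crc)

def unpack_header(raw):
    """Parse the fixed-size header back into a dict."""
    version, mtype, device, ts, body_size, crc = struct.unpack(HEADER_FMT, raw)
    return {"version": version,
            "type": mtype.rstrip(b"\0").decode("ascii"),
            "device": device.rstrip(b"\0").decode("ascii"),
            "timestamp": ts, "body_size": body_size, "crc": crc}

hdr = pack_header("TRANSFORM", "ros_bridge", 0, 48)
print(unpack_header(hdr)["type"])  # TRANSFORM
```

    In the real bridge, a header like this precedes each body streamed between the ROS node and 3D Slicer; the body size field tells the receiver how many bytes to read next.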

  2. All-Ages Lead Model (Aalm) Version 1.05 (External Draft Report)

    EPA Science Inventory

    The All-Ages Lead Model (AALM) Version 1.05 is an external review draft comprising software and a guidance manual. EPA released this software and associated documentation for public review and comment beginning September 27, 2005, until October 27, 2005. The public comments will be accepte...

  3. Efficacy and Physicochemical Evaluation of an Optimized Semisolid Formulation of Povidone Iodine Proposed by Extreme Vertices Statistical Design; a Practical Approach

    PubMed Central

    Lotfipour, Farzaneh; Valizadeh, Hadi; Shademan, Shahin; Monajjemzadeh, Farnaz

    2015-01-01

    One of the most significant stages in the pharmaceutical industry, prior to commercialization of a pharmaceutical preparation, is "preformulation." However, far too little attention has been paid to verification of software-assisted statistical designs in preformulation studies. The main aim of this study was to report a step-by-step preformulation approach for a semisolid preparation based on a statistical mixture design and to verify the predictions made by the software with an in-vitro efficacy bioassay. An extreme vertices mixture design (4 factors, 4 levels) was applied for preformulation of a semisolid povidone iodine preparation as a water-removable ointment using different polyethylene glycols. Software-assisted (Minitab) analysis was then performed using four practically assessed response values: available iodine, viscosity (N index and yield value), and water absorption capacity. Subsequently, mixture analysis was performed and an optimized formulation was proposed. The efficacy of this formulation was bioassayed using in-vitro microbial tests, and MIC values were calculated for Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus and Candida albicans. Results indicated acceptable conformity of the measured responses. Thus, it can be concluded that the proposed design had adequate power to predict the responses in practice. Stability studies showed no significant change for the optimized formulation during the one-year study. Efficacy was acceptable against all tested species, and in the case of Staphylococcus aureus the prepared semisolid formulation was even more effective. PMID:26664368

  4. VARED: Verification and Analysis of Requirements and Early Designs

    NASA Technical Reports Server (NTRS)

    Badger, Julia; Throop, David; Claunch, Charles

    2014-01-01

    Requirements are a part of every project life cycle; everything going forward in a project depends on them. Good requirements are hard to write; there are few useful tools to test, verify, or check them; and it is difficult to properly marry them to the subsequent design, especially if the requirements are written in natural language. In fact, the inconsistencies and errors in the requirements, along with the difficulty of finding these errors, contribute greatly to the cost of the testing and verification stage of flight software projects [1]. Large projects tend to have several thousand requirements written at various levels by different groups of people. The design process is distributed, and a lack of widely accepted standards for requirements often results in a product that varies widely in style and quality. A simple way to improve this would be to standardize the design process using a set of tools and widely accepted requirements design constraints. The difficulty with this approach is finding the appropriate constraints and tools. Common complaints against the tools available include ease of use, functionality, and available features. Also, although preferable, it is rare that these tools are capable of testing the quality of the requirements.

  5. Metadata-driven Delphi rating on the Internet.

    PubMed

    Deshpande, Aniruddha M; Shiffman, Richard N; Nadkarni, Prakash M

    2005-01-01

    Paper-based data collection and analysis for consensus development is inefficient and error-prone. Computerized techniques that could improve efficiency, however, have been criticized as costly, inconvenient and difficult to use. We designed and implemented a metadata-driven Web-based Delphi rating and analysis tool, employing the flexible entity-attribute-value schema to create generic, reusable software. The software can be applied to various domains by altering the metadata; the programming code remains intact. This approach greatly reduces the marginal cost of re-using the software. We implemented our software to prepare for the Conference on Guidelines Standardization. Twenty-three invited experts completed the first round of the Delphi rating on the Web. For each participant, the software generated individualized reports that described the median rating and the disagreement index (calculated from the Interpercentile Range Adjusted for Symmetry) as defined by the RAND/UCLA Appropriateness Method. We evaluated the software with a satisfaction survey using a five-level Likert scale. The panelists felt that Web data entry was convenient (median 4, interquartile range [IQR] 4.0-5.0), acceptable (median 4.5, IQR 4.0-5.0) and easily accessible (median 5, IQR 4.0-5.0). We conclude that Web-based Delphi rating for consensus development is a convenient and acceptable alternative to the traditional paper-based method.
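    The RAND/UCLA summary statistics the abstract reports for each participant are straightforward to reproduce. A minimal Python sketch follows; the constants 2.35 and 1.5 in the IPRAS formula, and the use of the 30th to 70th interpercentile range, are taken from one common statement of the RAND/UCLA Appropriateness Method and should be verified against the manual before reuse.

```python
from statistics import median, quantiles

def delphi_summary(ratings):
    """Summarize panel ratings on a 1-9 appropriateness scale."""
    q = quantiles(ratings, n=10, method="inclusive")  # deciles: q[2]=P30, q[6]=P70
    p30, p70 = q[2], q[6]
    ipr = p70 - p30                     # Interpercentile Range
    iprcp = (p70 + p30) / 2.0           # central point of the IPR
    asymmetry = abs(5.0 - iprcp)        # distance from the scale midpoint
    ipras = 2.35 + 1.5 * asymmetry      # IPR Adjusted for Symmetry (hedged constants)
    return {"median": median(ratings), "ipr": ipr,
            "disagreement": ipr > ipras}

# A polarized panel trips the disagreement flag; a clustered one does not.
print(delphi_summary([1, 2, 2, 8, 9, 9, 9, 1, 9]))
print(delphi_summary([7, 7, 8, 8, 8, 9, 9]))
```

    Reports like the individualized ones described above would pair each item's median with this disagreement flag.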

  6. The effect of three ergonomics interventions on body posture and musculoskeletal disorders among staff of Isfahan Province Gas Company

    PubMed Central

    Habibi, Ehsanollah; Soury, Shiva

    2015-01-01

    Background: Prevalence of work-related musculoskeletal disorders (WMSDs) is high among computer users. This study investigates the effect of three ergonomic interventions (training, sport, and installation of software) on the incidence of musculoskeletal disorders among the staff of Isfahan Province Gas Company. Materials and Methods: The study was performed in the summer of 2013 on 75 (52 men, 23 women) Isfahan Province Gas Company employees in three phases (phase 1: evaluation of the present situation; phase 2: performing interventions; phase 3: re-evaluation). Participants were divided into three groups (training, exercise, and software). The Nordic Musculoskeletal Questionnaire (NMQ) and rapid upper limb assessment (RULA) were used. The collected data were analyzed using SPSS software with the McNemar test, t-test, and Chi-square test. Results: Based on the evaluations, there was a decrease in musculoskeletal symptoms among the trained group participants after they received the training. The McNemar test showed that the lower rate of pain in the low back, neck, knee, and wrist was significant (P < 0.05). The results obtained from the RULA method for evaluation of posture showed an average 25-point decrease in the right side of the body and a 20-point decrease in the left side of the body in the group subjected to training. Based on the t-test, the decrease was significant. Conclusion: The study demonstrated that the majority of the participants accepted the interventions, which indicates that most people were unsatisfied with the work settings and sought improvement at the workplace. Overall, the findings show that training, chair adjustment, and arrangement of the workplace could decrease musculoskeletal disorders. PMID:26430692

  7. Beta Testing a Novel Smartphone Application to Improve Medication Adherence.

    PubMed

    Sarzynski, Erin; Decker, Brian; Thul, Aaron; Weismantel, David; Melaragni, Ronald; Cholakis, Elizabeth; Tewari, Megha; Beckholt, Kristy; Zaroukian, Michael; Kennedy, Angie C; Given, Charles

    2017-04-01

    We developed and beta-tested a patient-centered medication management application, PresRx optical character recognition (OCR), a mobile health (m-health) tool that auto-populates drug name and dosing instructions directly from patients' medication labels by OCR. We employed a single-subject design study to evaluate PresRx OCR for three outcomes: (1) accuracy of auto-populated medication dosing instructions, (2) acceptability of the user interface, and (3) patients' adherence to chronic medications. Eight patients beta-tested PresRx OCR. Five patients used the software for ≥6 months, and four completed exit interviews (n = 4 completers). At baseline, patients used 3.4 chronic prescription medications and exhibited moderate-to-high adherence rates. Accuracy of auto-populated information by OCR was 95% for drug name, 98% for dose, and 96% for frequency. Study completers rated PresRx OCR 74 on the System Usability Scale, where scores ≥70 indicate an acceptable user interface (scale 0-100). Adherence rates measured by PresRx OCR were high during the first month of app use (93%), but waned midway through the 6-month testing period (78%). Compared with pharmacy fill rates, PresRx OCR underestimated adherence among completers by 3%, while it overestimated adherence among noncompleters by 8%. Results suggest smartphone applications supporting medication management are feasible and accurately assess adherence compared with objective measures. Future efforts to improve medication-taking behavior using m-health tools should target specific patient populations and leverage common application programming interfaces to promote generalizability. Our medication management application PresRx OCR is innovative, acceptable for patient use, and accurately tracks medication adherence.
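    The abstract reports a score of 74 on the System Usability Scale (SUS), with 70 as the acceptability threshold. The SUS scoring rule is standard and easy to reproduce; the responses below are illustrative only, not data from the study.

```python
def sus_score(responses):
    """Score the System Usability Scale: ten 1-5 Likert items, item 1 first.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd item)
                for i, r in enumerate(responses))
    return total * 2.5

# Illustrative respondent: agrees with positive items, disagrees with negative ones.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

    Note that a neutral respondent (all 3s) scores exactly 50, which is why scores in the low 70s, like the one reported, sit just above the usual acceptability cutoff.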

  8. A shift from significance test to hypothesis test through power analysis in medical research.

    PubMed

    Singh, G

    2006-01-01

    Medical research literature, until recently, exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of type I error, over Neyman-Pearson's hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
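    The power calculation the abstract credits with this shift can be illustrated with a short computation. The sketch below uses the textbook normal approximation for a two-sided, two-sample comparison of means with standardized effect size d and n subjects per group; it is a generic illustration, not software from the article.

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test (normal
    approximation to the t-test) for standardized effect size d."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)              # rejection threshold
    ncp = abs(d) * (n_per_group / 2.0) ** 0.5       # mean shift under H1
    # Probability of landing in either rejection region under H1:
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# The classic benchmark: a medium effect (d = 0.5) needs roughly 64 subjects
# per group for about 80% power at alpha = 0.05.
print(round(power_two_sample(0.5, 64), 2))
```

    This is exactly the Neyman-Pearson ingredient (type II error, since power = 1 - beta) that the abstract argues is now routinely reported alongside P values.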

  9. Disaster victim investigation recommendations from two simulated mass disaster scenarios utilized for user acceptance testing CODIS 6.0.

    PubMed

    Bradford, Laurie; Heal, Jennifer; Anderson, Jeff; Faragher, Nichole; Duval, Kristin; Lalonde, Sylvain

    2011-08-01

    Members of the National DNA Data Bank (NDDB) of Canada designed and searched two simulated mass disaster (MD) scenarios for User Acceptance Testing (UAT) of the Combined DNA Index System (CODIS) 6.0, developed by the Federal Bureau of Investigation (FBI) and the US Department of Justice. A simulated airplane MD and an inland tsunami MD were designed, representing closed and open environments, respectively. An in-house software program was written to randomly generate DNA profiles from a mock Caucasian population database. As part of the UAT, these two MDs were searched separately using CODIS 6.0. The new options available for identity and pedigree searching, in addition to the inclusion of mitochondrial DNA (mtDNA) and Y-STR (short tandem repeat) information in CODIS 6.0, led to rapid identification of all victims. A Joint Pedigree Likelihood Ratio (JPLR) was calculated from the pedigree searches, and ranks were stored in Rank Manager, providing confidence to the user in assigning an Unidentified Human Remain (UHR) to a pedigree tree. Analyses of the results indicated that primary relatives were more useful in Disaster Victim Identification (DVI) than secondary or tertiary relatives, and that inclusion of mtDNA and/or Y-STR technologies helped to link family units together, as shown by the software searches. It is recommended that UHRs have as many informative loci as possible to assist with their identification. CODIS 6.0 is a valuable technological tool for rapidly and confidently identifying victims of mass disasters. Crown Copyright © 2010. Published by Elsevier Ireland Ltd. All rights reserved.
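    As a hedged illustration of why a joint statistic such as the JPLR is informative (this is not the CODIS 6.0 algorithm, whose internals are not described in the abstract): under the usual assumption of locus independence, per-locus kinship likelihood ratios combine multiplicatively, and the product is commonly reported on a log10 scale for ranking candidate pedigrees. The LR values below are invented for illustration.

```python
from math import log10, prod

def joint_lr(per_locus_lrs):
    """Combine independent per-locus likelihood ratios into a joint LR.

    Returns (joint LR, log10 joint LR); more informative loci sharpen the
    joint statistic, which is why the abstract recommends typing as many
    loci as possible.
    """
    lr = prod(per_locus_lrs)
    return lr, log10(lr)

# Hypothetical per-locus LRs for one UHR against one pedigree:
lr, log_lr = joint_lr([3.2, 1.8, 0.9, 5.4, 2.1])
print(round(lr, 2), round(log_lr, 2))
```

    A locus with LR below 1 (like 0.9 here) weakly argues against the kinship hypothesis, yet the joint product can still strongly support it.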

  10. SU-E-T-361: Energy Dependent Radiation/light-Field Misalignment On Truebeam Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperling, N; Tanny, S; Parsai, E

    2015-06-15

    Purpose: Verifying the coincidence of the radiation and light field is recommended by TG-142 for monthly and annual checks. On a digital accelerator, it is simple to verify that beam-steering settings are consistent with accepted and commissioned values. This fact should allow physicists to verify radiation/light-field coincidence for a single energy and accept that result for all energies. We present a case in which the radiation isocenter deviated for a single energy without any apparent modification to the beam-steering parameters. Methods: The radiation isocenter was determined using multiple methods: Gafchromic film, a BB test, and radiation profiles measured with a diode. Light-field borders were marked on Gafchromic film, which was then irradiated for all photon energies. Images of acceptance films were compared with films taken four months later. A phantom with a radio-opaque BB was aligned to isocenter using the light field and imaged using the EPID for all photon energies. An unshielded diode was aligned using the crosshairs, and beam profiles of multiple field sizes were obtained. Field centers were determined using Omni-Pro v7.4 software and compared to similar scans taken during commissioning. Beam-steering parameter files were checked against backups to confirm that the steering parameters were unchanged. Results: There were no differences between the configuration files from acceptance. All three tests demonstrated that a single energy had deviated from accepted values by 0.8 mm in the inline direction. The other two energies remained consistent with previous measurements. The deviated energy was re-steered to be within our clinical tolerance. Conclusions: Our study demonstrates that radiation/light-field coincidence is an energy-dependent effect for modern linacs. We recommend that radiation/light-field coincidence be verified for all energies on a monthly basis, particularly for modes used to treat small fields, as these may drift without influencing results from other tests.

  11. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and slow crack growth (SCG, i.e., fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
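    The probabilistic design loop described above rests on Weibull strength statistics for brittle materials. A minimal sketch of the two-parameter Weibull failure probability follows; the parameter values are illustrative only, not CARES/Life defaults, and a real analysis would also account for stressed volume or area.

```python
from math import exp

def weibull_pof(stress, char_strength, modulus):
    """Two-parameter Weibull failure probability for a brittle specimen:
    P_f = 1 - exp(-(sigma / sigma_theta)^m), where sigma_theta is the
    characteristic strength and m the Weibull modulus (scatter parameter).
    """
    return 1.0 - exp(-((stress / char_strength) ** modulus))

# At the characteristic strength, P_f = 1 - 1/e ~ 0.632 by definition;
# a high modulus makes failure probability rise steeply around that point.
print(round(weibull_pof(300.0, 300.0, 10.0), 3))  # 0.632
```

    The "widely varying strength" of ceramics corresponds to a low Weibull modulus m, which spreads this failure curve out and is what makes deterministic safety factors unreliable for brittle parts.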

  12. Gas turbine engines and transmissions for bus demonstration programs. Technical status report, 31 July 1979--31 October 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigro, D.N.

    1979-11-01

    The report summarizes the DDA activities performed on the procurement and delivery of eleven Allison GT 404-4 gas turbine engines, five HT740CT and six V730CT Allison automatic transmissions, and the required associated software. The contract requires the delivery of the engines and transmissions for the Greyhound and Transit Coaches, respectively. In addition, software items such as cost reports, technical reports, installation drawings, acceptance test data and parts lists are required. A recent decision by the DOE will modify the build configuration for the last four (4) Transit Coach engines. It was decided by the DOE at a meeting in Washington, DC, on March 28, 1979, with representatives from DDA, NASA/LeRC, JPL and Booz-Allen and Hamilton that these engines are to be built with ceramic regenerators. (TFD)

  13. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a large negative impact on high-performance computing (HPC) application performance. This work explores the potential use of InfiniBand hardware virtualization in an OpenNebula cloud toward the efficient support of MPI-based workloads. We have implemented, deployed, and tested an InfiniBand network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR InfiniBand network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  14. Cargo Movement Operations System (CMOS). Final Software Requirements Specification, (Applications CSCI), Increment II

    DTIC Science & Technology

    1991-01-29

    NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN ( ] CLOSED [ ] ORIGINATOR CONTROL Nt3MBFR: SRS1-0002 PROGRAM OFFICE CONTROL NUMBER: DATA ITEM...floppy diskette interface with CMOS. CMOS PMO ACCEPTS COMMENT: YES [ ] NO [ ] ERCI ACCEPTS COMMENT: YES ( 3 NO [ ] COMMENT DISPOSITION: COMMENT STATUS: OPEN [ ] CLOSED [

  15. Textbook Websites: User Technology Acceptance Behaviour

    ERIC Educational Resources Information Center

    Jonas, Gregory A.; Norman, Carolyn Strand

    2011-01-01

    Compared with course management software (e.g. Blackboard and WebCT), the content and technology offered by a textbook website (TBW) is relatively costless to universities and professors, and is a potentially valuable tool that can be leveraged to help students learn course material. The present study uses the extended Technology Acceptance Model…

  16. Development of an integrated e-health tool for people with, or at high risk of, cardiovascular disease: The Consumer Navigation of Electronic Cardiovascular Tools (CONNECT) web application.

    PubMed

    Neubeck, Lis; Coorey, Genevieve; Peiris, David; Mulley, John; Heeley, Emma; Hersch, Fred; Redfern, Julie

    2016-12-01

    Cardiovascular disease is the leading killer globally, and secondary prevention substantially reduces risk. Uptake of, and adherence to, face-to-face preventive programs is often low. Alternative models of care are exploiting the prominence of technology in daily life to facilitate lifestyle behavior change. To inform the development of a web-based application integrated with the primary care electronic health record, we undertook a collaborative user-centered design process to develop a consumer-focused e-health tool for cardiovascular disease risk reduction. A four-phase iterative process involved ten multidisciplinary clinicians and academics (a primary care physician, nurses and allied health professionals), two design consultants, one graphic designer, three software developers and fourteen proposed end-users. This 18-month process involved (1) defining the target audience and needs, (2) pilot testing and refinement, (3) software development, including validating and testing the algorithm, and (4) user acceptance testing and beta testing. From this process, researchers were able to better understand end-user needs and preferences, thereby improving and enriching the increasingly detailed system designs and prototypes for a mobile-responsive web application. We reviewed 14 relevant applications/websites and 16 observational and interventional studies to derive a set of core components and ideal features for the system. These included the need for interactivity, visual appeal, credible health information, virtual rewards, and emotional and physical support. The features identified as essential were (i) both mobile and web-enabled 'apps', (ii) an emphasis on medication management, and (iii) a strong psychosocial support component. Subsequent workshops (n=6; 2×1.5 h) informed the development of functionality and low-fidelity sketches of application interfaces. These ideas were next tested in consumer focus groups (n=9; 3×1.5 h). 
Specifications for the application were refined from this feedback and a graphic designer iteratively developed the interface. Concurrently, the electronic health record was linked to the consumer portal. A written description of the final algorithms for all decisions and outputs was provided to software programmers. These algorithmic outputs to the app were first validated against those obtained from an independently programmed version in STATA 11. User acceptance testing (n=5, 2×1.0h) and beta testing revealed technical bugs and interface concerns across commonly-used web browsers and smartphones. These were resolved and re-tested until functionality was optimized. End-users of a cardiovascular disease prevention program have complex needs. A user-centered design approach aided the integration of these needs into the concept, specifications, development and refinement of a responsive web application for risk factor reduction and disease prevention. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. The JPL telerobot operator control station. Part 2: Software

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Landell, B. Patrick; Oxenberg, Sheldon; Morimoto, Carl

    1989-01-01

    The Operator Control Station of the Jet Propulsion Laboratory (JPL)/NASA Telerobot Demonstrator System provides the man-machine interface between the operator and the system. It provides all the hardware and software for accepting human input for the direct and indirect (supervised) manipulation of the robot arms and tools during task execution. Hardware and software are also provided for the display and feedback of information and control data for the operator's consumption and interaction with the task being executed. The software design of the Operator Control Station is discussed.

  18. Data acquisition software for DIRAC experiment

    NASA Astrophysics Data System (ADS)

    Olshevsky, V.; Trusov, S.

    2001-08-01

    The structure and basic processes of the data acquisition software of the DIRAC experiment for the measurement of the π+π- atom lifetime are described. The experiment is running on the PS accelerator at CERN. The developed software can accept, record and distribute up to 3 Mbytes of data to consumers in one accelerator supercycle of 14.4 s duration. The described system has been in successful use in the experiment since its startup in 1998.

  19. Comparing Acquisition Strategies: Open Architecture versus Product Lines

    DTIC Science & Technology

    2010-04-30

    software • New SOW language for accepting software deliveries – Enables third-party reuse • Additional SOW language regarding conducting software code walkthroughs and for using integrated development environments ...change the business environment must be the primary factor that drives the technical approach. Accordingly, there are business case decisions to be...elements of a system design should be made available to the customer to observe throughout the design process. Electronic access to the design environment

  20. Human-centered design of a personal health record system for metabolic syndrome management based on the ISO 9241-210:2010 standard.

    PubMed

    Farinango, Charic D; Benavides, Juan S; Cerón, Jesús D; López, Diego M; Álvarez, Rosa E

    2018-01-01

    Previous studies have demonstrated the effectiveness of information and communication technologies to support healthy lifestyle interventions. In particular, personal health record systems (PHR-Ss) empower self-care, which is essential to support lifestyle changes. Approaches such as user-centered design (UCD), already a standard within the software industry (ISO 9241-210:2010), provide specifications and guidelines to guarantee user acceptance and quality of eHealth systems. However, no single PHR-S for metabolic syndrome (MS) developed following the recommendations of the ISO 9241-210:2010 specification has been found in the literature. The aim of this study was to describe the development of a PHR-S for the management of MS according to the principles and recommendations of the ISO 9241-210 standard. The proposed PHR-S was developed using a formal software development process which, in addition to the traditional activities of any software process, included the principles and recommendations of the ISO 9241-210 standard. To gather user information, a survey of 1,187 individuals, eight interviews, and a focus group with seven people were conducted. Throughout five iterations, three prototypes were built. Potential users evaluated each prototype. The quality attributes of efficiency, effectiveness, and user satisfaction were assessed using metrics defined in the ISO/IEC 25022 standard.
The following results were obtained: 1) a technology profile of 1,187 individuals at risk for MS from the city of Popayan, Colombia, identifying that 75.2% of the people use the Internet and 51% had a smartphone; 2) a PHR-S to manage MS, with five main functionalities: recording the five MS risk factors, sharing these measures with health care professionals, and three educational modules on nutrition, stress management, and physical activity; and 3) usability tests on each prototype, yielding 100% effectiveness, 100% efficiency, and 84.2 points on the system usability scale. The software development methodology used was based on the ISO 9241-210 standard, which allowed the development team to maintain a focus on users' needs and requirements throughout the project and resulted in increased satisfaction and acceptance of the system. Additionally, the establishment of a multidisciplinary team allowed the application of considerations not only from the disciplines of software engineering and health sciences but also from graphical design and media communication. Finally, usability testing allowed the observation of flaws in the designs, which helped to improve the solution.
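
    The system usability scale (SUS) score of 84.2 cited above follows a standard scoring rule: ten items rated 1 to 5, with odd-numbered items contributing (score − 1) and even-numbered items (5 − score), the sum scaled by 2.5 to a 0-100 range. A minimal sketch of that rule; the responses below are hypothetical, not the study's data:

```python
def sus_score(responses):
    """Compute the standard SUS score from a list of 10 item ratings (1-5)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent: fully positive on odd items, fully negative on even.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```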

  1. The European computer model for optronic system performance prediction (ECOMOS)

    NASA Astrophysics Data System (ADS)

    Repasi, Endre; Bijl, Piet; Labarre, Luc; Wittenstein, Wolfgang; Bürsing, Helge

    2017-05-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given, as well as a short outlook on validation tests and the future potential of simulation for sensor assessment.

  2. Simple solution to the medical instrumentation software problem

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Suzanne B.; Leif, Stephanie H.; Bingue, E.

    1995-04-01

    Medical devices now include a substantial software component, which is both difficult and expensive to produce and maintain. Medical software must be developed according to Good Manufacturing Practices (GMP). Good Manufacturing Practices as specified by the FDA and ISO require the definition of, and compliance with, a software process that ensures quality products by specifying a detailed method of software construction. The software process should be based on accepted standards. US Department of Defense software standards and technology can both facilitate development and improve the quality of medical systems. We describe the advantages of employing Mil-Std-498, Software Development and Documentation, and the Ada programming language. Ada provides the very broad range of functionalities, from embedded real-time to management information systems, required by many medical devices. It also includes advanced facilities for object-oriented programming and software engineering.

  3. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  4. Waste retrieval sluicing system data acquisition system acceptance test report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, R.R.

    1998-07-31

    This document describes the test procedure for the Project W-320 Tank C-106 Sluicing Data Acquisition System (W-320 DAS). The Software Test portion will test items identified in the WRSS DAS System Description (SD), HNF-2115. Traceability to HNF-2115 will be via a reference that follows in parentheses after the test section title. The Field Test portion will test sensor operability, analog-to-digital conversion, and alarm setpoints for field instrumentation. The W-320 DAS supplies data to assist thermal modeling of tanks 241-C-106 and 241-AY-102. It is designed to be a central repository for information from sources that would otherwise have to be read, recorded, and integrated manually. Thus, completion of the DAS requires communication with several different data collection devices and output to usable PC data formats. This test procedure will demonstrate that the DAS functions as required by the project requirements stated in Section 3 of the W-320 DAS System Description, HNF-2115.

  5. Provider-Initiated HIV Testing for Migrants in Spain: A Qualitative Study with Health Care Workers and Foreign-Born Sexual Minorities

    PubMed Central

    Navaza, Barbara; Abarca, Bruno; Bisoffi, Federico; Pool, Robert; Roura, Maria

    2016-01-01

    Introduction Provider-initiated HIV testing (PITC) is increasingly adopted in Europe. The success of the approach at identifying new HIV cases relies on its effectiveness at testing individuals most at risk. However, its suitability to reach populations facing overlapping vulnerabilities is under-researched. This qualitative study examined HIV testing experiences and perceptions amongst Latin-American migrant men who have sex with men and transgender females in Spain, as well as health professionals’ experiences offering HIV tests to migrants in Barcelona and Madrid. Methods We conducted 32 in-depth interviews and 8 discussion groups with 38 Latin-American migrants and 21 health professionals. We imported verbatim transcripts and detailed field work notes into the qualitative software package Nvivo-10 and applied to all data a coding framework to examine systematically different HIV testing dimensions and modalities. The dimensions analysed were based on the World Health Organization “5 Cs” principles: Consent, Counselling, Connection to treatment, Correctness of results and Confidentiality. Results Health professionals reported that PITC was conceptually acceptable to them, although their perceived inability to adequately communicate HIV+ results and the resulting bottlenecks in the flow of care were recurrent concerns. Endorsement of, and adherence to, the principles underpinning the rights-based response to HIV varied widely across health settings. The offer of an HIV test during routine consultations was generally appreciated by users as a way of avoiding the embarrassment of asking for it. Several participants deemed compulsory testing acceptable on public health grounds. In spite of—and sometimes because of—partial endorsement of rights-based approaches, PITC was acceptable in a population with high levels of internalised stigma. 
Conclusion PITC is a promising approach to reach sexual minority migrants who hold high levels of internalised stigma but explicit extra efforts are needed to safeguard the rights of the most vulnerable. PMID:26914023

  6. Oak Ridge Institutional Cluster Autotune Test Drive Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jibonananda, Sanyal; New, Joshua Ryan

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation-heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program, and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF) systems. The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop-focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system, which literature indicates to be an efficient file system.

  7. Software development for safety-critical medical applications

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1992-01-01

    There are many computer-based medical applications in which safety and not reliability is the overriding concern. Reduced, altered, or no functionality of such systems is acceptable as long as no harm is done. A precise, formal definition of what software safety means is essential, however, before any attempt can be made to achieve it. Without this definition, it is not possible to determine whether a specific software entity is safe. A set of definitions pertaining to software safety will be presented and a case study involving an experimental medical device will be described. Some new techniques aimed at improving software safety will also be discussed.

  8. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  9. 36 CFR 1235.50 - What specifications and standards for transfer apply to electronic records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... electronic records in a format that is independent of specific hardware or software. Except as specified in... a request from NARA to provide the software to decompress the records. (3) Agencies interested in... organization. Acceptable transfer formats include the Geography Markup Language (GML) as defined by the Open...

  10. 77 FR 46763 - Documents to Support Submission of an Electronic Common Technical Document; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2011-N-0724... not prepared at present to accept submissions utilizing this new version because eCTD software vendors need time to update their software to accommodate this information and because its use will require...

  11. Utility and Usability as Factors Influencing Teacher Decisions about Software Integration

    ERIC Educational Resources Information Center

    Okumus, Samet; Lewis, Lindsey; Wiebe, Eric; Hollebrands, Karen

    2016-01-01

    Given the importance of the teacher in the implementation of computer technology in classrooms, the technology acceptance model and TPACK model were used to better understand the decision-making process teachers use in determining how, when, and where computer software is used in mathematics classrooms. Thirty-four (34) teachers implementing…

  12. Research and Development of Automated Eddy Current Testing for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.

    2012-01-01

    Eddy current testing (ET) was used to scan bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws which could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw report, and estimation of flaw dimensions was brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.
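
    The numerical derivative filtering evaluated above can be illustrated with a central-difference derivative applied to a 1-D scan trace: slowly varying background is suppressed while abrupt flaw indications produce large derivative values. The specific algorithm used in the project is not given, so this is an assumed sketch with a hypothetical trace:

```python
def derivative_filter(signal):
    """Central-difference numerical derivative of a 1-D scan trace.

    Slowly varying background (e.g., liner geometry) maps to values near
    zero, while sharp flaw indications produce large-magnitude output.
    """
    n = len(signal)
    out = [0.0] * n
    for i in range(1, n - 1):
        out[i] = (signal[i + 1] - signal[i - 1]) / 2.0
    # One-sided differences at the trace endpoints.
    out[0] = signal[1] - signal[0]
    out[-1] = signal[-1] - signal[-2]
    return out

# Hypothetical trace: flat background with a sharp indication at index 3.
trace = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0]
print(derivative_filter(trace))
```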

  13. Methodology evaluation: Effects of independent verification and integration on one class of application

    NASA Technical Reports Server (NTRS)

    Page, J.

    1981-01-01

    The effects of an independent verification and integration (V and I) methodology on one class of application are described. Resource profiles are discussed. The development environment is reviewed. Seven measures are presented to test the hypothesis that V and I improve the development process and the product. The V and I methodology provided: (1) a decrease in requirements ambiguities and misinterpretation; (2) no decrease in design errors; (3) no decrease in the cost of correcting errors; (4) a decrease in the cost of system and acceptance testing; (5) an increase in early discovery of errors; (6) no improvement in the quality of software put into operation; and (7) a decrease in productivity and an increase in cost.

  14. Cargo Movement Operations System (CMOS) Updated Software Test Report. Increment I

    DTIC Science & Technology

    1991-02-19

    NO [ ] COMMENT DISPOSITION: ACCEPT [ ] REJECT [ ] COMMENT STATUS: OPEN [ ] CLOSED [ ] Cmnt No. / Page No. / Paragraph Number / Comment: 1. C-14 TD1251.03 Change "price" to "piece". 2. C-19 TD1323.04 Change "requried" to "required". 3. D-53 TD1322.03 Change the SPCR number to 90122064. ORIGINATOR CONTROL NUMBER: STR1-0002 PROGRAM OFFICE CONTROL NUMBER: DATA ITEM DISCREPANCY WORKSHEET CDRL NUMBER: A010-02 DATE: 02/19/91 ORIGINATOR NAME: Gerald T. Love OFFICE SYMBOL: SAIC TELEPHONE NUMBER: 272-2999 SUBSTANTIVE: X EDITORIAL: PAGE

  15. Post-upgrade testing on a radiotherapy oncology information system with an embedded record and verify system following the IAEA Human Health Report No. 7 recommendations.

    PubMed

    Nyathi, Thulani; Colyer, Christopher; Bhardwaj, Anup Kumar; Rijken, James; Morton, Jason

    2016-06-01

    Record and verify (R&V) systems have proven that their application in radiotherapy clinics leads to a significant reduction in mis-treatments of patients. The purpose of this technical note is to share our experience of acceptance testing, commissioning and setting up a quality assurance programme for the MOSAIQ® oncology information system and R&V system after upgrading from software version 2.41 to 2.6 in a multi-vendor, multi-site environment. Testing was guided primarily by the IAEA Human Health Report No. 7 recommendations, but complemented by other departmental workflow-specific tests. To the best of our knowledge, this is the first time successful implementation of the IAEA Human Health Report Series No. 7 recommendations has been reported in the literature.

  16. The Effects of Acceptance and Commitment Therapy on Man Smokers' Comorbid Depression and Anxiety Symptoms and Smoking Cessation: A Randomized Controlled Trial.

    PubMed

    Davoudi, Mohammadreza; Omidi, Abdollah; Sehat, Mojtaba; Sepehrmanesh, Zahra

    2017-07-01

    Besides physical problems, cigarette smoking is associated with a high prevalence of comorbid depression and anxiety symptoms. One of the reasons behind high post-cessation smoking lapse and relapse rates is inattentiveness to these symptoms during the process of cessation. The aim of this study was to examine the effects of acceptance and commitment therapy (ACT) on male smokers' comorbid depression and anxiety symptoms and smoking cessation. This two-group pre-test-post-test randomized controlled trial was done on a random sample of seventy male smokers. Participants were randomly and evenly allocated to an intervention and a control group. Patients in these groups received either acceptance and commitment therapy or routine psychological counseling services (including cognitive behavior therapy), respectively. Study data were collected through a demographic questionnaire, the Structured Clinical Interview (SCI) for Diagnostic and Statistical Manual of Mental Disorders-4th Edition (DSM-IV) disorders, the Beck Depression Inventory (BDI), the Beck Anxiety Inventory (BAI), and a Micro Smokerlyzer carbon monoxide monitor. The SPSS software was employed to analyze the data. After the intervention, depression and anxiety scores in the intervention group were lower, and the smoking cessation rate higher, than in the control group (P < 0.050). ACT can significantly improve comorbid depression and anxiety symptoms and the smoking cessation rate. Thus, it can be used to simultaneously manage depression, anxiety, and cigarette smoking.

  17. Sci-Thur AM: YIS – 08: Automated Imaging Quality Assurance for Image-Guided Small Animal Irradiators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstone, Chris; Bazalova-Carter, Magdalena

    Purpose: To develop quality assurance (QA) standards and tolerance levels for image quality of small animal irradiators. Methods: A fully automated in-house QA software for image analysis of a commercial microCT phantom was created. Quantitative analyses of CT linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, modulation transfer function (MTF), and CT number evaluation were performed. Phantom microCT scans from seven institutions acquired with varying parameters (kVp, mA, time, voxel size, and frame rate) and five irradiator units (Xstrahl SARRP, PXI X-RAD 225Cx, PXI X-RAD SmART, GE eXplore CT/RT 140, and GE eXplore CT 120) were analyzed. Multi-institutional data sets were compared using our in-house software to establish pass/fail criteria for each QA test. Results: CT linearity (R2>0.996) was excellent at all but Institution 2. Acceptable SNR (>35) and noise levels (<55HU) were obtained at four of the seven institutions, where failing scans were acquired with less than 120mAs. Acceptable MTF (>1.5 lp/mm for MTF=0.2) was obtained at all but Institution 6 due to the largest scan voxel size (0.35mm). The geometric accuracy passed (<1.5%) at five of the seven institutions. Conclusion: Our QA software can be used to rapidly perform quantitative imaging QA for small animal irradiators, accumulate results over time, and display possible changes in imaging functionality from the original performance and/or from the recommended tolerance levels. This tool will aid researchers in maintaining high image quality, enabling precise conformal dose delivery to small animals.
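
    The SNR and noise checks above reduce to simple region-of-interest statistics compared against the quoted pass/fail criteria (SNR > 35, noise < 55 HU). A minimal sketch: the mean/standard-deviation definitions and the `water_offset` shift are assumptions rather than the published software's documented method, and the ROI values are hypothetical:

```python
import statistics

# Pass/fail thresholds quoted in the study.
SNR_MIN = 35.0
NOISE_MAX_HU = 55.0

def roi_qa(roi_values_hu, water_offset=1000.0):
    """Evaluate SNR and noise for a uniform-region ROI (values in HU).

    CT numbers are shifted by `water_offset` so a water-equivalent ROI
    near 0 HU yields a meaningful mean for the SNR ratio (an assumption,
    not the commercial software's documented definition).
    """
    mean = statistics.fmean(roi_values_hu)
    noise = statistics.stdev(roi_values_hu)  # sample standard deviation
    snr = (mean + water_offset) / noise
    return {
        "snr": snr,
        "noise_hu": noise,
        "snr_pass": snr > SNR_MIN,
        "noise_pass": noise < NOISE_MAX_HU,
    }

# Hypothetical uniform ROI: mean ~0 HU with moderate noise.
report = roi_qa([10.0, -15.0, 5.0, -20.0, 25.0, -5.0])
print(report)
```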

  18. Validity and reproducibility of cephalometric measurements obtained from digital photographs of analogue headfilms.

    PubMed

    Grybauskas, Simonas; Balciuniene, Irena; Vetra, Janis

    2007-01-01

    The emerging market of digital cephalographs and computerized cephalometry is overwhelming the need to examine the advantages and drawbacks of manual cephalometry; meanwhile, small offices continue to benefit from the economic efficacy and ease of use of analogue cephalograms. The use of modern cephalometric software requires import of digital cephalograms or digital capture of analogue data: scanning and digital photography. The validity of digital photographs of analogue headfilms, rather than original headfilms, in clinical practice has not been well established. Digital photography could be a fast and inexpensive method of digital capture of analogue cephalograms for use in digital cephalometry. The objective of this study was to determine the validity and reproducibility of measurements obtained from digital photographs of analogue headfilms in lateral cephalometry. Analogue cephalometric radiographs were performed on 15 human dry skulls. Each of them was traced on acetate paper and photographed three times independently. Acetate tracings and digital photographs were digitized and analyzed in cephalometric software. A linear regression model, paired t-test intergroup analysis, and the coefficient of repeatability were used to assess validity and reproducibility for 63 angular, linear and derivative measurements. 54 out of 63 measurements were determined to have clinically acceptable reproducibility in the acetate tracing group, as were 46 out of 63 in the digital photography group. The worst reproducibility was determined for measurements dependent on landmarks of incisors and poorly defined outlines, the majority of them being angular measurements. Validity was acceptable for all measurements, and although statistically significant differences between methods existed for as many as 15 parameters, they appeared to be clinically insignificant, being smaller than 1 unit of measurement. 
Validity was acceptable for 59 of 63 measurements obtained from digital photographs, substantiating the use of digital photography for headfilm capture and computer-aided cephalometric analysis.
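
    The coefficient of repeatability used above is commonly computed, Bland-Altman style, as 1.96 times the standard deviation of the paired differences between two repeated measurement sessions; the study's own computational details are not given here, and the repeated digitizations below are hypothetical:

```python
import statistics

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 * SD of the paired
    differences between two repeated measurement sessions. About 95% of
    repeated measurements of the same subject are expected to differ by
    less than this value."""
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * statistics.stdev(diffs)

# Hypothetical repeated digitizations of one angular measurement (degrees).
session_1 = [32.1, 28.4, 30.0, 29.5, 31.2]
session_2 = [31.8, 28.9, 29.6, 29.9, 30.8]
cr = coefficient_of_repeatability(session_1, session_2)
print(round(cr, 2))
```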

  19. Clinical Implications of TiGRT Algorithm for External Audit in Radiation Oncology.

    PubMed

    Shahbazi-Gahrouei, Daryoush; Saeb, Mohsen; Monadi, Shahram; Jabbari, Iraj

    2017-01-01

    Performing audits plays an important role in quality assurance programs in radiation oncology. Among different algorithms, TiGRT is one of the common application software packages for dose calculation. This study aimed to assess the clinical implications of the TiGRT algorithm by measuring dose and comparing it to the calculated dose delivered to patients for a variety of cases, with and without the presence of inhomogeneities and beam modifiers. A nonhomogeneous phantom as quality dose verification phantom, Farmer ionization chambers, and a PC-electrometer (Sun Nuclear, USA) as a reference-class electrometer were employed throughout the audit at linear accelerator 6 and 18 MV energies (Siemens ONCOR Impression Plus, Germany). Seven test cases were performed using a semi-CIRS phantom. In homogeneous regions and simple plans for both energies, there was good agreement between measured and treatment planning system calculated dose. Their relative error was found to be between 0.8% and 3%, which is acceptable for an audit, but in nonhomogeneous organs, such as lung, a few errors were observed. In complex treatment plans, when a wedge or shield was placed in the beam path, the error was within the accepted criteria. In complex beam plans, the difference between measured and calculated dose was found to be 2%-3%. All differences were obtained between 0.4% and 1%. Good consistency was observed for the same type of energy in the homogeneous and nonhomogeneous phantom for the three-dimensional conformal field with wedge, shield, and asymmetric fields using the TiGRT treatment planning software in the studied center. The results revealed that the national status of TPS calculations and dose delivery for 3D conformal radiotherapy was globally within acceptable standards with no major causes for concern.
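
    The measured-versus-calculated comparisons above reduce to a percent-deviation check against a tolerance. A sketch assuming a 3% tolerance, which matches the agreement range reported; the audit's formal per-case criteria and the readings below are not from the study:

```python
def dose_deviation_pct(measured, calculated):
    """Relative deviation (%) of TPS-calculated dose from measured dose."""
    return 100.0 * abs(calculated - measured) / measured

def audit_case(measured, calculated, tolerance_pct=3.0):
    """Pass/fail for one audit test case. The 3% tolerance mirrors the
    agreement level reported in the study; real audit protocols define
    per-case tolerances."""
    dev = dose_deviation_pct(measured, calculated)
    return dev, dev <= tolerance_pct

# Hypothetical chamber reading vs. TiGRT-calculated dose (cGy).
dev, ok = audit_case(measured=200.0, calculated=204.2)
print(f"{dev:.1f}% deviation, pass={ok}")
```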

  1. Towards Certification of a Space System Application of Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Markosian, Lawrence Z.

    2008-01-01

    Advanced fault detection, isolation and recovery (FDIR) software is being investigated at NASA as a means to improve the reliability and availability of its space systems. Certification is a critical step in the acceptance of such software. Its attainment hinges on performing the necessary verification and validation to show that the software will fulfill its requirements in the intended setting. Presented herein is our ongoing work to plan for the certification of a pilot application of advanced FDIR software in a NASA setting. We describe the application and the key challenges and opportunities it offers for certification.

  2. Computer-assisted learning in medicine. How to create a novel software for immunology.

    PubMed

    Colsman, Andreas; Sticherling, Michael; Stöpel, Claus; Emmrich, Frank

    2006-06-01

    Teaching medical issues is increasingly demanding due to the permanent progress in medical sciences. Simultaneously, software applications are rapidly advancing with regard to their availability and ease of use. Here a novel teaching program is presented for immunology, one of the fastest expanding topics in medical sciences. The requirements of media didactics were transferred to this e-learning tool for German students. After implementation, medical students evaluated the software, and the different learning approaches were well accepted. Altogether, this novel software compares favourably to other English-language e-learning tools available on the Internet.

  3. Using mobile technology to deliver a cognitive behaviour therapy-informed intervention in early psychosis (Actissist): study protocol for a randomised controlled trial.

    PubMed

    Bucci, Sandra; Barrowclough, Christine; Ainsworth, John; Morris, Rohan; Berry, Katherine; Machin, Matthew; Emsley, Richard; Lewis, Shon; Edge, Dawn; Buchan, Iain; Haddock, Gillian

    2015-09-10

    Cognitive behaviour therapy (CBT) is recommended for the treatment of psychosis; however, only a small proportion of service users have access to this intervention. Smartphone technology using software applications (apps) could increase access to psychological approaches for psychosis. This paper reports the protocol development for a clinical trial of smartphone-based CBT. We present a study protocol that describes a single-blind randomised controlled trial comparing a cognitive behaviour therapy-informed software application (Actissist) plus Treatment As Usual (TAU) with a symptom monitoring software application (ClinTouch) plus TAU in early psychosis. The study consists of a 12-week intervention period. We aim to recruit and randomly assign 36 participants registered with early intervention services (EIS) across the North West of England, UK, in a 2:1 ratio to each arm of the trial. Our primary objective is to determine whether in people with early psychosis the Actissist app is feasible to deliver and acceptable to use. Secondary aims are to determine whether Actissist impacts on predictors of first episode psychosis (FEP) relapse and enhances user empowerment, functioning and quality of life. Assessments will take place at baseline, 12 weeks (post-treatment) and 22 weeks (10 weeks post-treatment) by assessors blind to treatment condition. The trial will report on the feasibility and acceptability of Actissist and compare outcomes between the randomised arms. The study also incorporates semi-structured interviews about the experience of participating in the Actissist trial that will be qualitatively analysed to inform future developments of the Actissist protocol and app. To our knowledge, this is the first controlled trial to test the feasibility, acceptability, uptake, attrition and potential efficacy of a CBT-informed smartphone app for early psychosis. 
Mobile applications designed to deliver a psychologically-informed intervention offer new possibilities to extend the reach of traditional mental health service delivery across a range of serious mental health problems and provide choice about available care. ISRCTN34966555. Date of first registration: 12 June 2014.

  4. Knowledge, attitudes and acceptability to provider-initiated HIV testing and counseling: patients' perspectives in Moshi and Rombo Districts, Tanzania.

    PubMed

    Manongi, Rachel; Mahande, Michael; Njau, Bernard

    2014-10-01

    Provider-initiated HIV testing and counseling (PITC) is referred to as routine testing in a clinical setting as part of a standard programme of medical services. PITC is initiated in order to avoid missed opportunities for people to get tested for HIV. While advocated as a strategy, there is a dearth of information on patients' views on PITC in a number of districts in Tanzania. The objective of this study was to assess the knowledge, attitude and acceptability of PITC services among patients attending health care facilities in rural and urban settings in Kilimanjaro region. A total of 12 focus group discussions (FGDs) were conducted with 99 (73 female and 26 male) patients enrolled into out-patient clinics in 8 (2 hospitals and 6 primary care centers) health facilities in Moshi Urban and Rombo districts in northern Tanzania. The study explored knowledge, attitudes and acceptability of PITC, perceived benefits and barriers of PITC, and ethical issues related to PITC. Interviews were audio taped, transcribed, translated, and analyzed using Non-numerical Unstructured Data Indexing and Theorizing (NUDIST) software. Knowledge about PITC services was generally low. Compared to men, women had a more positive attitude towards PITC services, because of its ability to identify and treat undiagnosed HIV cases. HIV stigma was regarded as a major barrier to patients' uptake of PITC. Institutional factors such as lack of supplies and human resources were identified as barriers to successful provision of PITC. In conclusion, the findings highlight both opportunities and potential barriers in the successful uptake of PITC, and underscore the importance of informed consent, counseling and confidentiality and the need for specific strategies on advocacy for the service.

  5. Results of the Updated NASA Kennedy Space Center 50-MHz Doppler Radar Wind Profiler Operational Acceptance Test

    NASA Technical Reports Server (NTRS)

    Barbre', Robert E., Jr.; Deker, Ryan K.; Leahy, Frank B.; Huddleston, Lisa

    2016-01-01

    We present here the methodology and results of the Operational Acceptance Test (OAT) performed on the new Kennedy Space Center (KSC) 50-MHz Doppler Radar Wind Profiler (DRWP). On day-of-launch (DOL), space launch vehicle operators have used data from the DRWP to invalidate winds in prelaunch loads and trajectory assessments due to the DRWP's capability to quickly identify changes in the wind profile within a rapidly-changing wind environment. The previous DRWP has been replaced with a completely new system, which needs to undergo certification testing before being accepted for use in range operations. The new DRWP replaces the previous three-beam system made of coaxial cables and a copper wire ground plane with a four-beam system that uses Yagi antennae with enhanced beam steering capability. In addition, the new system contains updated user interface software while maintaining the same general capability as the previous system. The new DRWP continues to use the Median Filter First Guess (MFFG) algorithm to generate a wind profile from Doppler spectra at each range gate. DeTect (2015) contains further details on the upgrade. The OAT is a short-term test designed so that end users can utilize the new DRWP in a similar manner to the previous DRWP during mission operations at the Eastern Range in the midst of a long-term certification process. This paper describes the Marshall Space Flight Center Natural Environments Branch's (MSFC NE's) analyses to verify the quality and accuracy of the DRWP's meteorological data output as compared to the previous DRWP. Ultimately, each launch vehicle program has the responsibility to certify the system for their own use.
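    The Median Filter First Guess idea can be illustrated schematically: take the median of recent accepted wind estimates as a first guess, then select the spectral peak closest to that guess, so that an isolated spurious peak does not capture the retrieval. This is a minimal sketch, not the operational KSC algorithm; the function names, window length, and selection rule are illustrative assumptions.

```python
import numpy as np

def first_guess(prev_velocities, window=5):
    """Illustrative first guess: median of the most recent accepted estimates."""
    return float(np.median(np.asarray(prev_velocities, dtype=float)[-window:]))

def select_peak(candidate_velocities, guess):
    """Pick the candidate spectral-peak velocity closest to the first guess."""
    c = np.asarray(candidate_velocities, dtype=float)
    return float(c[np.argmin(np.abs(c - guess))])
```

    In this sketch a single contaminated estimate barely moves the median, so a nearby meteorologically plausible peak is chosen over a stronger spurious one.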

  6. Consumer Security Perceptions and the Perceived Influence on Adopting Cloud Computing: A Quantitative Study Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Paquet, Katherine G.

    2013-01-01

    Cloud computing may provide cost benefits for organizations by eliminating the overhead costs of software, hardware, and maintenance (e.g., license renewals, upgrading software, servers and their physical storage space, administration along with funding a large IT department). In addition to the promised savings, the organization may require…

  7. Student Perceptions of a Trial of Electronic Text Matching Software: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Green, David; Lindemann, Iris; Marshall, Kelly; Wilkinson, Grette

    2005-01-01

    It is accepted that using electronic detection methods has benefits within an overall strategy to promote academic integrity in an institution. Little attention has been paid to obtaining student perceptions to evaluate the cost/benefit of using such methods. This study reports on the evaluation of a trial of Turnitin software. 728 students…

  8. Why Don't All Maths Teachers Use Dynamic Geometry Software in Their Classrooms?

    ERIC Educational Resources Information Center

    Stols, Gerrit; Kriek, Jeanne

    2011-01-01

    In this exploratory study, we sought to examine the influence of mathematics teachers' beliefs on their intended and actual usage of dynamic mathematics software in their classrooms. The theory of planned behaviour (TPB), the technology acceptance model (TAM) and the innovation diffusion theory (IDT) were used to examine the influence of teachers'…

  9. Factors Influencing the Behavioural Intention to Use Statistical Software: The Perspective of the Slovenian Students of Social Sciences

    ERIC Educational Resources Information Center

    Brezavšcek, Alenka; Šparl, Petra; Žnidaršic, Anja

    2017-01-01

    The aim of the paper is to investigate the main factors influencing the adoption and continuous utilization of statistical software among university social sciences students in Slovenia. Based on the Technology Acceptance Model (TAM), a conceptual model was derived where five external variables were taken into account: statistical software…

  10. The Software Engineering Prototype.

    DTIC Science & Technology

    1983-06-01

    This only means that the 'claim', i.e., "accepted wisdom" in systems design, was set up as the alternative to the hypothesis, in accord with tradition...conflict and its resolution are most likely to occur when users can exercise their influence in the development process. Conflict itself does not lead...the traditional method of software de- velopment often has poor results. Recently, a new approach to software development, the prototype approach

  11. Software Writing Skills for Your Research - Lessons Learned from Workshops in the Geosciences

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin

    2016-04-01

    Findings presented in scientific papers are based on data and software. Once in a while they come along with data - but not commonly with software. However, the software used to obtain the findings plays a crucial role in the scientific work. Nevertheless, software is rarely seen as publishable. Thus researchers may not be able to reproduce the findings without the software, which is in conflict with the principle of reproducibility in science. For both the writing of publishable software and the reproducibility issue, the quality of software is of utmost importance. For many programming scientists the treatment of source code, e.g. with code design, version control, documentation, and testing, is associated with additional work that is not covered in the primary research task. This includes the adoption of processes following the software development life cycle. However, the adoption of software engineering rules and best practices has to be recognized and accepted as part of the scientific performance. Most scientists have little incentive to improve code and do not publish code, because software engineering habits are rarely practised by researchers or students. Software engineering skills are not passed on to followers as paper writing skills are. Thus it is often felt that the software or code produced is not publishable. The quality of software and its source code has a decisive influence on the quality of research results obtained and their traceability. So establishing best practices from software engineering to serve scientific needs is crucial for the success of scientific software. Even though scientists use existing software and code, e.g. from open source software repositories, only few contribute their code back into the repositories. So writing and opening code for Open Science means that subsequent users are able to run the code, e.g. through the provision of sufficient documentation, sample data sets, tests and comments, which in turn can be proven by adequate and qualified reviews. This assumes that scientists learn to write and release code and software as they learn to write and publish papers. With this in mind, software could be valued and assessed as a contribution to science. But this requires the relevant skills that can be passed on to colleagues and followers. Therefore, the GFZ German Research Centre for Geosciences performed three workshops in 2015 to address the passing of software writing skills to young scientists, the next generation of researchers in the Earth, planetary and space sciences. Experiences in running these workshops and the lessons learned will be summarized in this presentation. The workshops received support and funding from Software Carpentry, a volunteer organization whose goal is to make scientists more productive, and their work more reliable, by teaching them basic computing skills, and from FOSTER (Facilitate Open Science Training for European Research), a two-year, EU-funded (FP7) project whose goal is to produce a European-wide training programme that will help to incorporate Open Access approaches into existing research methodologies and to integrate Open Science principles and practice into the current research workflow by targeting young researchers and other stakeholders.

  12. Ground Data System Risk Mitigation Techniques for Faster, Better, Cheaper Missions

    NASA Technical Reports Server (NTRS)

    Catena, John J.; Saylor, Rick; Casasanta, Ralph; Weikel, Craig; Powers, Edward I. (Technical Monitor)

    2000-01-01

    With the advent of faster, cheaper, and better missions, NASA Projects acknowledged that a higher level of risk was inherent and accepted with this approach. It was incumbent, however, upon each component of the Project, whether spacecraft, payload, launch vehicle, or ground data system, to ensure that the mission would nevertheless be an unqualified success. The Small Explorer (SMEX) program's ground data system (GDS) team developed risk mitigation techniques to achieve these goals starting in 1989. These techniques have evolved through the SMEX series of missions and are practiced today under the Triana program. These techniques are: (1) Mission Team Organization--empowerment of a close-knit ground data system team comprising system engineering, software engineering, testing, and flight operations personnel; (2) Common Spacecraft Test and Operational Control System--utilization of the pre-launch spacecraft integration system as the post-launch ground data system on-orbit command and control system; (3) Utilization of operations personnel in pre-launch testing--making the flight operations team an integrated member of the spacecraft testing activities at the beginning of the spacecraft fabrication phase; (4) Consolidated Test Team--combined system, mission readiness and operations testing to optimize test opportunities with the ground system and spacecraft; and (5) Reuse of Spacecraft, Systems and People--reuse of people, software and on-orbit spacecraft throughout the SMEX mission series. The SMEX ground system development approach for faster, cheaper, better missions has been very successful. This paper will discuss these risk management techniques in the areas of ground data system design, implementation, test, and operational readiness.

  13. EOS MLS Level 1B Data Processing Software. Version 3

    NASA Technical Reports Server (NTRS)

    Perun, Vincent S.; Jarnot, Robert F.; Wagner, Paul A.; Cofield, Richard E., IV; Nguyen, Honghanh T.; Vuu, Christina

    2011-01-01

    This software is an improvement on Version 2, which was described in EOS MLS Level 1B Data Processing, Version 2.2, NASA Tech Briefs, Vol. 33, No. 5 (May 2009), p. 34. It accepts the EOS MLS Level 0 science/engineering data and the EOS Aura spacecraft ephemeris/attitude data, and produces calibrated instrument radiances and associated engineering and diagnostic data. This version makes the code more robust, improves calibration, provides more diagnostic outputs, defines the Galactic core more finely, and fixes the equator crossing. The Level 1 processing software manages several different tasks. It qualifies each data quantity using instrument configuration and checksum data, as well as data transmission quality flags. Statistical tests are applied for data quality and reasonableness. The instrument engineering data (e.g., voltages, currents, temperatures, and encoder angles) is calibrated by the software, and the filter channel space reference measurements are interpolated onto the times of each limb measurement, with the interpolated values then differenced from the measurements. Filter channel calibration target measurements are interpolated onto the times of each limb measurement, and are used to compute radiometric gain. The total signal power is determined and analyzed for each digital autocorrelator spectrometer (DACS) during each data integration. The software converts each DACS data integration from an autocorrelation measurement in the time domain into a spectral measurement in the frequency domain, and estimates separately the smoothly varying and spectrally averaged components of the limb port signal arising from antenna emission and scattering effects. Limb radiances are also calibrated.
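    The calibration and spectral steps described above can be sketched in a few lines. This is not the MLS flight code; the simple two-point gain model and the names below are illustrative assumptions. Space and calibration-target views bracket each limb measurement, and the Wiener-Khinchin relation turns a DACS autocorrelation into a power spectrum.

```python
import numpy as np

def radiometric_gain(space_counts, target_counts, space_radiance, target_radiance):
    """Two-point gain (counts per radiance unit) from space and target views."""
    return (target_counts - space_counts) / (target_radiance - space_radiance)

def calibrate_limb(limb_counts, space_counts, gain, space_radiance):
    """Difference limb counts against the interpolated space reference,
    then scale by the gain to obtain calibrated radiance."""
    return space_radiance + (limb_counts - space_counts) / gain

def acf_to_spectrum(acf):
    """Wiener-Khinchin: the power spectrum is the Fourier transform of the
    autocorrelation. Mirror the one-sided ACF to enforce even symmetry."""
    sym = np.concatenate([acf, acf[-2:0:-1]])
    return np.fft.rfft(sym).real
```

    A constant (fully correlated) ACF concentrates all power in the zero-frequency bin, which is a quick sanity check on the transform.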

  14. X-ray system simulation software tools for radiology and radiography education.

    PubMed

    Kengyelics, Stephen M; Treadgold, Laura A; Davies, Andrew G

    2018-02-01

    To develop x-ray simulation software tools to support delivery of radiological science education for a range of learning environments and audiences, including individual study, lectures, and tutorials. Two software tools were developed; one simulated x-ray production for a simple two-dimensional radiographic system geometry comprising an x-ray source, beam filter, test object and detector. The other simulated the acquisition and display of two-dimensional radiographic images of complex three-dimensional objects using a ray-casting algorithm through three-dimensional mesh objects. Both tools were intended to be simple to use, produce results accurate enough to be useful for educational purposes, and have an acceptable simulation time on modest computer hardware. The radiographic factors and acquisition geometry could be altered in both tools via their graphical user interfaces. A comparison of radiographic contrast measurements from the simulators to a real system was performed. The contrast output of the simulators had excellent agreement with measured results. The software simulators were deployed to 120 computers on campus. The software tools developed are easy to use, clearly demonstrate important x-ray physics and imaging principles, are accessible within a standard University setting and could be used to enhance the teaching of x-ray physics to undergraduate students. Current approaches to teaching x-ray physics in radiological science lack immediacy when linking theory with practice. This method of delivery allows students to engage with the subject in an experiential learning environment. Copyright © 2017. Published by Elsevier Ltd.
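    The contrast behaviour such simulators reproduce follows from Beer-Lambert attenuation. The sketch below is not the authors' ray-casting tool; it is a minimal monoenergetic illustration, and the attenuation coefficient and thickness values are made up for the example.

```python
import math

def transmitted(i0, mu, thickness_cm):
    """Beer-Lambert attenuation of a monoenergetic beam: I = I0 * exp(-mu * t)."""
    return i0 * math.exp(-mu * thickness_cm)

def radiographic_contrast(i_background, i_object):
    """Simple contrast of an object against the open-beam background."""
    return (i_background - i_object) / i_background
```

    A thicker or more attenuating test object transmits less and therefore shows higher contrast against the open beam, which is the behaviour students can explore by varying the radiographic factors.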

  15. ESML for Earth Science Data Sets and Analysis

    NASA Technical Reports Server (NTRS)

    Graves, Sara; Ramachandran, Rahul

    2003-01-01

    The primary objective of this research project was to transition ESML from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1) Refinement of the ESML schema and software libraries in cooperation with the user community; 2) Application of the ESML schema and software to a variety of Earth science data sets and analysis tools; 3) Development of supporting prototype software for enhanced ease of use; 4) Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate; and 5) Widespread publication of the ESML approach, schema, and software.

  16. Earth Science Markup Language: Transitioning From Design to Application

    NASA Technical Reports Server (NTRS)

    Moe, Karen; Graves, Sara; Ramachandran, Rahul

    2002-01-01

    The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.

  17. Estimation of light commercial vehicles dynamics by means of HIL-testbench simulation

    NASA Astrophysics Data System (ADS)

    Groshev, A.; Tumasov, A.; Toropov, E.; Sereda, P.

    2018-02-01

    A high level of active safety in vehicles is impossible without electronic driver assistance systems. The electronic stability control (ESC) system is one of them. Nowadays such systems are obligatory for installation on vehicles of different categories. The active safety level of vehicles with ESC can be approved by means of high-speed road tests. The most frequently implemented tests are the “fish hook” and “sine with dwell” tests. Such tests are provided by Global Technical Regulation No. 8, published by the United Nations Economic Commission for Europe, as well as by ECE 13-11. At the same time, not only road tests can be used for estimation of vehicle dynamics. Modern software and hardware technologies allow imitating real tests with acceptable reliability and good convergence between real test data and simulation results. ECE 13-11 Annex 21 - Appendix 1, “Use Of The Dynamic Stability Simulation”, regulates demands for a special simulation test bench that can be used not only for preliminary estimation of vehicle dynamics, but also for official vehicle homologation. This paper describes the approach proposed by researchers from Nizhny Novgorod State Technical University n.a. R.E. Alekseev (NNSTU, Russia) with support from engineers of the United Engineering Center GAZ Group, as well as specialists of the Gorky Automobile Plant. The idea of the approach is to use a special HIL (hardware-in-the-loop) test bench consisting of a real-time PC with real-time software and braking system components, including the electronic control unit (ECU) of the ESC system. The HIL test bench allows imitating vehicle dynamics in conditions of the “fish hook” and “sine with dwell” tests. The paper describes the scheme and structure of the HIL test bench and some peculiarities that should be taken into account during HIL simulation.

  18. Usability Testing of an Interactive Virtual Reality Distraction Intervention to Reduce Procedural Pain in Children and Adolescents With Cancer.

    PubMed

    Birnie, Kathryn A; Kulandaivelu, Yalinie; Jibb, Lindsay; Hroch, Petra; Positano, Karyn; Robertson, Simon; Campbell, Fiona; Abla, Oussama; Stinson, Jennifer

    2018-06-01

    Needle procedures are among the most distressing aspects of pediatric cancer-related treatment. Virtual reality (VR) distraction offers promise for needle-related pain and distress given its highly immersive and interactive virtual environment. This study assessed the usability (ease of use and understanding, acceptability) of a custom VR intervention for children with cancer undergoing implantable venous access device (IVAD) needle insertion. Three iterative cycles of mixed-method usability testing with semistructured interviews were undertaken to refine the VR. Participants included 17 children and adolescents (8-18 years old) with cancer who used the VR intervention prior to or during IVAD access. Most participants reported the VR as easy to use (82%) and understand (94%), and would like to use it during subsequent needle procedures (94%). Based on usability testing, refinements were made to VR hardware, software, and clinical implementation. Refinements focused on increasing responsiveness, interaction, and immersion of the VR program, reducing head movement for VR interaction, and enabling participant alerts to steps of the procedure by clinical staff. No adverse events of nausea or dizziness were reported. The VR intervention was deemed acceptable and safe. Next steps include assessing feasibility and effectiveness of the VR intervention for pain and distress.

  19. [Patient safety in primary care: PREFASEG project].

    PubMed

    Catalán, Arantxa; Borrell, Francesc; Pons, Angels; Amado, Ester; Baena, José Miguel; Morales, Vicente

    2014-07-01

    The Institut Català de la Salut (ICS) has designed and integrated into its primary care electronic clinical workstation a new software tool to support the prescription of drugs, which can detect certain medication errors on-line. The software, called PREFASEG (for secure drug prescription), aims to prevent adverse events related to medication use in the field of primary health care (PHC). This study was made on the computerized medical record called CPT, which is used by all 3,750 PHC physicians in our institution, who prescribe through it. PREFASEG was integrated into eCAP in July 2010, and six months later we performed a cross-sectional study to evaluate its usefulness and refine its design. The software alerts on-line along 5 dimensions: drug interactions, redundant treatments, allergies, drug-disease contraindications, and drugs advised against in patients over 75 years. PREFASEG generated 1,162,765 alerts (one per 10 new prescriptions), with therapeutic duplication (62%) the most frequently alerted. The overall acceptance rate was 35%, with pharmacological redundancies (43%) and allergies (26%) the most accepted. A total of 10,808 professionals (doctors and nurses) have accepted some of the recommendations of the program. PREFASEG is a feasible and highly efficient strategy to achieve an objective of the Quality Plan for the NHS. Copyright © 2014. Published by Elsevier Espana.

  20. Development of a simple computerized torsion test to quantify subjective ocular torsion.

    PubMed

    Kim, Y D; Yang, H K; Hwang, J-M

    2017-11-01

    Purpose The double Maddox-rod test (DMRT) and Lancaster red-green test (LRGT) are the most widely used tests worldwide to assess subjective ocular torsion. However, these tests require equipment, and the quantified results of ocular torsion are only provided in rough values. Here we developed a novel computerized torsion test (CTT) for individual assessment of subjective ocular torsion and validated the reliability and accuracy of the test compared with those of the DMRT and LRGT. Methods A total of 30 patients with cyclovertical strabismus and 30 controls were recruited. The CTT was designed using Microsoft Office PowerPoint. Subjects wore red-green filter spectacles, viewed gradually tilted red and cyan lines on an LCD monitor, and pressed the keyboard to go through the slides until both lines seemed parallel. All subjects underwent the CTT, DMRT, and LRGT. Intraclass correlation coefficients and Bland-Altman plots were analyzed to assess the acceptability of the CTT compared with that of the DMRT. Results Both the DMRT and CTT showed no significant test-retest differences in the strabismus and control groups. The DMRT and CTT results demonstrated an acceptable agreement. The reliability of the CTT was better than that of the DMRT. The LRGT showed low sensitivity for the detection of ocular torsion compared with the DMRT (40.0%) and CTT (39.1%). Conclusion Our results suggest that the assessment of subjective ocular torsion using the CTT based on PowerPoint software is simple, reproducible, and accurate and can be applied in clinical practice.

  1. Goddard Space Flight Center's Structural Dynamics Data Acquisition System

    NASA Technical Reports Server (NTRS)

    McLeod, Christopher

    2004-01-01

    Turnkey Commercial Off The Shelf (COTS) data acquisition systems typically perform well and meet most of the objectives of the manufacturer. The problem is that they seldom meet most of the objectives of the end user. The analysis software, if any, is unlikely to be tailored to the end users specific application; and there is seldom the chance of incorporating preferred algorithms to solve unique problems. Purchasing a customized system allows the end user to get a system tailored to the actual application, but the cost can be prohibitive. Once the system has been accepted, future changes come with a cost and response time that's often not workable. When it came time to replace the primary digital data acquisition system used in the Goddard Space Flight Center's Structural Dynamics Test Section, the decision was made to use a combination of COTS hardware and in-house developed software. The COTS hardware used is the DataMAX II Instrumentation Recorder built by R.C. Electronics Inc. and a desktop Pentium 4 computer system. The in-house software was developed using MATLAB from The MathWorks. This paper will describe the design and development of the new data acquisition and analysis system.

  3. Development of a Pediatric Visual Field Test

    PubMed Central

    Miranda, Marco A.; Henson, David B.; Fenerty, Cecilia; Biswas, Susmito; Aslam, Tariq

    2016-01-01

    Purpose We describe a pediatric visual field (VF) test based on a computer game where software and hardware combine to provide an enjoyable test experience. Methods The test software consists of a platform-based computer game presented to the central VF. A storyline was created around the game as was a structure surrounding the computer monitor to enhance patients' experience. The patient is asked to help the central character collect magic coins (stimuli). To collect these coins a series of obstacles need to be overcome. The test was presented on a Sony PVM-2541A monitor calibrated from a central midpoint with a Minolta CS-100 photometer placed at 50 cm. Measurements were performed at 15 locations on the screen and the contrast calculated. Retinal sensitivity was determined by modulating stimulus in size. To test the feasibility of the novel approach 20 patients (4–16 years old) with no history of VF defects were recruited. Results For the 14 subjects completing the study, 31 ± 15 data points were collected on 1 eye of each patient. Mean background luminance and stimulus contrast were 9.9 ± 0.3 cd/m2 and 27.9 ± 0.1 dB, respectively. Sensitivity values obtained were similar to an adult population but variability was considerably higher – 8.3 ± 9.0 dB. Conclusions Preliminary data show the feasibility of a game-based VF test for pediatric use. Although the test was well accepted by the target population, test variability remained very high. Translational Relevance Traditional VF tests are not well tolerated by children. This study describes a child-friendly approach to test visual fields in the targeted population. PMID:27980876

  4. Can your software engineer program your PLC?

    NASA Astrophysics Data System (ADS)

    Borrowman, Alastair J.; Taylor, Philip

    2016-07-01

    The use of Programmable Logic Controllers (PLCs) in the control of large physics experiments is ubiquitous [1, 2, 3]. The programming of these controllers is normally the domain of engineers with a background in electronics; this paper introduces PLC program development from the software engineer's perspective. PLC programs provide the link between control software running on PC architecture systems and physical hardware controlled and monitored by digital and analog signals. The higher-level software running on the PC is typically responsible for accepting operator input and from this deciding when and how hardware connected to the PLC is controlled. The PLC accepts demands from the PC, considers the current state of its connected hardware and, if correct to do so (based upon interlocks or other constraints), adjusts its hardware output signals appropriately for the PC's demands. A published ICD (Interface Control Document) defines the PLC memory locations available to be written and read by the PC to control and monitor the hardware. Historically the method of programming PLCs has been ladder diagrams that closely resemble circuit diagrams; however, PLC manufacturers nowadays also provide, and promote, the use of higher-level programming languages [4]. Based on techniques used in the development of high-level PC software to control PLCs for multiple telescopes, this paper examines the development of PLC programs to operate the hardware of a medical cyclotron beamline controlled from a PC using the Experimental Physics and Industrial Control System (EPICS), which is also widely used in telescope control [5, 6, 7]. The PLC used is the new-generation Siemens S7-1200, programmed using Siemens' Pascal-based Structured Control Language (SCL), which is their implementation of Structured Text (ST).
    The approach described is from a software engineer's perspective, utilising the Siemens Totally Integrated Automation (TIA) Portal integrated development environment (IDE) to create modular PLC programs based upon reusable functions capable of being unit tested without the PLC connected to hardware. Emphasis has been placed on designing an interface between EPICS and SCL that enforces correct operation of hardware through stringent separation of PC-accessible PLC memory and hardware I/O addresses used only by the PLC. The paper also introduces the method used to automate the creation, from the same source document, of the PLC memory structure (tag) definitions (defining memory used to access hardware I/O and memory accessed by the PC) and of the PC program data structures (EPICS database records) used to access the permitted PLC addresses. From direct experience this paper demonstrates the advantages of PLC program development being shared between electronic and software engineers, enabling use of the most appropriate processes from both the perspective of the hardware and the higher-level software used to control it.
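    Single-source generation of tag definitions and EPICS records might be sketched as follows. The tag table, naming scheme, and emitted formats below are illustrative assumptions rather than the authors' actual tooling: one table drives both an SCL-style data-block declaration and the matching EPICS database records, so the PC side can only address tags the ICD permits.

```python
# Hypothetical shared tag table: (name, PLC type, EPICS record type, description)
TAGS = [
    ("beamCurrentDemand", "Real", "ao", "Requested beam current"),
    ("magnetEnable",      "Bool", "bo", "Magnet power enable"),
]

def scl_data_block(tags, block="PC_Interface"):
    """Emit an SCL-style data block declaring the PC-accessible memory."""
    lines = [f'DATA_BLOCK "{block}"', "VAR"]
    for name, plc_type, _rtype, desc in tags:
        lines.append(f"    {name} : {plc_type};   // {desc}")
    lines += ["END_VAR", "END_DATA_BLOCK"]
    return "\n".join(lines)

def epics_db(tags, prefix="PLC1"):
    """Emit matching EPICS records, one per permitted tag."""
    recs = []
    for name, _plc_type, rtype, desc in tags:
        recs.append(f'record({rtype}, "{prefix}:{name}") {{\n'
                    f'    field(DESC, "{desc}")\n'
                    f'}}')
    return "\n".join(recs)
```

    Because both outputs come from the same table, adding a tag in one place keeps the PLC declaration and the PC-side records in step, which is the point of driving everything from one source document.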

  5. Vision Screening for Children 36 to <72 Months: Recommended Practices

    PubMed Central

    Cotter, Susan A.; Cyert, Lynn A.; Miller, Joseph M.; Quinn, Graham E.

    2015-01-01

    ABSTRACT Purpose This article provides recommendations for screening children aged 36 to younger than 72 months for eye and visual system disorders. The recommendations were developed by the National Expert Panel to the National Center for Children’s Vision and Eye Health, sponsored by Prevent Blindness, and funded by the Maternal and Child Health Bureau of the Health Resources and Services Administration, United States Department of Health and Human Services. The recommendations describe both best and acceptable practice standards. Targeted vision disorders for screening are primarily amblyopia, strabismus, significant refractive error, and associated risk factors. The recommended screening tests are intended for use by lay screeners, nurses, and other personnel who screen children in educational, community, public health, or primary health care settings. Characteristics of children who should be examined by an optometrist or ophthalmologist rather than undergo vision screening are also described. Results There are two current best practice vision screening methods for children aged 36 to younger than 72 months: (1) monocular visual acuity testing using single HOTV letters or LEA Symbols surrounded by crowding bars at a 5-ft (1.5 m) test distance, with the child responding by either matching or naming, or (2) instrument-based testing using the Retinomax autorefractor or the SureSight Vision Screener with the Vision in Preschoolers Study data software installed (version 2.24 or 2.25 set to minus cylinder form). Using the Plusoptix Photoscreener is acceptable practice, as is adding stereoacuity testing using the PASS (Preschool Assessment of Stereopsis with a Smile) stereotest as a supplemental procedure to visual acuity testing or autorefraction. 
Conclusions The National Expert Panel recommends that children aged 36 to younger than 72 months be screened annually (best practice) or at least once (accepted minimum standard) using one of the best practice approaches. Technological updates will be maintained at http://nationalcenter.preventblindness.org. PMID:25562476

  6. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    NASA Technical Reports Server (NTRS)

    Guillebeau, P. L.

    2004-01-01

The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software, and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life, and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore, it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms, with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface, and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper will also address the testing approach, covering all levels of testing: development, development integration, IV&V, user beta testing, and acceptance testing. Test results, including performance numbers compared with the Unix servers, will be included. 
The paper will also address the deployment approach, including user involvement in testing and the need for a smooth transition while maintaining real-time support. An important aspect of the paper will involve challenges and lessons learned, including COTS product compatibility, the implications of phasing decisions, and the tracking of dependencies, particularly non-software dependencies. The paper will also discuss the scheduling challenges of providing real-time flight support during the migration and the requirement to incorporate into the migration changes being made simultaneously for flight support.

  7. Maneuver Automation Software

    NASA Technical Reports Server (NTRS)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; hide

    2009-01-01

The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
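The "single button" idea (serially chaining the maneuver applications and handing each step the previous step's products) can be sketched as follows; the step names and data values are placeholders, not the actual Cassini MAS programs.

```python
# Hypothetical sketch of MAS-style sequencing: run each maneuver tool
# in a fixed order, passing the accumulated data products along.
# Replacing the serial human hand-off with this kind of automated
# chain is what shrinks a two-week process to a fraction of an hour.

def design_maneuver(products):
    products["delta_v"] = 1.2                  # m/s, placeholder result
    return products

def build_command_sequence(products):
    products["commands"] = ["burn_on", "burn_off"]
    return products

def predict_performance(products):
    products["report"] = "nominal"             # placeholder report
    return products

PIPELINE = [design_maneuver, build_command_sequence, predict_performance]

def run_maneuver_pipeline():
    products = {}
    for step in PIPELINE:
        products = step(products)              # serial, automated hand-off
    return products
```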

  8. The development of a clinical outcomes survey research application: Assessment Center.

    PubMed

    Gershon, Richard; Rothrock, Nan E; Hanrahan, Rachel T; Jansky, Liz J; Harniss, Mark; Riley, William

    2010-06-01

    The National Institutes of Health sponsored Patient-Reported Outcome Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Web-based software was created to enable a researcher to create study-specific Websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment Center) for storage, retrieval, organization, sharing, and administration of patient-reported outcomes (PRO) instruments. Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions in order to supply a general wish list of features. Use Cases were then written to ensure that end user expectations matched programmer specifications. Program development included daily programmer "scrum" sessions, weekly Usability Acceptability Testing (UAT) and continuous Quality Assurance (QA) activities pre- and post-release. Assessment Center includes features that promote instrument development including item histories, data management, and storage of statistical analysis results. This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed.

  9. An experimental investigation of fault tolerant software structures in an avionics application

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Eckhardt, Dave E., Jr.

    1989-01-01

    The objective of this experimental investigation is to compare the functional performance and software reliability of competing fault tolerant software structures utilizing software diversity. In this experiment, three versions of the redundancy management software for a skewed sensor array have been developed using three diverse failure detection and isolation algorithms and incorporated into various N-version, recovery block and hybrid software structures. The empirical results show that, for maximum functional performance improvement in the selected application domain, the results of diverse algorithms should be voted before being processed by multiple versions without enforced diversity. Results also suggest that when the reliability gain with an N-version structure is modest, recovery block structures are more feasible since higher reliability can be obtained using an acceptance check with a modest reliability.
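The recovery-block structure with an acceptance check, one of the fault-tolerant structures compared above, can be sketched as follows. The sensor-array and voting specifics of the experiment are not reproduced; the two diverse versions and the acceptance bounds are illustrative.

```python
# Sketch of a recovery block: try diverse alternate versions in order
# and return the first result that passes the acceptance check.

def acceptance_check(value):
    # Accept any estimate inside an assumed plausible physical range.
    return 0.0 <= value <= 100.0

def primary(readings):
    """Primary version: arithmetic mean of the readings."""
    return sum(readings) / len(readings)

def alternate(readings):
    """Diverse alternate: median, robust to a single wild reading."""
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]

def recovery_block(readings):
    for version in (primary, alternate):
        result = version(readings)
        if acceptance_check(result):
            return result
    raise RuntimeError("all versions failed the acceptance check")
```

A modestly reliable acceptance check is enough for this structure to mask a failed primary, which matches the paper's observation about when recovery blocks are the more feasible choice.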

  10. Near-Earth Object Survey Simulation Software

    NASA Astrophysics Data System (ADS)

    Naidu, Shantanu P.; Chesley, Steven R.; Farnocchia, Davide

    2017-10-01

    There is a significant interest in Near-Earth objects (NEOs) because they pose an impact threat to Earth, offer valuable scientific information, and are potential targets for robotic and human exploration. The number of NEO discoveries has been rising rapidly over the last two decades with over 1800 being discovered last year, making the total number of known NEOs >16000. Pan-STARRS and the Catalina Sky Survey are currently the most prolific NEO surveys, having discovered >1600 NEOs between them in 2016. As next generation surveys such as Large Synoptic Survey Telescope (LSST) and the proposed Near-Earth Object Camera (NEOCam) become operational in the next decade, the discovery rate is expected to increase tremendously. Coordination between various survey telescopes will be necessary in order to optimize NEO discoveries and create a unified global NEO discovery network. We are collaborating on a community-based, open-source software project to simulate asteroid surveys to facilitate such coordination and develop strategies for improving discovery efficiency. Our effort so far has focused on development of a fast and efficient tool capable of accepting user-defined asteroid population models and telescope parameters such as a list of pointing angles and camera field-of-view, and generating an output list of detectable asteroids. The software takes advantage of the widely used and tested SPICE library and architecture developed by NASA’s Navigation and Ancillary Information Facility (Acton, 1996) for saving and retrieving asteroid trajectories and camera pointing. Orbit propagation is done using OpenOrb (Granvik et al. 2009) but future versions will allow the user to plug in a propagator of their choice. The software allows the simulation of both ground-based and space-based surveys. Performance is being tested using the Grav et al. (2011) asteroid population model and the LSST simulated survey “enigma_1189”.
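The core query such a simulator answers for each pointing (is a given object inside the camera's field of view and bright enough to detect?) can be sketched as a toy filter. This omits orbit propagation, SPICE kernels, and survey-specific losses, and all numbers are illustrative.

```python
import math

# Toy detectability filter in the spirit of the survey simulator:
# an object is "detectable" if its direction lies within half the
# field of view of the boresight and it is brighter (numerically
# smaller magnitude) than the limiting magnitude.

def ang_sep_deg(u, v):
    """Angular separation in degrees between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(cosang))

def detectable(obj_dir, mag, boresight, fov_deg, limiting_mag):
    return ang_sep_deg(obj_dir, boresight) <= fov_deg / 2 and mag <= limiting_mag
```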

  11. IMPLEMENTATION AND VALIDATION OF STATISTICAL TESTS IN RESEARCH'S SOFTWARE HELPING DATA COLLECTION AND PROTOCOLS ANALYSIS IN SURGERY.

    PubMed

    Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas

    2016-03-01

The use of information technology is often applied in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers, offering clinical data standardization. At the time, SINPE(c) lacked statistical tests obtained by automatic analysis. The objective was to add to SINPE(c) features for automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their theses in stricto sensu master's and doctoral degrees in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those computed manually by a statistician experienced with this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher, and Student's t-tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to that done manually, validating its use as a tool for medical research.
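As an illustration of the kind of computation that was automated (not SINPE(c)'s actual code), the Pearson chi-square statistic for a 2x2 contingency table can be computed directly; validation then amounts to checking that the automatic value matches what a statistician obtains by hand.

```python
# Pearson chi-square statistic for a 2x2 contingency table
#           group 1   group 2
# outcome A    a         b
# outcome B    c         d
# using the closed form  n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).

def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator
```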

  12. Visualization of spiral and scroll waves in simulated and experimental cardiac tissue

    NASA Astrophysics Data System (ADS)

    Cherry, E. M.; Fenton, F. H.

    2008-12-01

The heart is a nonlinear biological system that can exhibit complex electrical dynamics, complete with period-doubling bifurcations and spiral and scroll waves that can lead to fibrillatory states that compromise the heart's ability to contract and pump blood efficiently. Despite the importance of understanding the range of cardiac dynamics, studying how spiral and scroll waves can initiate, evolve, and be terminated is challenging because of the complicated electrophysiology and anatomy of the heart. Nevertheless, over the last two decades advances in experimental techniques have improved access to experimental data and have made it possible to visualize the electrical state of the heart in more detail than ever before. During the same time, progress in mathematical modeling and computational techniques has facilitated using simulations as a tool for investigating cardiac dynamics. In this paper, we present data from experimental and simulated cardiac tissue and discuss visualization techniques that facilitate understanding of the behavior of electrical spiral and scroll waves in the context of the heart. The paper contains many interactive media, including movies and interactive two- and three-dimensional Java applets. (Disclaimer: IOP Publishing was not involved in the programming of this software and does not accept any responsibility for it. You download and run the software at your own risk; if you experience any problems with it, please contact the author directly. To the fullest extent permitted by law, IOP Publishing Ltd accepts no responsibility for any loss, damage, and/or other adverse effect on your computer system, or for consequential loss, caused by downloading and running this software.)

  13. What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?

    ERIC Educational Resources Information Center

    Cushion, Steve

    2006-01-01

    We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…

  14. ROMI 4.0: Rough mill simulator 4.0 users manual

    Treesearch

    R. Edward Thomas; Timo Grueneberg; Urs Buehlmann

    2015-01-01

    The Rough MIll simulator (ROMI Version 4.0) is a computer software package for personal computers (PCs) that simulates current industrial practices for rip-first, chop-first, and rip and chop-first lumber processing. This guide shows how to set up the software; design, implement, and execute simulations; and examine the results. ROMI 4.0 accepts cutting bills with as...

  15. Integrated software system for low level waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worku, G.

    1995-12-31

In the continually changing and uncertain world of low level waste management, many generators in the US are faced with the prospect of having to store their waste on site for the indefinite future. This consequently increases the set of tasks performed by the generators in the areas of packaging, characterizing, classifying, screening (if a set of acceptance criteria applies), and managing the inventory for the duration of onsite storage. When disposal sites become available, it is expected that the work will require re-evaluating the waste packages, including possible re-processing, re-packaging, or re-classifying in preparation for shipment for disposal under the regulatory requirements of the time. In this day and age, when there is wide use of computers and computer literacy is at high levels, an important waste management tool would be an integrated software system that aids waste management personnel in conducting these tasks quickly and accurately. It has become evident that such an integrated radwaste management software system offers great benefits to radwaste generators both in the US and other countries. This paper discusses one such approach to integrated radwaste management utilizing some globally accepted radiological assessment software applications.

  16. Design, Assembly, Integration, and Testing of a Power Processing Unit for a Cylindrical Hall Thruster, the NORSAT-2 Flatsat, and the Vector Gravimeter for Asteroids Instrument Computer

    NASA Astrophysics Data System (ADS)

    Svatos, Adam Ladislav

This thesis describes the author's contributions to three separate projects. The bus of the NORSAT-2 satellite was developed by the Space Flight Laboratory (SFL) for the Norwegian Space Centre (NSC) and Space Norway. The author's contributions to the mission were performing unit tests for the components of all the spacecraft subsystems, as well as designing and assembling the flatsat from flight spares. Gedex's Vector Gravimeter for Asteroids (VEGA) is an accelerometer for spacecraft. The author's contributions to this payload were modifying the instrument computer board schematic, designing the printed circuit board, developing and applying test software, and performing thermal acceptance testing of two instrument computer boards. The SFL's cylindrical Hall effect thruster adopts the cylindrical Hall thruster configuration and uses permanent magnets to achieve miniaturization and low power consumption, respectively. The author's contributions were to design, build, and test an engineering model power processing unit.

  17. Testing Scientific Software: A Systematic Literature Review.

    PubMed

    Kanewala, Upulee; Bieman, James M

    2014-10-01

    Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
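One concrete response to the oracle problem mentioned above is metamorphic testing: instead of checking a single exact output (which may be unknowable), check a relation that must hold between related outputs. A minimal sketch with a stand-in numerical routine, not drawn from any of the surveyed studies:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson integration, the stand-in 'scientific' code
    under test (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def metamorphic_split_test(f, a, b, tol=1e-6):
    """Metamorphic relation: the integral over [a, b] must equal the
    sum of the integrals over [a, m] and [m, b]. No exact oracle for
    the integral itself is needed."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    parts = simpson(f, a, m) + simpson(f, m, b)
    return abs(whole - parts) < tol
```

A fault that breaks the quadrature weights would typically violate the split relation even though no reference value for the integral was ever supplied.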

  18. Tests of the Hardware and Software for the Reconstruction of Trajectories in the Experiment MINERvA (in Portuguese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palomino Gallo, Jose Luis; /Rio de Janeiro, CBPF

The MINERvA experiment has a highly segmented, high-precision neutrino detector able to record events with high statistics (over 13 million in a four-year run). MINERvA uses the Fermilab NuMI beamline. The detector will allow a detailed study of neutrino-nucleon interactions. Moreover, the detector has targets of different materials, allowing, for the first time, the study of nuclear effects in neutrino interactions. We present here the work done with the MINERvA reconstruction group, which has resulted in: (a) development of new code to be added to the RecPack package so that it can be adapted to the MINERvA detector structure; (b) finding optimum values for two of the MegaTracker reconstruction package variables: PEcut = 4 (the minimum number of photoelectrons for a signal to be accepted) and Chi2Cut = 200 (the maximum value of χ² for a track to be accepted); and (c) testing of the multi-anode photomultiplier tubes used at MINERvA in order to determine the correlation between different channels and to check the devices' dark counts.
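The two tuned cuts read as a simple accept/reject filter; a schematic sketch with an assumed hit and track representation (the actual MegaTracker data structures are not shown in the source):

```python
# Schematic version of the two reconstruction cuts reported above:
# keep a hit only if it has at least PE_CUT photoelectrons, and keep
# a track only if its chi-square is at most CHI2_CUT.

PE_CUT = 4      # minimum photoelectrons for a signal to be accepted
CHI2_CUT = 200  # maximum chi-square for a track to be accepted

def accept_hits(hits):
    """hits: list of dicts with a 'pe' (photoelectron) field."""
    return [h for h in hits if h["pe"] >= PE_CUT]

def accept_tracks(tracks):
    """tracks: list of dicts with a 'chi2' (fit quality) field."""
    return [t for t in tracks if t["chi2"] <= CHI2_CUT]
```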

19. Measurements and calculations of water velocity, momentum flux, and related flow parameters obtained from single-phase water integral acceptance tests of the PKL instrumented spool pieces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, W.

The operation of the emergency core cooling system and its related steam-binding problems in pressurized water reactors is the subject of a cooperative study by the United States, Germany, and Japan. Lawrence Livermore Laboratory and EG and G, Inc., San Ramon Operations, are responsible for the design, hardware, and software of the 80.8-mm and 113-mm spool piece measurement systems for the German Primarkreislauf (PKL) Test Facility at Kraftwerk Union in Erlangen, West Germany. This work was done for the US Nuclear Regulatory Commission, Division of Reactor Safety Research, under its 3-D Technical Support and Instrumentation Program. Four instrumented spools capable of measuring individual phase parameters in two-phase flows were constructed. Each spool contains a flow turbine, drag screen, three-beam densitometer, and pressure and temperature probes. A computerized data acquisition system is also provided to store and analyze data from the four spools. The four spools were shipped to the PKL Test Facility in West Germany for acceptance testing in a water-flow loop. Spool measurements of velocity and momentum flux were compared to the values obtained from an orifice meter installed in the loop piping system. The turbine flowmeter velocity data for all tests were within allowable tolerances. Drag screen momentum flux measurements were also within tolerance with the exception of a few points.

  20. Virtual and flexible digital signal processing system based on software PnP and component works

    NASA Astrophysics Data System (ADS)

    He, Tao; Wu, Qinghua; Zhong, Fei; Li, Wei

    2005-05-01

An idea of software PnP (Plug & Play) is put forward by analogy with hardware PnP, and based on this idea a virtual, flexible digital signal processing system (FVDSPS) has been implemented. FVDSPS is composed of a main control center, many sub-function modules, and other hardware I/O modules. The main control center sends commands to the sub-function modules and manages the running order, parameters, and results of the sub-functions. The software kernel of FVDSPS is the DSP (digital signal processing) module, which communicates with the main control center through defined protocols, accepting commands and sending requests. Data sharing and exchange between the main control center and the DSP modules are carried out and managed by the file system of the Windows operating system through this communication. FVDSPS is oriented to objects, to engineers, and to engineering problems. With FVDSPS, users can freely plug and play, and quickly reconfigure a signal processing system for an engineering problem without programming: what you see is what you get. Thus, an engineer can address engineering problems directly, pay more attention to the problems themselves, and improve the flexibility, reliability, and accuracy of the testing system. Because FVDSPS is built on the TCP/IP protocol, testing engineers and technology experts can be connected freely over the Internet without regard to location, and engineering problems can be resolved quickly and effectively. FVDSPS can be used in many fields, such as instrumentation, fault diagnosis, device maintenance, and quality control.
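The software-PnP idea (sub-function modules that register with the main control center and can be chained without reprogramming) can be sketched as a plain plugin registry; the module names and operations here are illustrative, not FVDSPS internals.

```python
# Minimal plugin-registry sketch of software PnP: modules "plug in"
# by registering under a name, and the control center "plays" them
# by running a user-chosen chain over the signal, no recompilation.

REGISTRY = {}

def register(name):
    """Decorator: plug a sub-function module into the control center."""
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("scale")
def scale(samples):
    return [2 * x for x in samples]

@register("bias")
def bias(samples):
    return [x + 1 for x in samples]

def run_chain(names, samples):
    """Execute the named modules in order, feeding each the last output."""
    for name in names:
        samples = REGISTRY[name](samples)
    return samples
```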

  1. A Portable Electronic Nose For Hydrazine and Monomethyl Hydrazine Detection

    NASA Technical Reports Server (NTRS)

    Young, Rebecca C.; Linnell, Bruce R.; Peterson, Barbara V.; Brooks, Kathy B.; Griffin, Tim P.

    2004-01-01

The Space Program and military use large quantities of hydrazine (Hz) and monomethyl hydrazine (MMH) as rocket propellant. These substances are very toxic and are suspected human carcinogens. The American Conference of Governmental Industrial Hygienists set the threshold limit value at 10 parts per billion (ppb). Current off-the-shelf portable instruments require 10 to 20 minutes of exposure to detect a 10 ppb concentration. This shortcoming is not acceptable for many operations. A new prototype instrument using a gas sensor array and pattern recognition software technology (i.e., an electronic nose) has demonstrated the ability to identify either Hz or MMH and quantify their concentrations at 10 parts per billion in 90 seconds. This paper describes the design of the portable electronic nose (e-nose) instrument, the test equipment setup, the test protocol, the pattern recognition algorithm, the concentration estimation method, and laboratory test results.
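Schematically, the pattern-recognition step reduces to matching a sensor-array response against trained patterns. A toy nearest-centroid sketch follows; the instrument's actual algorithm is not detailed here, and every centroid, gain, and sensor value is invented for illustration.

```python
import math

# Toy stand-in for the e-nose pattern recognition: classify a
# sensor-array response by nearest trained centroid, then make a
# crude linear concentration estimate. All numbers are invented.

CENTROIDS = {
    "Hz":  [0.9, 0.1, 0.4],
    "MMH": [0.2, 0.8, 0.5],
}

def classify(response):
    """Return the analyte whose trained pattern is closest."""
    return min(CENTROIDS, key=lambda c: math.dist(response, CENTROIDS[c]))

def estimate_ppb(response, gain=25.0):
    """Hypothetical linear estimate from the mean sensor response."""
    return gain * sum(response) / len(response)
```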

  2. Development, testing, and numerical modeling of a foam sandwich biocomposite

    NASA Astrophysics Data System (ADS)

    Chachra, Ricky

This study develops a novel sandwich composite material using plant-based materials for potential use in nonstructural building applications. The face sheets comprise woven hemp fabric and a sap-based epoxy, while the core comprises castor-oil-based foam with waste rice hulls as reinforcement. Mechanical properties of the individual materials are tested in uniaxial compression and tension for the foam and hemp, respectively. The sandwich composite is tested in three-point bending. Flexural results are compared to a finite element model developed in the commercial software Abaqus, and the validated model is then used to investigate alternate sandwich geometries. Sandwich model responses are compared to existing standards for nonstructural building panels, showing that the novel material is roughly half the strength of equally thick drywall. When space limitations are not an issue, a double-thickness sandwich biocomposite is found to be a structurally acceptable replacement for standard gypsum drywall.

  3. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  4. Determinants of Social Networking Software Acceptance: A Multi-Theoretical Approach

    ERIC Educational Resources Information Center

    Shittu, Ahmed Tajudeen; Madarsha, Kamal Basha; AbduRahman, Nik Suryani Nik; Ahmad, Tunku Badariah Tunku

    2013-01-01

Understanding the reasons why students use social media has become a major preoccupation of researchers in recent times due to the rate of its adoption among the present generation of students. Some of the few studies on the social media phenomenon employed a single theory as a framework in order to understand the factors that influence the acceptance of it…

  5. Cargo Movement Operations Systems (CMOS). Revised Draft Software Test Plan

    DTIC Science & Technology

    1990-05-17

NO [ ] COMMENT DISPOSITION: ACCEPT [ ] REJECT [ ] COMMENT STATUS: OPEN [ ] CLOSED [ ] Cmnt No. / Page No. / Paragraph Number / Comment: 1. 1 1 Delete the period following this and all other single-digit paragraph numbers in order to comply with the format used in the DID. 2. 9 3.1.3 Replace "Z-248" with "PC Workstation" in the second line of the paragraph. 3. 10 3.2.1 Change the "?" to a "" in the second entry of Table 3.2-1. 4. 10 3.2.2 Put parentheses around the phrase bounded by commas in the second and third lines, i.e.,

  6. Human-centered design of a personal health record system for metabolic syndrome management based on the ISO 9241-210:2010 standard

    PubMed Central

    Farinango, Charic D; Benavides, Juan S; Cerón, Jesús D; López, Diego M; Álvarez, Rosa E

    2018-01-01

Background Previous studies have demonstrated the effectiveness of information and communication technologies to support healthy lifestyle interventions. In particular, personal health record systems (PHR-Ss) empower self-care, which is essential to support lifestyle changes. Approaches such as user-centered design (UCD), which is already a standard within the software industry (ISO 9241-210:2010), provide specifications and guidelines to guarantee user acceptance and quality of eHealth systems. However, no single PHR-S for metabolic syndrome (MS) developed following the recommendations of the ISO 9241-210:2010 specification has been found in the literature. Objective The aim of this study was to describe the development of a PHR-S for the management of MS according to the principles and recommendations of the ISO 9241-210 standard. Methods The proposed PHR-S was developed using a formal software development process which, in addition to the traditional activities of any software process, included the principles and recommendations of the ISO 9241-210 standard. To gather user information, a survey of 1,187 individuals, eight interviews, and a focus group with seven people were conducted. Over five iterations, three prototypes were built. Potential users of each system evaluated each prototype. The quality attributes of efficiency, effectiveness, and user satisfaction were assessed using metrics defined in the ISO/IEC 25022 standard. 
Results The following results were obtained: 1) a technology profile of 1,187 individuals at risk for MS from the city of Popayan, Colombia, identifying that 75.2% of the people used the Internet and 51% had a smartphone; 2) a PHR-S to manage MS was developed (the PHR-S has five main functionalities: recording the five MS risk factors, sharing these measures with health care professionals, and three educational modules on nutrition, stress management, and physical activity); and 3) usability tests on each prototype obtained the following results: 100% effectiveness, 100% efficiency, and 84.2 points on the System Usability Scale. Conclusion The software development methodology used was based on the ISO 9241-210 standard, which allowed the development team to maintain a focus on users' needs and requirements throughout the project, resulting in increased satisfaction and acceptance of the system. Additionally, the establishment of a multidisciplinary team allowed the application of considerations not only from the disciplines of software engineering and health sciences but also from other disciplines such as graphic design and media communication. Finally, usability testing allowed the observation of flaws in the designs, which helped to improve the solution. PMID:29386903
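The reported usability metrics can be reproduced mechanically. A sketch of task-level effectiveness and the standard System Usability Scale scoring rule follows; the ISO/IEC 25022 time-based efficiency metric is omitted for brevity.

```python
# Sketch of the usability metrics reported above: effectiveness as
# the percentage of tasks completed, and the SUS score from the ten
# 1-5 questionnaire items (odd-numbered items contribute score-1,
# even-numbered items contribute 5-score, total scaled by 2.5).

def effectiveness(tasks_completed, tasks_attempted):
    return 100.0 * tasks_completed / tasks_attempted

def sus_score(items):
    assert len(items) == 10, "SUS has exactly ten items"
    total = sum((s - 1) if i % 2 == 0 else (5 - s)
                for i, s in enumerate(items))   # i is 0-based
    return total * 2.5
```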

  7. Using RDF and Git to Realize a Collaborative Metadata Repository.

    PubMed

    Stöhr, Mark R; Majeed, Raphael W; Günther, Andreas

    2018-01-01

    The German Center for Lung Research (DZL) is a research network with the aim of researching respiratory diseases. The participating study sites' register data differs in terms of software and coding system as well as data field coverage. To perform meaningful consortium-wide queries through one single interface, a uniform conceptual structure is required covering the DZL common data elements. No single existing terminology includes all our concepts. Potential candidates such as LOINC and SNOMED only cover specific subject areas or are not granular enough for our needs. To achieve a broadly accepted and complete ontology, we developed a platform for collaborative metadata management. The DZL data management group formulated detailed requirements regarding the metadata repository and the user interfaces for metadata editing. Our solution builds upon existing standard technologies allowing us to meet those requirements. Its key parts are RDF and the distributed version control system Git. We developed a software system to publish updated metadata automatically and immediately after performing validation tests for completeness and consistency.
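    The validate-then-publish step described above can be sketched with a simplified completeness check over RDF-style triples. This is a minimal illustration using plain Python tuples rather than a real RDF library; the element names and the set of required predicates are assumptions for illustration, not the DZL schema.

```python
# Minimal completeness check for metadata triples before publishing.
# Each triple is (subject, predicate, object); a data element is deemed
# complete when it carries both a label and a datatype (assumed rules;
# the predicate names below are hypothetical).
REQUIRED_PREDICATES = {"rdfs:label", "dzl:datatype"}

def incomplete_elements(triples):
    """Return subjects that are missing any required predicate."""
    seen = {}
    for s, p, _o in triples:
        seen.setdefault(s, set()).add(p)
    return sorted(s for s, preds in seen.items()
                  if not REQUIRED_PREDICATES <= preds)

triples = [
    ("dzl:fev1", "rdfs:label", "Forced expiratory volume in 1 s"),
    ("dzl:fev1", "dzl:datatype", "decimal"),
    ("dzl:smoker", "rdfs:label", "Smoking status"),  # datatype missing
]
```

    In a Git-based workflow such as the one described, a check of this kind would run in a pre-receive or CI hook, and publication would proceed only when the returned list is empty.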

  8. SMART-FDIR: Use of Artificial Intelligence in the Implementation of a Satellite FDIR

    NASA Astrophysics Data System (ADS)

    Guiotto, A.; Martelli, A.; Paccagnini, C.

    Nowadays, space activities are characterized by increased constraints on on-board computing power and functional complexity, combined with reductions in cost and schedule. This scenario inevitably affects the on-board software, with particular emphasis on the interfaces between on-board software and system/mission-level requirements. The questions are: How can the effectiveness of Space System Software design be improved? How can we increase sophistication in the area of autonomy and failure tolerance while maintaining the necessary quality with acceptable risks?

  9. Testing Scientific Software: A Systematic Literature Review

    PubMed Central

    Kanewala, Upulee; Bieman, James M.

    2014-01-01

    Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798

  10. Prospective comparison of speckle tracking longitudinal bidimensional strain between two vendors.

    PubMed

    Castel, Anne-Laure; Szymanski, Catherine; Delelis, François; Levy, Franck; Menet, Aymeric; Mailliet, Amandine; Marotte, Nathalie; Graux, Pierre; Tribouilloy, Christophe; Maréchaux, Sylvestre

    2014-02-01

    Speckle tracking is a relatively new, largely angle-independent technique used for the evaluation of myocardial longitudinal strain (LS). However, significant differences have been reported between LS values obtained by speckle tracking with the first generation of software products. To compare LS values obtained with the most recently released equipment from two manufacturers. Systematic scanning with head-to-head acquisition with no modification of the patient's position was performed in 64 patients with equipment from two different manufacturers, with subsequent off-line post-processing for speckle tracking LS assessment (Philips QLAB 9.0 and General Electric [GE] EchoPAC BT12). The interobserver variability of each software product was tested on a randomly selected set of 20 echocardiograms from the study population. GE and Philips interobserver coefficients of variation (CVs) for global LS (GLS) were 6.63% and 5.87%, respectively, indicating good reproducibility. Reproducibility was very variable for regional and segmental LS values, with CVs ranging from 7.58% to 49.21% with both software products. The concordance correlation coefficient (CCC) between GLS values was high at 0.95, indicating substantial agreement between the two methods. While good agreement was observed between midwall and apical regional strains with the two software products, basal regional strains were poorly correlated. The agreement between the two software products at a segmental level was very variable; the highest correlation was obtained for the apical cap (CCC 0.90) and the poorest for basal segments (CCC range 0.31-0.56). A high level of agreement and reproducibility for global but not for basal regional or segmental LS was found with two vendor-dependent software products. This finding may help to reinforce clinical acceptance of GLS in everyday clinical practice. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
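    The concordance correlation coefficient reported above can be computed directly from paired measurements; the following is a minimal sketch of Lin's CCC, not the vendors' analysis software.

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements.
    Combines precision (correlation) and accuracy (bias) in one index."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sx = sum((a - mx) ** 2 for a in x) / n        # biased variances, per Lin
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

    Identical series give a CCC of 1; a constant offset between the two methods lowers the CCC even when the Pearson correlation is perfect, which is why it is preferred for agreement studies such as this one.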

  11. Cargo Movement Operations System (CMOS). Updated Software Requirements Specifications, Increment 2, (Communications CSCI)

    DTIC Science & Technology

    1990-11-14


  12. Social media and impression management: Veterinary Medicine students’ and faculty members’ attitudes toward the acceptability of social media posts

    PubMed Central

    KEDROWICZ, APRIL A.; ROYAL, KENNETH; FLAMMER, KEVEN

    2016-01-01

    Introduction: While social media has the potential to be used to make professional and personal connections, it can also be used inappropriately, with detrimental ramifications for the individual in terms of their professional reputation and even hiring decisions. This research explored students’ and faculty members’ perceptions of the acceptability of various social media postings. Methods: This cross-sectional study was conducted in 2015. All students and faculty members at the College of Veterinary Medicine were invited to participate. The sample size included 140 students and 69 faculty members who completed the Social Media Scale (SMS), a 7-point semantic differential scale. The SMS consisted of 12 items that measured the extent to which a variety of behaviors, using social media, constituted acceptable and unacceptable behaviors. Items appearing on the SMS were an amalgamation of modified items previously presented by Coe, Weijs, Muise et al. (2012) and new items generated specifically for this study. The data were collected during the spring semester of 2015 using Qualtrics online survey software and analyzed using t-tests and ANOVA. Results: The results showed that statistically significant differences existed between the students’ and faculty members’ ratings of acceptable behavior, as well as gender differences and differences across class years. Conclusion: These findings have implications for the development of policy and educational initiatives around professional identity management in the social sphere. PMID:27795965

  13. Social media and impression management: Veterinary Medicine students' and faculty members' attitudes toward the acceptability of social media posts.

    PubMed

    Kedrowicz, April A; Royal, Kenneth; Flammer, Keven

    2016-10-01

    While social media has the potential to be used to make professional and personal connections, it can also be used inappropriately, with detrimental ramifications for the individual in terms of their professional reputation and even hiring decisions. This research explored students' and faculty members' perceptions of the acceptability of various social media postings. This cross-sectional study was conducted in 2015. All students and faculty members at the College of Veterinary Medicine were invited to participate. The sample size included 140 students and 69 faculty members who completed the Social Media Scale (SMS), a 7-point semantic differential scale. The SMS consisted of 12 items that measured the extent to which a variety of behaviors, using social media, constituted acceptable and unacceptable behaviors. Items appearing on the SMS were an amalgamation of modified items previously presented by Coe, Weijs, Muise et al. (2012) and new items generated specifically for this study. The data were collected during the spring semester of 2015 using Qualtrics online survey software and analyzed using t-tests and ANOVA. The results showed that statistically significant differences existed between the students' and faculty members' ratings of acceptable behavior, as well as gender differences and differences across class years. These findings have implications for the development of policy and educational initiatives around professional identity management in the social sphere.

  14. Development of a Near Ground Remote Sensing System

    PubMed Central

    Zhang, Yanchao; Xiao, Yuzhao; Zhuang, Zaichun; Zhou, Liping; Liu, Fei; He, Yong

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) have shown great potential in agriculture and are increasingly being developed for agricultural use. Many experiments are still needed to improve their performance and explore new uses, but experiments using UAVs are limited by conditions such as weather and location, and by the time it takes to prepare for a flight. To promote UAV remote sensing, a near ground remote sensing platform was developed. This platform consists of three major parts: (1) mechanical structures such as a horizontal rail, vertical cylinder, and three-axis gimbal; (2) power supply and control parts; (3) onboard application components. The platform covers five degrees of freedom (DOFs): horizontal, vertical, pitch, roll, and yaw. An STM32 ARM microcontroller was used as the controller of the whole platform, and another STM32 MCU was used to stabilize the gimbal. The gimbal stabilizer communicates with the main controller via a CAN bus. A multispectral camera was mounted on the gimbal. Software written in C++ was developed as the graphical user interface; operating parameters were set and working status displayed via this software. To test how well the system works, a laser distance meter was used to measure the slide rail's repeat accuracy, and a 3-axis vibration analyzer was used to test the system stability. Test results show that the horizontal repeat accuracy was less than 2 mm; vertical repeat accuracy was less than 1 mm; and vibration was less than 2 g, remaining at an acceptable level. This system has high accuracy and stability and can therefore be used for various near ground remote sensing studies. PMID:27164111

  15. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The software categories evaluated include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  16. The European computer model for optronic system performance prediction (ECOMOS)

    NASA Astrophysics Data System (ADS)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given, along with a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  17. Total Cost of Ownership, System Acceptance and Perceived Success of Enterprise Resource Planning Software: Simulating a Dynamic Feedback Perspective of ERP in the Higher Education Environment

    ERIC Educational Resources Information Center

    Fryling, Meg

    2010-01-01

    Enterprise Resource Planning (ERP) software is advertised as the product that will "run the enterprise", improving data access and accuracy as well as enhancing business process efficiency. Unfortunately, organizations often make implementation decisions with little consideration for the maintenance phase of an ERP, resulting in significant…

  18. Advanced Integrated Display System V/STOL Program Performance Specification. Volume I.

    DTIC Science & Technology

    1980-06-01

    sensor inputs required before the sensor can be designated acceptable. The reactivation count of each sensor parameter which satisfies its veri... lists the adaptation parameters of the AIDS software; these parameters include the throughput and memory requirements of the software.

  19. Technical Note: Unified imaging and robotic couch quality assurance.

    PubMed

    Cook, Molly C; Roper, Justin; Elder, Eric S; Schreibmann, Eduard

    2016-09-01

    To introduce a simplified quality assurance (QA) procedure that integrates tests for the linac's imaging components and the robotic couch. Current QA procedures for evaluating the alignment of the imaging system and linac require careful positioning of a phantom at isocenter before image acquisition and analysis. A complementary procedure for the robotic couch requires an initial displacement of the phantom and then evaluates the accuracy of repositioning the phantom at isocenter. We propose a two-in-one procedure that introduces a custom software module and incorporates both checks into one motion for increased efficiency. The phantom was manually set with random translational and rotational shifts, imaged with the in-room imaging system, and then registered to the isocenter using a custom software module. The software measured positioning accuracy by comparing the location of the repositioned phantom with a CAD model of the phantom at isocenter, which is physically verified using the MV port graticule. Repeatability of the custom software was tested by an assessment of internal marker location extraction on a series of scans taken over differing kV and CBCT acquisition parameters. The proposed method was able to correctly position the phantom at isocenter within acceptable 1 mm and 1° SRS tolerances, verified by both physical inspection and the custom software. Residual errors for mechanical accuracy were 0.26 mm vertically, 0.21 mm longitudinally, 0.55 mm laterally, 0.21° in pitch, 0.1° in roll, and 0.67° in yaw. The software module was shown to be robust across various scan acquisition parameters, detecting markers within 0.15 mm translationally in kV acquisitions and within 0.5 mm translationally and 0.3° rotationally across CBCT acquisitions with significant variations in voxel size. Agreement with vendor registration methods was well within 0.5 mm; differences were not statistically significant. 
As compared to the current two-step approach, the proposed QA procedure streamlines the workflow, accounts for rotational errors in imaging alignment, and simulates a broad range of variations in setup errors seen in clinical practice.
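    The pass/fail logic of such a tolerance check is straightforward; a sketch using the 1 mm / 1° SRS tolerances and the residuals reported above (the function name and interface are illustrative assumptions, not the custom software module).

```python
def within_srs_tolerance(trans_mm, rot_deg, tol_mm=1.0, tol_deg=1.0):
    """True when all translational and rotational residuals fall within
    the stated SRS tolerances (1 mm and 1 degree by default)."""
    return (all(abs(t) <= tol_mm for t in trans_mm)
            and all(abs(r) <= tol_deg for r in rot_deg))

# Residual errors reported in the study: vertical, longitudinal, lateral;
# pitch, roll, yaw.
ok = within_srs_tolerance([0.26, 0.21, 0.55], [0.21, 0.1, 0.67])
```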

  20. A Framework of the Use of Information in Software Testing

    ERIC Educational Resources Information Center

    Kaveh, Payman

    2010-01-01

    With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…

  1. A Study of Factors Affecting the Adoption of E-Learning Systems Enabled with Cultural Contextual Features by Instructions in Jamaican Tertiary Institutions

    ERIC Educational Resources Information Center

    Rhoden, Niccardo S.

    2014-01-01

    Understanding the factors affecting the acceptance of E-Learning Systems Enabled with Cultural Contextual Features by Instructors in Jamaican Tertiary Institutions is an important topic, relevant not only to educational institutions but also to developers of software for online learning. The use of the unified theory of acceptance and use of…

  2. Cargo Movement Operations System (CMOS) Final Software Design Document, Change 01, Increment I

    DTIC Science & Technology

    1991-03-22


  3. Cargo Movement Operations System (CMOS). Software Requirements Specification, Increment 1, Change 02

    DTIC Science & Technology

    1990-05-24


  4. A usability study of a mobile health application for rural Ghanaian midwives.

    PubMed

    Vélez, Olivia; Okyere, Portia Boakye; Kanter, Andrew S; Bakken, Suzanne

    2014-01-01

    Midwives in rural Ghana work at the frontline of the health care system, where they have access to essential data about the patient population. However, current methods of data capture, primarily pen and paper, make the data neither accessible nor usable for monitoring patient care or program evaluation. Electronic health (eHealth) systems present a potential mechanism for enhancing the roles of midwives by providing tools for collecting, exchanging, and viewing patient data as well as offering midwives the possibility for receiving information and decision support. Introducing such technology in low-resource settings has been challenging because of low levels of user acceptance, software design that does not match the end-user environment, and/or unforeseen challenges such as irregular power availability. These challenges are often attributable to a lack of understanding by the software developers of the end users' needs and work environment. A mobile health (mHealth) application known as mClinic was designed to support midwife access to the Millennium Village-Global Network, an eHealth delivery platform that captures data for managing patient care as well as program evaluation and monitoring, decision making, and management. We conducted a descriptive usability study composed of 3 phases to evaluate an mClinic prototype: 1) hybrid lab-live software evaluation of mClinic to identify usability issues; 2) completion of a usability questionnaire; and 3) interviews that included low-fidelity prototyping of new functionality proposed by midwives. The heuristic evaluation identified usability problems related to 4 of 8 usability categories. Analysis of usability questionnaire data indicated that the midwives perceived mClinic as useful but were more neutral about the ease of use. 
Analysis of midwives' reactions to low-fidelity prototypes during the interview process supported the applicability of mClinic to midwives' work and identified the need for additional functionality. User acceptance is essential for the success of any mHealth implementation. Usability testing identified mClinic development flaws and needed software enhancements. © 2014 by the American College of Nurse-Midwives.

  5. A Probabilistic Software System Attribute Acceptance Paradigm for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2005-01-01

    Standard software requirement formats are written from top-down perspectives only, that is, from an ideal notion of a client's needs. Despite the exactness of the standard format, software and system errors in designed systems have abounded. Bad and inadequate requirements have resulted in cost overruns, schedule slips and lost profitability. Commercial off-the-shelf (COTS) software components are even more troublesome than designed systems because they are often provided as-is and subsequently delivered with unsubstantiated validation of described capabilities. For COTS software, there needs to be a way to express the client's software needs in a consistent and formal manner using software system attributes derived from software quality standards. Additionally, the format needs to be amenable to software evaluation processes that integrate observable evidence garnered from historical data. This paper presents a paradigm that effectively bridges the gap between what a client desires (top-down) and what has been demonstrated (bottom-up) for COTS software evaluation. The paradigm addresses the specification of needs before the software evaluation is performed and can be used to increase the shared understanding between clients and software evaluators about what is required and what is technically possible.

  6. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For t-test, analysis of variance, correlation analysis, regression analysis which are used frequently in nursing research, the generally accepted definitions of the effect size were explained. Some formulae for calculating the effect size are described with several examples in nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3 software that is the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and the reliance on only one side may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measure in order to make more balanced decisions in quantitative analyses.
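    As one concrete case of the formulae discussed above, Cohen's d for an independent-samples t-test divides the mean difference by the pooled standard deviation; the following is a minimal sketch, not the G*Power implementation.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d for two independent samples using the pooled SD."""
    nx, ny = len(x), len(y)
    sx, sy = stdev(x), stdev(y)               # sample standard deviations
    pooled = sqrt(((nx - 1) * sx ** 2 + (ny - 1) * sy ** 2) / (nx + ny - 2))
    return (mean(x) - mean(y)) / pooled
```

    By Cohen's conventions, |d| values around 0.2, 0.5, and 0.8 are usually read as small, medium, and large effects, which is the kind of interpretation the review recommends reporting alongside p-values.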

  7. Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)

    NASA Astrophysics Data System (ADS)

    Ferayorni, Andrew; Beard, Andrew; Cole, Wes; Gregory, Scott; Wöeger, Friedrich

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early thus saving cost during integration test and commissioning on summit. The VBI team recently completed a bottom up end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second the real-time motion controls are used to drive the mechanisms to verify speed and timing synchronization requirements are being met. Next the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the use of the instrument engineering graphical user interface (GUI). In this paper we present the VBI bottom up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. 
We will also discuss the benefits realized through completing this testing and share lessons learned from the bottom-up testing process.

  8. Fault Tolerant Considerations and Methods for Guidance and Control Systems

    DTIC Science & Technology

    1987-07-01

    multifunction devices such as microprocessors with software. In striving toward the economic goal, however, a cost is incurred in a different coin, i.e...therefore been developed which reduces the software risk to acceptable proportions. Several of the techniques thus developed incur no significant cost ...complex that their design and implementation need computerized tools in order to be cost-effective (in a broad sense, including the capability of

  9. Executable assertions and flight software

    NASA Technical Reports Server (NTRS)

    Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.

    1984-01-01

    Executable assertions are used to test flight control software. The techniques used for testing flight software, however, differ from those used to test other kinds of software, because of the redundant nature of flight software. An experimental setup for testing flight software using executable assertions is described. Techniques for writing and using executable assertions to test flight software are presented. The error detection capability of assertions is studied and many examples of assertions are given. The issues of placement and complexity of assertions and the language features to support efficient use of assertions are discussed.
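    The idea of an executable assertion can be illustrated with a toy control-law check; the control law, gain, and 25-degree limit below are invented for illustration and are not from the paper.

```python
def elevator_command(pitch_error_deg, gain=0.8):
    """Toy control law guarded by an executable assertion: the computed
    command must stay within assumed actuator limits, otherwise the
    assertion flags a fault at run time."""
    cmd = gain * pitch_error_deg
    assert -25.0 <= cmd <= 25.0, f"elevator command {cmd} out of range"
    return cmd
```

    In a redundant flight system, a failed assertion of this kind would typically trigger a switch to a backup channel rather than abort the program, which is the distinction the paper draws between flight software and other software.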

  10. 77 FR 50722 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...

  11. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...

  12. Cargo Movement Operations System (CMOS). Draft Software Design Document for the PC UNIX Prototype (Air Force Configuration), Increment III

    DTIC Science & Technology

    1991-04-21


  13. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

    This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
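    The effect rpe emulates can be approximated in a few lines by zeroing low-order significand bits of a float64. This is a crude truncation sketch for illustration only; rpe itself rounds to nearest and wraps Fortran derived types.

```python
import struct

def truncate_significand(x, bits):
    """Keep only the top `bits` of the 52 significand bits of a float64,
    zeroing the rest -- a crude emulation of reduced precision."""
    u, = struct.unpack('<Q', struct.pack('<d', x))
    u &= ~((1 << (52 - bits)) - 1)
    y, = struct.unpack('<d', struct.pack('<Q', u))
    return y
```

    Running a model with such a reduction applied to selected variables reveals which parts of the computation tolerate fewer significand bits, which is the diagnostic use case described above.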

  14. Dtest Testing Software

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
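    The scan-and-run pattern described above can be sketched as follows. The config file name, its one-command-per-file layout, and the function names are assumptions for illustration, not dtest's actual conventions.

```python
import os
import subprocess

def find_test_dirs(root: str, config_name: str = "TESTDEFS.cfg"):
    """Walk `root` and yield every directory containing a test config file."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if config_name in filenames:
            yield dirpath

def run_tests(root: str, config_name: str = "TESTDEFS.cfg"):
    """Run the command listed in each config file; map directory -> pass/fail."""
    results = {}
    for d in find_test_dirs(root, config_name):
        with open(os.path.join(d, config_name)) as f:
            cmd = f.readline().strip()  # assume one shell command per file
        proc = subprocess.run(cmd, shell=True, cwd=d)
        results[d] = proc.returncode == 0
    return results
```

    Distributing the per-directory runs over CPU cores, as dtest does, would replace the sequential loop with a process pool.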

  15. Sharing Data between Mobile Devices, Connected Vehicles and Infrastructure Task 6: Prototype Acceptance Test Summary Report

    DOT National Transportation Integrated Search

    2017-10-30

    The Task 6 Prototype Acceptance Test Summary Report summarizes the results of Acceptance Testing carried out at Battelle facilities in accordance with the Task 6 Acceptance Test Plan. The Acceptance Tests were designed to verify that the prototype sy...

  16. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 3 2013-01-01 2013-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  17. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 3 2014-01-01 2014-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  18. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 3 2012-01-01 2012-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  19. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Format validation software testing... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...

  20. MedlinePlus Connect: Frequently Asked Questions (FAQs)

    MedlinePlus

    ... topic data in XML format. Using the Web service, software developers can build applications that utilize MedlinePlus health topic information. The service accepts keyword searches as requests and returns relevant ...
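    Requests to the service are ordinary keyword-search URLs. A minimal sketch of building one, assuming the `wsearch.nlm.nih.gov` endpoint and the `db`/`term` parameter names from the public MedlinePlus Web Service documentation (verify against the current NLM docs before relying on them):

```python
from urllib.parse import urlencode

# Endpoint and parameter names as documented for the MedlinePlus Web
# Service; the response is XML.
BASE_URL = "https://wsearch.nlm.nih.gov/ws/query"

def build_query_url(term: str, db: str = "healthTopics") -> str:
    """Build a keyword-search request URL for the Web service."""
    return BASE_URL + "?" + urlencode({"db": db, "term": term})

print(build_query_url("diabetes"))
# -> https://wsearch.nlm.nih.gov/ws/query?db=healthTopics&term=diabetes
```

    An application would fetch this URL and parse the returned XML for health topic summaries.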

  1. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    PubMed

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP) to calculate dose conversion coefficients for different exposure conditions. The model was tested by its application for exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Combining multivariate statistics and the think-aloud protocol to assess Human-Computer Interaction barriers in symptom checkers.

    PubMed

    Marco-Ruiz, Luis; Bønes, Erlend; de la Asunción, Estela; Gabarron, Elia; Aviles-Solis, Juan Carlos; Lee, Eunji; Traver, Vicente; Sato, Keiichi; Bellika, Johan G

    2017-10-01

    Symptom checkers are software tools that allow users to submit a set of symptoms and receive advice related to them in the form of a diagnosis list, health information or triage. The heterogeneity of their potential users and the number of different components in their user interfaces can make testing with end-users unaffordable. We designed and executed a two-phase method to test the respiratory diseases module of the symptom checker Erdusyk. Phase I consisted of an online test with a large sample of users (n=53). In Phase I, users evaluated the system remotely and completed a questionnaire based on the Technology Acceptance Model. Principal Component Analysis was used to correlate each section of the interface with the questionnaire responses, thus identifying which areas of the user interface presented significant contributions to the technology acceptance. In the second phase, the think-aloud procedure was executed with a small sample (n=15), focusing on the areas with significant contributions to analyze the reasons for such contributions. Our method was used effectively to optimize the testing of symptom checker user interfaces. The method kept the cost of testing at reasonable levels by restricting the use of the think-aloud procedure while still assuring a high degree of coverage. The main barriers detected in Erdusyk were related to problems understanding time repetition patterns, the selection of levels in scales to record intensities, navigation, the quantification of some symptom attributes, and the characteristics of the symptoms. Copyright © 2017 Elsevier Inc. All rights reserved.
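    The paper's Phase I analysis used Principal Component Analysis; as a simplified stand-in, the sketch below computes a plain Pearson correlation between one interface section's usability ratings and an overall acceptance score, which conveys the same idea of linking interface areas to acceptance. All scores are hypothetical.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists
    (assumes neither list is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-user ratings: one interface section's usability
# score versus an overall acceptance score from the questionnaire.
section_scores = [3, 4, 2, 5, 4, 3]
acceptance     = [2, 4, 1, 5, 5, 3]
r = pearson(section_scores, acceptance)  # strong positive association
```

    A section whose ratings track acceptance strongly, as here, would be a candidate for the focused think-aloud phase.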

  3. Frequency response of electrochemical cells

    NASA Technical Reports Server (NTRS)

    Thomas, Daniel L.

    1990-01-01

    The main objective was to examine the feasibility of using frequency response techniques (1) as a tool in destructive physical analysis of batteries, particularly for estimating electrode structural parameters such as specific area, porosity, and tortuosity and (2) as a non-destructive testing technique for obtaining information such as state of charge and acceptability for space flight. The phenomena that contribute to the frequency response of an electrode include: (1) double layer capacitance; (2) Faradaic reaction resistance; (3) mass transfer (Warburg) impedance; and (4) ohmic solution resistance. Nickel cadmium cells were investigated in solutions of KOH. A significant amount of data was acquired. Quantitative data analysis, using the developed software, is planned for the future.

  4. Recent enhancements to and applications of the SmartBrick structural health monitoring platform

    NASA Astrophysics Data System (ADS)

    Gunasekaran, A.; Cross, S.; Patel, N.; Sedigh, S.

    2012-04-01

    The SmartBrick network is an autonomous and wireless solution for structural health monitoring of civil infrastructures. The base station is currently in its third generation and has been laboratory- and field-tested in the United States and Italy. The second generation of the sensor nodes has been laboratory-tested as of publication. In this paper, we present recent enhancements made to hardware and software of the SmartBrick platform. Salient improvements described include the development of a new base station with fully-integrated long-range GSM (cellular) and short-range ZigBee communication. The major software improvement described in this paper is migration to the ZigBee PRO stack, which was carried out in the interest of interoperability. To broaden the application of the platform to critical environments that require survivability and fault tolerance, we have striven to achieve compliance with military standards in the areas of hardware, software, and communication. We describe these efforts and present a survey of the military standards investigated. Also described is instrumentation of a three-span experimental bridge in Washington County, Missouri, with the SmartBrick platform. The sensors, whose output is conditioned and multiplexed, include strain gauges, thermocouples, push potentiometers, and three-axis inclinometers. Data collected is stored on site and reported over the cellular network. Real-time alerts are generated if any monitored parameter falls outside its acceptable range. Redundant sensing and communication provide reliability and facilitate corroboration of the data collected. A web interface is used to issue remote configuration commands and to facilitate access to and visualization of the data collected.

  5. Accuracy of metric sex analysis of skeletal remains using Fordisc based on a recent skull collection.

    PubMed

    Ramsthaler, F; Kreutz, K; Verhoff, M A

    2007-11-01

    It has been generally accepted in skeletal sex determination that the use of metric methods is limited due to the population dependence of the multivariate algorithms. The aim of the study was to verify the applicability of software-based sex estimations outside the reference population group for which discriminant equations have been developed. We examined 98 skulls from recent forensic cases of known age, sex, and Caucasian ancestry from cranium collections in Frankfurt and Mainz (Germany) to determine the accuracy of sex determination using the statistical software solution Fordisc which derives its database and functions from the US American Forensic Database. In a comparison between metric analysis using Fordisc and morphological determination of sex, average accuracy for both sexes was 86 vs 94%, respectively, and males were identified more accurately than females. The ratio of the true test result rate to the false test result rate was not statistically different for the two methodological approaches at a significance level of 0.05 but was statistically different at a level of 0.10 (p=0.06). Possible explanations for this difference comprise different ancestry, age distribution, and socio-economic status compared to the Fordisc reference sample. It is likely that a discriminant function analysis on the basis of more similar European reference samples will lead to more valid and reliable sexing results. The use of Fordisc as a single method for the estimation of sex of recent skeletal remains in Europe cannot be recommended without additional morphological assessment and without a built-in software update based on modern European reference samples.

  6. The development of a clinical outcomes survey research application: Assessment CenterSM

    PubMed Central

    Rothrock, Nan E.; Hanrahan, Rachel T.; Jansky, Liz J.; Harniss, Mark; Riley, William

    2013-01-01

    Introduction The National Institutes of Health sponsored Patient-Reported Outcome Measurement Information System (PROMIS) aimed to create item banks and computerized adaptive tests (CATs) across multiple domains for individuals with a range of chronic diseases. Purpose Web-based software was created to enable a researcher to create study-specific Websites that could administer PROMIS CATs and other instruments to research participants or clinical samples. This paper outlines the process used to develop a user-friendly, free, Web-based resource (Assessment CenterSM) for storage, retrieval, organization, sharing, and administration of patient-reported outcomes (PRO) instruments. Methods Joint Application Design (JAD) sessions were conducted with representatives from numerous institutions in order to supply a general wish list of features. Use Cases were then written to ensure that end user expectations matched programmer specifications. Program development included daily programmer “scrum” sessions, weekly Usability Acceptability Testing (UAT) and continuous Quality Assurance (QA) activities pre- and post-release. Results Assessment Center includes features that promote instrument development including item histories, data management, and storage of statistical analysis results. Conclusions This case study of software development highlights the collection and incorporation of user input throughout the development process. Potential future applications of Assessment Center in clinical research are discussed. PMID:20306332

  7. Listening to the student voice to improve educational software.

    PubMed

    van Wyk, Mari; van Ryneveld, Linda

    2017-01-01

    Academics often develop software for teaching and learning purposes with the best of intentions, only to be disappointed by the low acceptance rate of the software by their students once it is implemented. In this study, the focus is on software that was designed to enable veterinary students to record their clinical skills. A pilot of the software clearly showed that the program had not been received as well as had been anticipated, and therefore the researchers used a group interview and a questionnaire with closed-ended and open-ended questions to obtain the students' feedback. The open-ended questions were analysed with conceptual content analysis, and themes were identified. Students made valuable suggestions about what they regarded as important considerations when a new software program is introduced. The most important lesson learnt was that students cannot always predict their needs accurately if they are asked for input prior to the development of software. For that reason student input should be obtained on a continuous and regular basis throughout the design and development phases.

  8. Simulation Testing of Embedded Flight Software

    NASA Technical Reports Server (NTRS)

    Shahabuddin, Mohammad; Reinholtz, William

    2004-01-01

    Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution time on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
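    The scale-factor idea can be sketched as a clock that reports simulated target-CPU time instead of workstation time. The class below is an illustration of the concept only; the actual VRT mechanism is not described in detail in the record.

```python
import time

class ScaledClock:
    """Simulated target-CPU clock: advances `scale_factor` times as
    fast as the workstation's wall clock, so software timed against it
    behaves as if running on a slower (or faster) target CPU."""

    def __init__(self, scale_factor: float):
        self.scale = scale_factor          # target time / workstation time
        self._start = time.perf_counter()  # workstation reference point

    def now(self) -> float:
        """Elapsed simulated target-CPU seconds."""
        return (time.perf_counter() - self._start) * self.scale
```

    With a calibrated scale factor, timeouts and scheduling deadlines in the flight software can be exercised on the workstation in close-to-real-time fashion.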

  9. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
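    Of the techniques surveyed, the recovery block is the most direct to illustrate: run the primary version, check its result with an acceptance test, and fall back to alternate versions on failure. A minimal sketch (the helper names and the toy versions are ours):

```python
def recovery_block(primary, alternates, acceptance_test, *args):
    """Recovery-block pattern: try the primary version first; if its
    result fails the acceptance test (or it raises), fall back to the
    alternates in order. Raises RuntimeError if every version fails."""
    for version in (primary, *alternates):
        try:
            result = version(*args)
        except Exception:
            continue  # treat a crash like a failed acceptance test
        if acceptance_test(result, *args):
            return result
    raise RuntimeError("all versions failed the acceptance test")

# Toy example: integer square root, with a deliberately buggy primary.
def buggy_isqrt(n):
    return n // 2          # wrong for most inputs

def newton_isqrt(n):
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

def accept(result, n):
    return result * result <= n < (result + 1) ** 2

print(recovery_block(buggy_isqrt, [newton_isqrt], accept, 100))  # -> 10
```

    The pattern's effectiveness rests on the same diversity assumption as N-version programming: the alternate should not share the primary's fault.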

  10. Automated Measurement of Visual Acuity in Pediatric Ophthalmic Patients Using Principles of Game Design and Tablet Computers.

    PubMed

    Aslam, Tariq M; Tahir, Humza J; Parry, Neil R A; Murray, Ian J; Kwak, Kun; Heyes, Richard; Salleh, Mahani M; Czanner, Gabriela; Ashworth, Jane

    2016-10-01

    To report on the utility of a computer tablet-based method for automated testing of visual acuity in children based on the principles of game design. We describe the testing procedure and present repeatability as well as agreement of the score with accepted visual acuity measures. Reliability and validity study. Setting: Manchester Royal Eye Hospital Pediatric Ophthalmology Outpatients Department. Total of 112 sequentially recruited patients. For each patient, 1 eye was tested with the Mobile Assessment of Vision by intERactIve Computer for Children (MAVERIC-C) system, consisting of a software application running on a computer tablet, housed in a bespoke viewing chamber. The application elicited touch screen responses using a game design to encourage compliance and automatically acquire visual acuity scores of participating patients. Acuity was then assessed by an examiner with a standard chart-based near ETDRS acuity test before the MAVERIC-C assessment was repeated. Reliability of MAVERIC-C near visual acuity score and agreement of MAVERIC-C score with near ETDRS chart for visual acuity. Altogether, 106 children (95%) completed the MAVERIC-C system without assistance. The vision scores demonstrated satisfactory reliability, with test-retest VA scores having a mean difference of 0.001 (SD ±0.136) and limits of agreement of 2 SD (LOA) of ±0.267. Comparison with the near ETDRS chart showed agreement with a mean difference of -0.0879 (±0.106) with LOA of ±0.208. This study demonstrates promising utility for software using a game design to enable automated testing of acuity in children with ophthalmic disease in an objective and accurate manner. Copyright © 2016 Elsevier Inc. All rights reserved.
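    The repeatability figures quoted (a mean difference with SD, and 2 SD limits of agreement) follow the standard Bland-Altman computation, which can be sketched as follows. The paired scores are hypothetical.

```python
from math import sqrt

def limits_of_agreement(a, b):
    """Mean difference and Bland-Altman 2*SD limits of agreement for
    paired measurements (e.g., test vs retest logMAR acuity scores)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))  # sample SD
    return mean, 2 * sd

# Hypothetical paired logMAR scores from two sittings:
test   = [0.10, 0.20, 0.30, 0.00, 0.40]
retest = [0.12, 0.18, 0.32, 0.02, 0.38]
mean_diff, loa = limits_of_agreement(test, retest)
```

    A mean difference near zero with narrow limits, as in the study's ±0.267 logMAR, indicates acceptable test-retest agreement.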

  11. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, reuse software, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  12. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  13. Educational Software Acquisition for Microcomputers.

    ERIC Educational Resources Information Center

    Erikson, Warren; Turban, Efraim

    1985-01-01

    Examination of issues involved in acquiring appropriate microcomputer software for higher education focuses on the following points: developing your own software; finding commercially available software; using published evaluations; pre-purchase testing; customizing and adapting commercial software; post-purchase testing; and software use. A…

  14. Tactical Approaches for Making a Successful Satellite Passive Microwave ESDR

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Gotberg, J.; Long, D. G.; Paget, A. C.

    2014-12-01

    Our NASA MEaSUREs project is producing a new, enhanced resolution gridded Earth System Data Record for the entire satellite passive microwave (SMMR, SSM/I-SSMIS and AMSR-E) time series. Our project goals are twofold: to produce a well-documented, consistently processed, high-quality historical record at higher spatial resolutions than have previously been available, and to transition the production software to the NSIDC DAAC for ongoing processing after our project completion. In support of these goals, our distributed team at BYU and NSIDC faces project coordination challenges to produce a high-quality data set that our user community will accept as a replacement for the currently available historical versions of these data. We work closely with our DAAC liaison on format specifications, data and metadata plans, and project progress. In order for the user community to understand and support our project, we have solicited a team of Early Adopters who are reviewing and evaluating a prototype version of the data. Early Adopter feedback will be critical input to our final data content and format decisions. For algorithm transparency and accountability, we have released an Algorithm Theoretical Basis Document (ATBD) and detailed supporting technical documentation, with rationale for all algorithm implementation decisions. For distributed team management, we are using collaborative tools for software revision control and issue tracking. For reliably transitioning a research-quality image reconstruction software system to production-quality software suitable for use at the DAAC, we have adopted continuous integration methods for running automated regression testing. Our presentation will summarize both advantages and challenges of each of these tactics in ensuring production of a successful ESDR and an enduring production software system.

  15. The successful implementation of a licensed data management interface between a Sunquest(®) laboratory information system and an AB SCIEX™ mass spectrometer.

    PubMed

    French, Deborah; Terrazas, Enrique

    2013-01-01

    Interfacing complex laboratory equipment to laboratory information systems (LIS) has become a more commonly encountered problem in clinical laboratories, especially for instruments that do not have an interface provided by the vendor. Liquid chromatography-tandem mass spectrometry is a great example of such complex equipment, and has become a frequent addition to clinical laboratories. As the testing volume on such instruments can be significant, manual data entry will also be considerable and the potential for concomitant transcription errors arises. Due to this potential issue, our aim was to interface an AB SCIEX™ mass spectrometer to our Sunquest(®) LIS. We licensed software for the data management interface from the University of Pittsburgh, but extended this work as follows: the interface was designed so that it would accept a text file exported from the AB SCIEX™ 5500 QTrap(®) mass spectrometer, pre-process the file (using newly written code) into the correct format and upload it into Sunquest(®) via file transfer protocol. The licensed software handled the majority of the interface tasks with the exception of converting the output from the Analyst(®) software to the required Sunquest(®) import format. This required writing of a "pre-processor" by one of the authors which was easily integrated with the supplied software. We successfully implemented the data management interface licensed from the University of Pittsburgh. Given the coding that was required to write the pre-processor, and alterations to the source code that were performed when debugging the software, we would suggest that before a laboratory decides to implement such an interface, it would be necessary to have a competent computer programmer available.
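    A pre-processor of the kind described is typically a short format-translation script. The sketch below is purely illustrative: the column names, the tab-delimited export layout, and the pipe-delimited import layout are assumptions, since neither the Analyst(®) export format nor the Sunquest(®) import format is specified in the record.

```python
import csv
import io

def preprocess(analyst_export: str) -> str:
    """Convert a tab-delimited instrument export (hypothetical
    sample_id, test_code, result columns) into pipe-delimited
    upload records, one per result."""
    reader = csv.DictReader(io.StringIO(analyst_export), delimiter="\t")
    lines = []
    for row in reader:
        lines.append("|".join([row["sample_id"], row["test_code"],
                               row["result"]]))
    return "\n".join(lines) + "\n"

export = ("sample_id\ttest_code\tresult\n"
          "S001\t25OHD\t32.5\n"
          "S002\t25OHD\t18.1\n")
print(preprocess(export))
```

    The translated text would then be handed to the licensed interface software for FTP upload into the LIS.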

  16. Feasibility study of a clinical decision support system for the management of multimorbid seniors in primary care: study protocol.

    PubMed

    Weltermann, Birgitta; Kersting, Christine

    2016-01-01

    Care for seniors is complex because patients often have more than one disease, one medication, and one physician. It is a key challenge for primary care physicians to structure the various aspects of each patient's care, to integrate each patient's preferences, and to maintain a long-term overview. This article describes the design for the development and feasibility testing of the clinical decision support system (CDSS) eCare*Seniors© which is electronic health record (EHR)-based, allowing for a long-term, comprehensive, evidence-based, and patient preference-oriented management of multimorbid seniors. This mixed-methods study is designed in three steps. First, focus groups and practice observations will be conducted to develop criteria for software design from a physicians' and practice assistants' perspective. Second, based on these criteria, a CDSS prototype will be developed. Third, the prototype's feasibility will be tested by five primary care practices in the care of 30 multimorbid seniors. Primary outcome is the usability of the software measured by the validated system usability scale (SUS) after 3 months. Secondary outcomes are the (a) willingness to routinely use the CDSS, (b) degree of utilization of the CDSS, (c) acceptance of the CDSS, (d) willingness of the physicians to purchase the CDSS, and (e) willingness of the practice assistants to use the CDSS in the long term. These outcomes will be measured using semi-structured interviews and software usage data. If the SUS score reaches ≥70 %, feasibility testing will be judged successful. Otherwise, the CDSS prototype will be refined according to the users' needs and retested by the physicians and practice assistants until it is fully adapted to their requirements and reaches a usability score ≥70 %. The study will support the development of a CDSS which is primary care-defined, user-friendly, easy-to-comprehend, workflow-oriented, and comprehensive. 
The software will assist physicians and practices in their long-term, individualized care for multimorbid seniors. German Clinical Trials Register, DRKS00008777.
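    The SUS threshold used as the success criterion is computed with the standard scoring rule for the ten-item scale:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items (index 0, 2, ...) contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled
    by 2.5 onto a 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Maximally positive answers (5 on odd items, 1 on even items) -> 100:
assert sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) == 100.0
```

    Against this rule, the study's ≥70 % criterion corresponds to a SUS score of 70 or more out of 100, commonly read as above-average usability.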

  17. Factors That Affect Software Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability. A piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testabilities from the outset, i.e., create software with as high a degree of testability as possible to avoid the problems of having undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, I identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.
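    Information loss can be made concrete by comparing how many distinct inputs a function collapses onto the same output: the more collapsing, the more chances an internal fault has to produce an output that still looks correct. The toy measure below is our own illustration, not the metric of [Voas91b].

```python
from collections import Counter

def information_loss_ratio(func, inputs):
    """Fraction of distinct inputs collapsed onto shared outputs.
    Heavy collapsing means an internal fault is less likely to
    propagate to a visibly wrong output, i.e., lower testability."""
    outputs = Counter(func(x) for x in inputs)
    distinct_inputs = len(set(inputs))
    distinct_outputs = len(outputs)
    return 1 - distinct_outputs / distinct_inputs

inputs = range(100)
lossless = lambda x: x * 3 + 1   # injective: no information loss
lossy = lambda x: x % 4          # collapses 100 inputs onto 4 outputs

print(information_loss_ratio(lossless, inputs))  # -> 0.0
print(information_loss_ratio(lossy, inputs))     # -> 0.96
```

    A component like `lossy`, whose outputs discard most of the input's information, offers testing few opportunities to observe an internal fault.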

  18. Evaluation of a whole-farm model for pasture-based dairy systems.

    PubMed

    Beukes, P C; Palliser, C C; Macdonald, K A; Lancaster, J A S; Levy, G; Thorrold, B S; Wastney, M E

    2008-06-01

    In the temperate climate of New Zealand, animals can be grazed outdoors all year round. The pasture is supplemented with conserved feed, with the amount being determined by seasonal pasture growth, genetics of the herd, and stocking rate. The large number of factors that affect production makes it impractical and expensive to use field trials to explore all the farm system options. A model of an in situ-grazed pasture system has been developed to provide a tool for developing and testing novel farm systems; for example, different levels of bought-in supplements and different levels of nitrogen fertilizer application, to maintain sustainability or environmental integrity and profitability. It consists of a software framework that links climate information, on a daily basis, with dynamic, mechanistic component-models for pasture growth and animal metabolism, as well as management policies. A unique feature is that the component models were developed and published by other groups, and are retained in their original software language. The aim of this study was to compare the model, called the whole-farm model (WFM) with a farm trial that was conducted over 3 yr and in which data were collected specifically for evaluating the WFM. Data were used from the first year to develop the WFM and data from the second and third year to evaluate the model. The model predicted annual pasture production, end-of-season cow liveweight, cow body condition score, and pasture cover across season with relative prediction error <20%. Milk yield and milksolids (fat + protein) were overpredicted by approximately 30% even though both annual and monthly pasture and supplement intake were predicted with acceptable accuracy, suggesting that the metabolic conversion of feed to fat, protein, and lactose in the mammary gland needs to be refined. 
Because feed growth and intake predictions were acceptable, economic predictions can be made using the WFM, with an adjustment for milk yield, to test different management policies, alterations in climate, or the use of genetically improved animals, pastures, or crops.
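The evaluation criterion above (relative prediction error below 20%) can be sketched with a minimal calculation, assuming the common definition of RPE as the root mean square prediction error expressed as a percentage of the observed mean; the sample values below are hypothetical, not data from the study.

```python
import math

def relative_prediction_error(observed, predicted):
    """Root mean square prediction error expressed as a percentage
    of the observed mean -- a common definition of relative
    prediction error (RPE) in model-evaluation studies."""
    n = len(observed)
    rmspe = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmspe / (sum(observed) / n)

# Hypothetical monthly pasture-cover values (kg DM/ha), not study data
obs = [2100, 2300, 2500, 2400]
pred = [2000, 2250, 2600, 2350]
print(round(relative_prediction_error(obs, pred), 1))  # 3.4
```

Under this definition, a prediction series whose RPE stays under 20% would count as acceptable by the study's criterion.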

  19. User interface design for mobile-based sexual health interventions for young people: design recommendations from a qualitative study on an online Chlamydia clinical care pathway.

    PubMed

    Gkatzidou, Voula; Hone, Kate; Sutcliffe, Lorna; Gibbs, Jo; Sadiq, Syed Tariq; Szczepura, Ala; Sonnenberg, Pam; Estcourt, Claudia

    2015-08-26

    The increasing pervasiveness of mobile technologies has given potential to transform healthcare by facilitating clinical management using software applications. These technologies may provide valuable tools in sexual health care and potentially overcome existing practical and cultural barriers to routine testing for sexually transmitted infections. In order to inform the design of a mobile health application for STIs that supports self-testing and self-management by linking diagnosis with online care pathways, we aimed to identify the dimensions and range of preferences for user interface design features among young people. Nine focus group discussions were conducted (n = 49) with two age-stratified samples (16 to 18 and 19 to 24 year olds) of young people from Further Education colleges and Higher Education establishments. Discussions explored young people's views with regard to: the software interface; the presentation of information; and the ordering of interaction steps. Discussions were audio recorded and transcribed verbatim. Interview transcripts were analysed using thematic analysis. Four over-arching themes emerged: privacy and security; credibility; user journey support; and the task-technology-context fit. From these themes, 20 user interface design recommendations for mobile health applications are proposed. For participants, although privacy was a major concern, security was not perceived as a major potential barrier as participants were generally unaware of potential security threats and inherently trusted new technology. Customisation also emerged as a key design preference to increase attractiveness and acceptability. Considerable effort should be focused on designing healthcare applications from the patient's perspective to maximise acceptability. 
The design recommendations proposed in this paper provide a valuable point of reference for the health design community to inform development of mobile-based health interventions for the diagnosis and treatment of a number of other conditions for this target group, while stimulating conversation across multidisciplinary communities.

  20. Application experience with the NASA aircraft interrogation and display system - A ground-support equipment for digital flight systems

    NASA Technical Reports Server (NTRS)

    Glover, R. D.

    1983-01-01

    The NASA Dryden Flight Research Facility has developed a microprocessor-based, user-programmable, general-purpose aircraft interrogation and display system (AIDS). The hardware and software of this ground-support equipment have been designed to permit diverse applications in support of aircraft digital flight-control systems and simulation facilities. AIDS is often employed to provide engineering-units display of internal digital system parameters during development and qualification testing. Such visibility into the system under test has proved to be a key element in the final qualification testing of aircraft digital flight-control systems. Three first-generation 8-bit units are now in service in support of several research aircraft projects, and user acceptance has been high. A second-generation design, extended AIDS (XAIDS), incorporating multiple 16-bit processors, is now being developed to support the forward swept wing aircraft project (X-29A). This paper outlines the AIDS concept, summarizes AIDS operational experience, and describes the planned XAIDS design and mechanization.

  1. Detection of incipient defects in cables by partial discharge signal analysis

    NASA Astrophysics Data System (ADS)

    Martzloff, F. D.; Simmon, E.; Steiner, J. P.; Vanbrunt, R. J.

    1992-07-01

As one of the objectives of a program aimed at assessing test methods for in-situ detection of incipient defects in cables due to aging, a laboratory test system was implemented to demonstrate that the partial discharge analysis method can be successfully applied to low-voltage cables. Previous investigations generally involved cables rated 5 kV or higher, while the objective of the program focused on the lower voltages associated with the safety systems of nuclear power plants. The defect detection system implemented for the project was based on commercially available signal analysis hardware and software packages, customized for the specific purposes of the project. The test specimens included several cables of the type found in nuclear power plants, with artificial defects introduced at various points along the cable. The results indicate that partial discharge analysis is indeed capable of detecting incipient defects in low-voltage cables. There are, however, some limitations of a technical and non-technical nature that need further exploration before this method can be accepted in the industry.

  2. Formulation of a drinkable peanut-based therapeutic food for malnourished children using plant sources.

    PubMed

    Nabuuma, Deborah; Nakimbugwe, Dorothy; Byaruhanga, Yusuf B; Saalia, Firibu Kwesi; Phillips, Robert Dixon; Chen, Jinru

    2013-06-01

High ingredient costs continue to hamper local production of therapeutic foods (TFs). Development of formulations without milk, the most expensive ingredient, is one way of reducing cost. This study formulated a ready-to-drink peanut-based TF that matched the nutrient composition of F100 using plant sources. Three least-cost formulations, namely A, B, and C, were designed using computer formulation software with peanuts, beans, sesame, cowpeas, and grain amaranth as ingredients. A 100 g portion of the TF provided 101-111 kcal, 5 g protein, and 5.3-6.5 g fat. Consumer acceptability hedonic tests showed that the products were liked (extremely and moderately) by 62-65% of mothers. These results suggest that nutrient-dense TFs formulated from only plant sources have the potential to be used in the rehabilitation phase of the management of malnourished children after clinical testing.
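Least-cost formulation of the kind the abstract describes is usually posed as a constrained optimization: meet minimum nutrient targets at the lowest ingredient cost. A minimal brute-force sketch follows; all prices and nutrient values are hypothetical and not taken from the study, and real formulation software would use linear programming over many more nutrients.

```python
# Minimal least-cost blend sketch. All costs and nutrient values
# below are hypothetical illustrations, not data from the study.
ingredients = {
    # name: (cost $/kg, protein g/100g, fat g/100g)
    "peanut": (1.20, 26, 49),
    "bean":   (0.80, 21, 1),
    "sesame": (1.50, 18, 50),
}

def least_cost_blend(targets, step=0.05):
    """Exhaustive search over blend fractions (in increments of
    `step`) that meet minimum protein and fat targets per 100 g
    at the lowest ingredient cost."""
    names = list(ingredients)
    best = None
    steps = int(round(1 / step)) + 1
    for i in range(steps):
        for j in range(steps - i):
            fracs = [i * step, j * step, 1 - (i + j) * step]
            protein = sum(f * ingredients[n][1] for f, n in zip(fracs, names))
            fat = sum(f * ingredients[n][2] for f, n in zip(fracs, names))
            cost = sum(f * ingredients[n][0] for f, n in zip(fracs, names))
            if protein >= targets["protein"] and fat >= targets["fat"]:
                if best is None or cost < best[0]:
                    best = (cost, dict(zip(names, fracs)))
    return best

cost, blend = least_cost_blend({"protein": 20, "fat": 15})
print(round(cost, 2))  # 0.92 with these hypothetical inputs
```

The same structure scales to more ingredients and constraints by swapping the grid search for a linear-programming solver.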

  3. Human-Centered Development of an Online Social Network for Metabolic Syndrome Management.

    PubMed

    Núñez-Nava, Jefersson; Orozco-Sánchez, Paola A; López, Diego M; Ceron, Jesus D; Alvarez-Rosero, Rosa E

    2016-01-01

According to the International Diabetes Federation (IDF), a quarter of the world's population has Metabolic Syndrome (MS). The objective was to develop an online social network for patients who suffer from Metabolic Syndrome, based on the recommendations and requirements of Human-Centered Design, and to assess users' degree of satisfaction with it. Following the recommendations of ISO 9241-210 for Human-Centered Design (HCD), an online social network was designed to promote physical activity and healthy nutrition. In order to guarantee the active participation of the users during the development of the social network, a survey, an in-depth interview, a focus group, and usability tests were carried out with people suffering from MS. The study demonstrated how the different activities, recommendations, and requirements of ISO 9241-210 are integrated into a traditional software development process. Early usability tests demonstrated that the user's acceptance and the effectiveness and efficiency of the social network are satisfactory.

  4. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

A methodology for testing fault-tolerant software is presented. Testing fault-tolerant software is difficult because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software: namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
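The masking effect described above can be seen in a minimal majority-voter sketch (the channel functions are hypothetical, not from the paper): a bug in one redundant channel never reaches the voted output, which is exactly why the methodology recommends testing before the redundancy is in place.

```python
from collections import Counter

def majority_vote(channels):
    """Return the value agreed on by the most redundant channels."""
    return Counter(channels).most_common(1)[0][0]

def faulty_channel(x):
    return x + 1  # injected channel-level bug

def good_channel(x):
    return x

# With the voter in place, the injected fault is invisible at the output:
outputs = [good_channel(7), good_channel(7), faulty_channel(7)]
print(majority_vote(outputs))  # 7 -- the bug in channel 3 is masked
```

Testing each channel function in isolation, before the voter is added, would expose the defect immediately.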

  5. Dental Informatics tool "SOFPRO" for the study of oral submucous fibrosis.

    PubMed

    Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K

    2016-01-01

Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education, and research have been developed. However, very few such applications have been developed and utilized in epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population of Asian countries. The aim was to design and develop a user-friendly software application for the descriptive epidemiological study of OSF. With the help of a software engineer, a computer program, SOFPRO, was designed and developed using MS Visual Basic 6.0 (VB), MS Access 2000, Crystal Reports 7.0, and MS Paint on the Windows XP operating system. For analysis, the available OSF data from the departmental precancer registry were fed into SOFPRO. Known, unknown, and null data were successfully accepted during data entry and represented in the data analysis of OSF. The smooth working of SOFPRO and its correct data flow were tested against real-time data of OSF. SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management, and analysis of OSF patient data.

  6. Seamless transitions from early prototypes to mature operational software - A technology that enables the process for planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda S.; Wunderlich, Dana A.; Willoughby, John K.

    1992-01-01

New and innovative software technology is presented that provides a cost-effective bridge for smoothly transitioning prototype software, in the field of planning and scheduling, into an operational environment. Specifically, this technology mixes the flexibility and human design efficiency of dynamic data typing with the rigor and run-time efficiencies of static data typing. This new technology provides a valuable tool for conducting the extensive, up-front system prototyping that leads to specifying the correct system and producing a reliable, efficient version that will be operationally effective and will be accepted by the intended users.

  7. Programming Makes Software; Support Makes Users

    NASA Astrophysics Data System (ADS)

    Batcheller, A. L.

    2010-12-01

Skilled software engineers may build fantastic software for climate modeling, yet fail to achieve their project’s objectives. Software support and related activities are just as critical as writing software. This study followed three different software projects in the climate sciences, using interviews, observation, and document analysis to examine the value added by support work. Supporting the project and interacting with users was a key task for software developers, who often spent 50% of their time on it. Such support work most often involved replying to questions on an email list, but also included talking to users on teleconference calls and in person. Software support increased adoption by building the software’s reputation and showing individuals how the software could meet their needs. In the process of providing support, developers often learned of new requirements as users reported features they desired and bugs they found. As software matures and gains widespread use, support work often increases. In fact, such increases can be one signal that the software has achieved broad acceptance. Maturing projects also find demand for instructional classes, online tutorials, and detailed examples of how to use the software. The importance of support highlights the fact that building software systems involves both social and technical aspects. Yes, we need to build the software, but we also need to “build” the users and practices that can take advantage of it.

  8. Future Field Programmable Gate Array (FPGA) Design Methodologies and Tool Flows

    DTIC Science & Technology

    2008-07-01

    a) that the results are accepted by users, vendors, … and (b) that they can quantitatively explain HPC rules of thumb such as: “OpenMP is easier...in productivity that were demonstrated by traditional software systems. Using advances in software productivity as a guide , we have identified three...of this study we developed a productivity model to guide our investigation (14). Models have limitations and the model we propose is no exception

  9. Waste receiving and processing facility module 1 data management system software project management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, R.E.

    1994-11-02

    This document provides the software development plan for the Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-026). The DMS will collect, store, and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal.

  10. STS-51 pad abort. OV103-engine 2033 (ME-2) fuel flowmeter sensor open circuit

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The STS-51 initial launch attempt of Discovery (OV-103) was terminated on KSC launch pad 39B on 12 Aug. 1993 at 9:12 AM E.S.T. due to a sensor redundancy failure in the liquid hydrogen system of ME-2 (Engine 2033). The event description and time line are summarized. Propellant loading was initiated on 12 Aug. 1993 at 12:00 AM EST. All space shuttle main engine (SSME) chill parameters and Launch Commit Criteria (LCC) were nominal. At engine start plus 1.34 seconds a Failure Identification (FID) was posted against Engine 2033 for exceeding the 1800 spin intra-channel (A1-A2) Fuel Flowrate sensor channel qualification limit. The engine was shut down at 1.50 seconds followed by Engines 2032 and 2030. All shut down sequences were nominal and the mission was safely aborted. SSME Avionics hardware and software performed nominally during the incident. A review of vehicle data table (VDT) data and controller software logic revealed no failure indications other than the single FID 111-101, Fuel Flowrate Intra-Channel Test Channel A disqualification. Software logic was executed according to requirements and there was no anomalous controller software operation. Immediately following the abort, a Rocketdyne/NASA failure investigation team was assembled. The team successfully isolated the failure cause to an open circuit in a Fuel Flowrate Sensor. This type of failure has occurred eight previous times in ground testing. The sensor had performed acceptably on three previous flights of the engine and SSME flight history shows 684 combined fuel flow rate sensor channel flights without failure. The disqualification of an Engine 2 (SSME No. 2033) Fuel Flowrate sensor channel was a result of an instrumentation failure and not engine performance. All other engine operations were nominal. This disqualification resulted in an engine shutdown and safe sequential shutdown of all three engines prior to ignition of the solid boosters.

  11. A high order approach to flight software development and testing

    NASA Technical Reports Server (NTRS)

    Steinbacher, J.

    1981-01-01

    The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.

  12. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a testing framework for scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo oracle to test ADDA in simulating light scattering of a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, which validated ADDA for use with homogeneous sphere scatterers. I then used an experimental result obtained for light scattering of a homogeneous sphere to further validate the use of ADDA with sphere scatterers; ADDA produced light scattering simulations comparable to the experimentally measured result. Next, I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and degrees of homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
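The metamorphic-testing idea used in the thesis can be sketched in a few lines: instead of checking outputs against an exact expected value (which no oracle can supply), one checks a relation that must hold between outputs of related inputs. The `simulate` function below is a deliberately simple stand-in, not ADDA, and the scaling relation is an assumed property of this stand-in.

```python
import random

def simulate(intensities):
    """Stand-in for a numerical code lacking an exact oracle
    (hypothetical; the real system under test was ADDA)."""
    return sum(x * x for x in intensities) / len(intensities)

def metamorphic_scale_relation(inputs, k=3.0, tol=1e-9):
    """Metamorphic relation: scaling every input by k must scale the
    output by k**2. No exact expected value is needed -- only the
    relation between the two runs is checked."""
    base = simulate(inputs)
    scaled = simulate([k * x for x in inputs])
    return abs(scaled - k * k * base) <= tol * max(1.0, abs(scaled))

random.seed(0)
cases = [[random.uniform(0, 1) for _ in range(10)] for _ in range(5)]
print(all(metamorphic_scale_relation(c) for c in cases))  # True
```

For a scattering code, analogous relations might involve rotations of the scatterer or symmetry of the geometry, each checked without knowing the exact correct output.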

  13. A semi-quantitative and thematic analysis of medical student attitudes towards M-Learning.

    PubMed

    Green, Ben L; Kennedy, Iain; Hassanzadeh, Hadi; Sharma, Suneal; Frith, Gareth; Darling, Jonathan C

    2015-10-01

Smartphone and mobile application technology have in recent years furthered the development of novel learning and assessment resources. 'MBChB Mobile' is a pioneering mobile learning (M-Learning) programme at the University of Leeds, United Kingdom, which provides all senior medical students with iPhone handsets complete with academic applications, assessment software, and a virtual reflective environment. This study aimed to evaluate the impact of MBChB Mobile on student learning. Ethical approval was granted to invite fourth- and fifth-year medical students to participate in a semi-quantitative questionnaire: data were collected anonymously with informed consent and analysed where appropriate using the chi-squared test of association. Qualitative data generated through focus group participation were subjected to both content and thematic analysis. A total of 278 of 519 (53.6%) invited participants responded. Overall, 72.6% of students agreed that MBChB Mobile enhanced their learning experience; however, this was significantly related to overall usage (P < 0.001) and self-reported mobile technology proficiency (P < 0.001). Qualitative data revealed barriers to efficacy including technical software issues, non-transferability to different mobile devices, and perceived patient acceptability. As one of the largest evaluative studies, and the only quantitative study, of smartphone-assisted M-Learning in undergraduate medical education, MBChB Mobile suggests that smartphone and application technology enhances students' learning experience. Barriers to implementation may be addressed through the provision of tailored learning resources, along with user-defined support systems, and appropriate means of ensuring acceptability to patients. © 2015 John Wiley & Sons, Ltd.

  14. Beyond formalism

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1991-01-01

    The ongoing debate over the role of formalism and formal specifications in software features many speakers with diverse positions. Yet, in the end, they share the conviction that the requirements of a software system can be unambiguously specified, that acceptable software is a product demonstrably meeting the specifications, and that the design process can be carried out with little interaction between designers and users once the specification has been agreed to. This conviction is part of a larger paradigm prevalent in American management thinking, which holds that organizations are systems that can be precisely specified and optimized. This paradigm, which traces historically to the works of Frederick Taylor in the early 1900s, is no longer sufficient for organizations and software systems today. In the domain of software, a new paradigm, called user-centered design, overcomes the limitations of pure formalism. Pioneered in Scandinavia, user-centered design is spreading through Europe and is beginning to make its way into the U.S.

  15. Achieving reutilization of scheduling software through abstraction and generalization

    NASA Technical Reports Server (NTRS)

    Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael

    1995-01-01

Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.
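The data-driven scheduling pattern the abstract describes, accepting activity requests against shared resources and emitting a conflict-free schedule, can be sketched minimally. The request format and greedy placement policy below are illustrative assumptions; ROSE itself is far more general and supports customized algorithms.

```python
def schedule(requests):
    """Accept (name, start, end, resource) requests and return a
    conflict-free subset: no resource ever hosts overlapping
    activities. Requests are placed greedily in start-time order."""
    booked = {}  # resource -> list of (start, end) intervals
    placed, rejected = [], []
    for name, start, end, res in sorted(requests, key=lambda r: r[1]):
        if all(end <= s or start >= e for s, e in booked.get(res, [])):
            booked.setdefault(res, []).append((start, end))
            placed.append(name)
        else:
            rejected.append(name)
    return placed, rejected

# Hypothetical activity requests, not from the paper
reqs = [("downlink-A", 0, 4, "antenna1"),
        ("downlink-B", 3, 6, "antenna1"),   # overlaps downlink-A
        ("calibration", 5, 8, "antenna1")]
placed, rejected = schedule(reqs)
print(placed, rejected)  # ['downlink-A', 'calibration'] ['downlink-B']
```

Because the engine only consumes data (requests, resources, constraints), adapting it to a new application means changing the inputs, not the scheduling core, which is the reuse property the paper emphasizes.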

  16. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

In this paper we describe a model-based approach to testing of on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to an on-board software system. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
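The step from a formal state machine to abstract test cases can be sketched as a transition-coverage generator: for every transition, find an event sequence that drives the machine from its initial state to that transition. The mode machine below is a hypothetical illustration, not the study's model, and real tools use richer formalisms.

```python
from collections import deque

# Hypothetical on-board-software mode machine (illustrative only)
transitions = {
    ("SAFE", "arm"): "STANDBY",
    ("STANDBY", "activate"): "OPERATIONAL",
    ("OPERATIONAL", "fault"): "SAFE",
    ("STANDBY", "disarm"): "SAFE",
}

def generate_test_cases(start="SAFE"):
    """Derive abstract test cases (event sequence, expected state)
    covering every transition, via breadth-first search for the
    shortest event path from the start state to each source state."""
    cases = []
    for (src, event), dst in transitions.items():
        frontier, seen = deque([(start, [])]), {start}
        path = None
        while frontier:
            state, events = frontier.popleft()
            if state == src:
                path = events
                break
            for (s, e), d in transitions.items():
                if s == state and d not in seen:
                    seen.add(d)
                    frontier.append((d, events + [e]))
        if path is not None:
            cases.append((path + [event], dst))
    return cases

for events, expected in generate_test_cases():
    print(events, "->", expected)
```

Each abstract case would then be concretized into executable inputs and expected outputs for the software under test, which is the conversion step the paper describes.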

  17. An Architecture, System Engineering, and Acquisition Approach for Space System Software Resiliency

    NASA Astrophysics Data System (ADS)

    Phillips, Dewanne Marie

Software-intensive space systems can harbor defects and vulnerabilities that may enable external adversaries or malicious insiders to disrupt or disable system functions, risking mission compromise or loss. Mitigating this risk demands a sustained focus on the security and resiliency of the system architecture, including software, hardware, and other components. Robust software engineering practices contribute to the foundation of a resilient system so that the system "can take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". Software resiliency must be a priority and addressed early in life cycle development to contribute to a secure and dependable space system. Those who develop, implement, and operate software-intensive space systems must determine the factors and systems engineering practices to address when investing in software resiliency. This dissertation offers methodical approaches for improving space system resiliency through software architecture design, system engineering, and increased software security, thereby reducing the risk of latent software defects and vulnerabilities. By providing greater attention to the early life cycle phases of development, we can alter the engineering process to help detect, eliminate, and avoid vulnerabilities before space systems are delivered. To achieve this objective, this dissertation identifies knowledge, techniques, and tools that engineers and managers can utilize to help them recognize how vulnerabilities are produced and discovered, so that they can learn to circumvent them in future efforts. We conducted a systematic review of existing architectural practices, standards, security and coding practices, and various threats, defects, and vulnerabilities that impact space systems, drawing on hundreds of relevant publications and interviews of subject matter experts.
We expanded on the system-level body of knowledge for resiliency and identified a new software architecture framework and acquisition methodology to improve the resiliency of space systems from a software perspective with an emphasis on the early phases of the systems engineering life cycle. This methodology involves seven steps: 1) Define technical resiliency requirements, 1a) Identify standards/policy for software resiliency, 2) Develop a request for proposal (RFP)/statement of work (SOW) for resilient space systems software, 3) Define software resiliency goals for space systems, 4) Establish software resiliency quality attributes, 5) Perform architectural tradeoffs and identify risks, 6) Conduct architecture assessments as part of the procurement process, and 7) Ascertain space system software architecture resiliency metrics. Data illustrates that software vulnerabilities can lead to opportunities for malicious cyber activities, which could degrade the space mission capability for the user community. Reducing the number of vulnerabilities by improving architecture and software system engineering practices can contribute to making space systems more resilient. Since cyber-attacks are enabled by shortfalls in software, robust software engineering practices and an architectural design are foundational to resiliency, which is a quality that allows the system to "take a hit to a critical component and recover in a known, bounded, and generally acceptable period of time". To achieve software resiliency for space systems, acquirers and suppliers must identify relevant factors and systems engineering practices to apply across the lifecycle, in software requirements analysis, architecture development, design, implementation, verification and validation, and maintenance phases.

  18. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
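The automated test-verification step described above (comparing flight software test data against high-fidelity simulation output) can be sketched as a tolerance comparison; the data layout and parameter names below are hypothetical, since the MAP tooling was script- and database-driven rather than this simple.

```python
def verify_against_sim(telemetry, sim, tolerances):
    """Compare flight-software telemetry against high-fidelity
    simulation output, flagging every sample that differs by more
    than a per-parameter tolerance. Returns a list of failures,
    empty when the test passes."""
    failures = []
    for param, tol in tolerances.items():
        for i, (fsw, hifi) in enumerate(zip(telemetry[param], sim[param])):
            if abs(fsw - hifi) > tol:
                failures.append((param, i, fsw, hifi))
    return failures

# Hypothetical synchronized samples for one ACS parameter
telemetry = {"wheel_speed": [100.0, 101.2, 99.8]}
sim = {"wheel_speed": [100.0, 101.0, 99.9]}
print(verify_against_sim(telemetry, sim, {"wheel_speed": 0.5}))  # []
```

Automating this comparison (rather than eyeballing plots) is what removes the analysis bottleneck the paper describes, and the same failure list can drive automatic comparison-plot generation.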

  19. Using Automation to Improve the Flight Software Testing Process

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.

    2001-01-01

One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.

  20. Effect of Nursing Intervention on Mothers’ Knowledge of Cervical Cancer and Acceptance of Human Papillomavirus Vaccination for their Adolescent Daughters in Abuja – Nigeria

    PubMed Central

    Odunyemi, Funmilola T.; Ndikom, Chizoma M.; Oluwatosin, O. Abimbola

    2018-01-01

    Objective: The aim of this study was to evaluate the effect of a nursing intervention on mothers' knowledge of cervical cancer and acceptance of human papillomavirus (HPV) vaccination for their adolescent daughters in Abuja, Nigeria. Methods: This was a quasi-experimental study with a two-group pre- and post-test design. The study was carried out among civil-servant mothers in the Bwari (experimental group [EG]) and Kwali (control group [CG]) Area Councils of Abuja, Nigeria. One hundred and forty-six women who met the inclusion criteria were purposively selected: 69 in the EG and 77 in the CG. The intervention consisted of a two-day workshop on cervical cancer and HPV vaccination. Descriptive and inferential analyses of the data were performed using SPSS software version 20. Results: The mean age of the respondents was 35 ± 6.6 years in the EG and 41 ± 8.2 years in the CG. The mean knowledge score for cervical cancer was low at baseline in both the EG (9.58 ± 7.1) and the CG (11.61 ± 6.5); however, it increased significantly to 21.45 ± 6.2 after the intervention in the EG (P < 0.0001). Acceptance of HPV vaccination in the EG rose from 74% at baseline to 99% after the intervention, and the association between exposure to the nursing intervention and acceptance of HPV vaccination was statistically significant (P < 0.0001). Conclusions: The nursing intervention was found to increase mothers' knowledge of cervical cancer and acceptance of HPV vaccination. It is therefore recommended that nurses use every available opportunity in mothers' clinics to educate mothers on cervical cancer and HPV vaccination. PMID:29607384

  1. Flight evaluation of two-segment approaches using area navigation guidance equipment

    NASA Technical Reports Server (NTRS)

    Schwind, G. K.; Morrison, J. A.; Nylen, W. E.; Anderson, E. B.

    1976-01-01

    A two-segment noise abatement approach procedure for use on DC-8-61 aircraft in air carrier service was developed and evaluated. The approach profile and procedures were developed in a flight simulator. Full guidance is provided throughout the approach by a Collins Radio Company three-dimensional area navigation (RNAV) system which was modified to provide the two-segment approach capabilities. Modifications to the basic RNAV software included safety protection logic considered necessary for an operationally acceptable two-segment system. With an aircraft out of revenue service, the system was refined and extensively flight tested, and the profile and procedures were evaluated by representatives of the airlines, airframe manufacturers, the Air Line Pilots Association, and the Federal Aviation Administration. The system was determined to be safe and operationally acceptable. It was then placed into scheduled airline service for an evaluation during which 180 approaches were flown by 48 airline pilots. The approach was determined to be compatible with the airline operational environment, although operation of the RNAV system in the existing terminal area air traffic control environment was difficult.

  2. Chemical Genetic Screens for TDP-43 Modifiers and ALS Drug Discovery

    DTIC Science & Technology

    2012-10-01

    Mayo Clinic, United States of America Received October 11, 2011; Accepted January 5, 2012; Published February 21, 2012 Copyright: 2012 Vaccaro et al...Editor: Weidong Le, Baylor College of Medicine, Jiao Tong University School of Medicine, United States of America Received March 9, 2012; Accepted July...Grasshopper 2 Camera (Point Grey Research) at 30 Hz. The movies were then analyzed using the manual tracking plugin of ImageJ 1.45r software (NIH) and the swim

  3. Space shuttle orbiter avionics software: Post review report for the entry FACI (First Article Configuration Inspection). [including orbital flight tests integrated system

    NASA Technical Reports Server (NTRS)

    Markos, H.

    1978-01-01

    Status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for entry first article configuration inspection; and verification assessment.

  4. Software Piracy in Research: A Moral Analysis.

    PubMed

    Santillanes, Gary; Felder, Ryan Marshall

    2015-08-01

    Researchers in virtually every discipline rely on sophisticated proprietary software for their work. However, some researchers are unable to afford the licenses and instead procure the software illegally. We discuss the prohibition of software piracy by intellectual property laws, and argue that the moral basis for the copyright law offers the possibility of cases where software piracy may be morally justified. The ethics codes that scientific institutions abide by are informed by a rule-consequentialist logic: by preserving personal rights to authored works, people able to do so will be incentivized to create. By showing that the law has this rule-consequentialist grounding, we suggest that scientists who blindly adopt their institutional ethics codes will commit themselves to accepting that software piracy could be morally justified, in some cases. We hope that this conclusion will spark debate over important tensions between ethics codes, copyright law, and the underlying moral basis for these regulations. We conclude by offering practical solutions (other than piracy) for researchers.

  5. Individualized 3D printing navigation template for pedicle screw fixation in upper cervical spine

    PubMed Central

    Guo, Fei; Dai, Jianhao; Zhang, Junxiang; Ma, Yichuan; Zhu, Guanghui; Shen, Junjie; Niu, Guoqi

    2017-01-01

    Purpose Pedicle screw fixation in the upper cervical spine is a difficult and high-risk procedure. The screw is difficult to place rapidly and accurately, and misplacement can lead to serious injury of the spinal cord or vertebral artery. The aim of this study was to design an individualized 3D printing navigation template for pedicle screw fixation in the upper cervical spine. Methods Using thin-slice CT data, we employed computer software to design the navigation template for pedicle screw fixation in the upper cervical spine (atlas and axis). The upper cervical spine models and navigation templates were produced by a 3D printer at equal proportion, two sets for each case. In one set (Test group), pedicle screw placement was guided by the navigation template; in the other set (Control group), the screws were placed under fluoroscopy. According to the degree of pedicle cortex perforation and whether the screw needed to be revised, the fixation results were divided into three types: Type I, the screw is fully located within the vertebral pedicle; Type II, the degree of pedicle cortex perforation is <1 mm, with good internal fixation stability and no need for revision; Type III, the degree of pedicle cortex perforation is >1 mm, or internal fixation stability is poor and revision is needed. Type I and Type II were acceptable placements; Type III placements were unacceptable. Results A total of 19 upper cervical spine models and 19 navigation templates were printed, and 37 pedicle screws were placed in each group. Type I screw placements in the Test group totaled 32; Type II totaled 3; and Type III totaled 2, for an acceptable rate of 94.60%. Type I screw placements in the Control group totaled 23; Type II totaled 3; and Type III totaled 11, for an acceptable rate of 70.27%. The acceptable rate in the Test group was higher than that in the Control group, and the operation time and fluoroscopic frequency per screw were lower than in the Control group. 
    Conclusion The individualized 3D printing navigation template for pedicle screw fixation is easy and safe, with a high success rate in upper cervical spine surgery. PMID:28152039
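    The grading rule above is mechanical enough to express in code. The sketch below encodes it under two assumptions of ours: a perforation of 0 mm means the screw is fully within the pedicle, and the boundary case of exactly 1 mm, which the abstract leaves unspecified, is graded Type III.

```python
# Illustrative encoding of the placement grading rule:
#   Type I   - screw fully within the vertebral pedicle
#   Type II  - cortex perforation < 1 mm with stable fixation, no revision
#   Type III - perforation > 1 mm, or unstable fixation needing revision
# Types I and II count as acceptable. Function names are our own; the
# exactly-1-mm boundary is not specified in the abstract and is treated
# as Type III here.

def grade_placement(perforation_mm, stable):
    if perforation_mm == 0:
        return "I"
    if perforation_mm < 1 and stable:
        return "II"
    return "III"

def acceptable_rate(placements):
    """placements: list of (perforation_mm, stable) tuples -> percent."""
    grades = [grade_placement(p, s) for p, s in placements]
    acceptable = sum(g in ("I", "II") for g in grades)
    return round(100 * acceptable / len(grades), 2)
```

    Feeding in the test group's reported counts (32 Type I, 3 Type II, 2 Type III) reproduces an acceptable rate of about 94.6%, matching the abstract up to rounding.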

  6. Individualized 3D printing navigation template for pedicle screw fixation in upper cervical spine.

    PubMed

    Guo, Fei; Dai, Jianhao; Zhang, Junxiang; Ma, Yichuan; Zhu, Guanghui; Shen, Junjie; Niu, Guoqi

    2017-01-01

    Pedicle screw fixation in the upper cervical spine is a difficult and high-risk procedure. The screw is difficult to place rapidly and accurately, and misplacement can lead to serious injury of the spinal cord or vertebral artery. The aim of this study was to design an individualized 3D printing navigation template for pedicle screw fixation in the upper cervical spine. Using thin-slice CT data, we employed computer software to design the navigation template for pedicle screw fixation in the upper cervical spine (atlas and axis). The upper cervical spine models and navigation templates were produced by a 3D printer at equal proportion, two sets for each case. In one set (Test group), pedicle screw placement was guided by the navigation template; in the other set (Control group), the screws were placed under fluoroscopy. According to the degree of pedicle cortex perforation and whether the screw needed to be revised, the fixation results were divided into three types: Type I, the screw is fully located within the vertebral pedicle; Type II, the degree of pedicle cortex perforation is <1 mm, with good internal fixation stability and no need for revision; Type III, the degree of pedicle cortex perforation is >1 mm, or internal fixation stability is poor and revision is needed. Type I and Type II were acceptable placements; Type III placements were unacceptable. A total of 19 upper cervical spine models and 19 navigation templates were printed, and 37 pedicle screws were placed in each group. Type I screw placements in the Test group totaled 32; Type II totaled 3; and Type III totaled 2, for an acceptable rate of 94.60%. Type I screw placements in the Control group totaled 23; Type II totaled 3; and Type III totaled 11, for an acceptable rate of 70.27%. The acceptable rate in the Test group was higher than that in the Control group, and the operation time and fluoroscopic frequency per screw were lower than in the Control group. 
    The individualized 3D printing navigation template for pedicle screw fixation is easy and safe, with a high success rate in upper cervical spine surgery.

  7. Big Software for SmallSats: Adapting cFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats are driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship-satellite-level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.

  8. Architecture for Survivable System Processing (ASSP)

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.

    1991-11-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High-technology developments in hardware, software, and networking models address the technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity and interoperability of an open system architecture, and are being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, the program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  9. Architecture for Survivable System Processing (ASSP)

    NASA Technical Reports Server (NTRS)

    Wood, Richard J.

    1991-01-01

    The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground-based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High-technology developments in hardware, software, and networking models address the technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP), and security. Hardware and software design, development, and implementation focus on the interconnectivity and interoperability of an open system architecture, and are being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, the program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), the American National Standards Institute (ANSI), the Society of Automotive Engineers (SAE), and the Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.

  10. Reasons for non-adherence to cardiometabolic medications, and acceptability of an interactive voice response intervention in patients with hypertension and type 2 diabetes in primary care: a qualitative study

    PubMed Central

    Sutton, Stephen

    2017-01-01

    Objectives This study explored the reasons for patients' non-adherence to cardiometabolic medications, and tested the acceptability of interactive voice response (IVR) as a way to address these reasons and support patients between primary care consultations. Design, method, participants and setting The study included face-to-face interviews with 19 patients with hypertension and/or type 2 diabetes mellitus, selected from primary care databases and presumed to be non-adherent. Thirteen of these patients pretested elements of the IVR intervention a few months later, using a think-aloud protocol. Five practice nurses were interviewed. Data were analysed using multiperspective and longitudinal thematic analysis. Results Negative beliefs about taking medications, the complexity of prescribed medication regimens, and a limited ability to cope with the underlying affective state, within challenging contexts, were mentioned as important reasons for non-adherence. Nurses reported time constraints in addressing each patient's different reasons for non-adherence, and limited efficacy in supporting patients between primary care consultations. Patients gave positive experiential feedback about the IVR messages as a way to support them in taking their medicines, and provided recommendations for intervention content and delivery mode. Specifically, they liked the voice delivering the messages and the voice recognition software. For intervention content, they preferred messages that were tailored, including messages with 'information about health consequences', 'action plans', or simple reminders to perform the behaviour. Conclusions Patients with hypertension and/or type 2 diabetes, and practice nurses, suggested messages tailored to each patient's reasons for non-adherence. Participants recommended IVR as an acceptable platform to support adherence to cardiometabolic medications between primary care consultations. 
    Future studies could usefully test the acceptability and feasibility of tailored IVR interventions to support medication adherence, as an adjunct to primary care. PMID:28801402

  11. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plum Brook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which provides transparency of the software application layers to the underlying hardware regardless of test facility location, and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis development and deployment.
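    The translation-layer idea, in which application software is written against an abstract interface with facility-specific drivers underneath, can be sketched as follows. The class and method names are our invention for illustration, not the actual Stennis DAS design.

```python
# Minimal sketch of a hardware "translation layer": the application layer
# talks only to an abstract data-acquisition interface, and per-facility
# drivers adapt it to local hardware. All names here are hypothetical.
from abc import ABC, abstractmethod

class DataAcquisitionDevice(ABC):
    @abstractmethod
    def read_channel(self, channel: str) -> float: ...

class SimulatedDevice(DataAcquisitionDevice):
    """Stand-in driver, e.g. for checkout before hardware is connected."""
    def __init__(self, values):
        self._values = values

    def read_channel(self, channel):
        return self._values[channel]

def sample_all(device: DataAcquisitionDevice, channels):
    """Application layer: unchanged no matter which driver is plugged in."""
    return {ch: device.read_channel(ch) for ch in channels}
```

    Porting to another facility would then mean writing one new driver class, while the application layer above it stays the same.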

  12. Listening to the student voice to improve educational software

    PubMed Central

    van Wyk, Mari; van Ryneveld, Linda

    2017-01-01

    Academics often develop software for teaching and learning purposes with the best of intentions, only to be disappointed by the low acceptance rate of the software by their students once it is implemented. In this study, the focus is on software that was designed to enable veterinary students to record their clinical skills. A pilot of the software clearly showed that the program had not been received as well as had been anticipated, and therefore the researchers used a group interview and a questionnaire with closed-ended and open-ended questions to obtain the students’ feedback. The open-ended questions were analysed with conceptual content analysis, and themes were identified. Students made valuable suggestions about what they regarded as important considerations when a new software program is introduced. The most important lesson learnt was that students cannot always predict their needs accurately if they are asked for input prior to the development of software. For that reason student input should be obtained on a continuous and regular basis throughout the design and development phases. PMID:28678678

  13. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
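    As a toy illustration of mechanically checking code against a coding-standard rule, the sketch below flags `goto` in C source text with a regular expression. Real certification work uses full static source code analyzers, and this single rule is only an example of the idea of machine-checkable standards.

```python
import re

# Toy sketch of checking source text against one coding-standard rule
# (flagging `goto` in C). Real checks use proper static analyzers; a
# regex is used here only to illustrate mechanical rule enforcement.
GOTO = re.compile(r"\bgoto\b")

def check_no_goto(source):
    """Return 1-based line numbers where the rule is violated."""
    return [n for n, line in enumerate(source.splitlines(), 1)
            if GOTO.search(line)]
```

    A certification pipeline would run many such checks on every commit and report violations back to the coder, alongside the analyzer results described above.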

  14. Full Life-Cycle Defect Management Assessment: Initial Inspection Data Collection Results and Research Questions for Further Study

    NASA Technical Reports Server (NTRS)

    Shull, Forrest; Feldmann, Raimund; Haingaertner, Ralf; Regardie, Myrna; Seaman, Carolyn

    2007-01-01

    It is often the case in software projects that when schedule and budget resources are limited, Verification and Validation (V&V) activities suffer. Fewer V&V activities can be afforded, and short-term challenges can result in V&V activities being scaled back or dropped altogether. As a result, too often the default solution is to postpone activities for improving software quality until too late in the life-cycle, relying on late-term code inspections followed by thorough testing activities to reduce defect counts to acceptable levels. As many project managers realize, however, this is a resource-intensive way of achieving the required quality for software. The Full Life-cycle Defect Management Assessment Initiative, funded by NASA's Office of Safety and Mission Assurance under the Software Assurance Research Program, aims to address these problems by: (1) improving the effectiveness of early life-cycle V&V activities to make their benefits more attractive to team leads; specifically, we focus on software inspection, a proven method that can be applied to any software work product, long before executable code has been developed; (2) better communicating this effectiveness to software development teams, along with suggestions for parameters to improve in the future to increase effectiveness; and (3) analyzing the impact of early life-cycle V&V on the effectiveness and cost of late life-cycle V&V activities, such as testing, in order to make the tradeoffs more apparent. This white paper reports on an initial milestone in this work, the development of a preliminary model of inspection effectiveness across multiple NASA Centers. This model contributes toward our project goals by: (1) allowing an examination of inspection parameters, across different types of projects and different work products, for an analysis of factors that impact defect detection effectiveness; (2) allowing a comparison of this NASA-specific model to existing recommendations in the literature on how to plan effective inspections; and (3) forming a baseline model which can be extended to incorporate factors describing the numbers and types of defects that are missed by inspections, how such defects flow downstream through software development phases, and how effectively they can be caught by testing activities in the late stages of development. The model has been implemented in a prototype web-enabled decision-support tool which allows developers to enter their inspection data and receive feedback based on a comparison against the model. The tool also allows users to access reusable materials (such as checklists) from projects included in the baseline. Both the tool itself and the model underlying it will continue to be extended throughout the remainder of this initiative. As results of analyzing inspection effectiveness for defect containment are determined, they can be shared via the tool and also via updates to existing training courses on metrics and software inspections. Moreover, the tool will help satisfy key CMMI requirements for the NASA Centers, as it will enable NASA to take a global view across peer review results for various types of projects to identify systemic problems. This analysis can result in continuous improvements to the approach to verification.
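    A decision-support tool of the kind described, which compares a project's inspection data against a baseline model and returns feedback, might be sketched as below. The parameter names and baseline ranges are invented for illustration; the actual NASA model is not given in this abstract.

```python
# Hypothetical sketch of baseline-comparison feedback: a project's
# inspection parameters are checked against ranges observed across past
# projects, and out-of-range parameters are flagged as candidates for
# improvement. Parameter names and ranges below are illustrative only.
BASELINE = {
    "pages_per_hour": (5, 20),       # preparation/inspection rate
    "inspectors": (3, 6),            # team size
    "defects_per_page": (0.5, 3.0),  # detection yield
}

def feedback(inspection):
    """Return advice strings for parameters outside the baseline range."""
    advice = []
    for name, (lo, hi) in BASELINE.items():
        value = inspection.get(name)
        if value is None:
            continue
        if value < lo:
            advice.append(f"{name}={value} is below the baseline range [{lo}, {hi}]")
        elif value > hi:
            advice.append(f"{name}={value} is above the baseline range [{lo}, {hi}]")
    return advice
```

    A web front end would collect the inspection record, run this comparison, and present the advice together with reusable materials such as checklists.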

  15. SLS Flight Software Testing: Using a Modified Agile Software Testing Approach

    NASA Technical Reports Server (NTRS)

    Bolton, Albanie T.

    2016-01-01

    NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth orbit (BEO). The world's most powerful rocket, SLS will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA, using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for the development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile approach to the testing and verification of software. The agile software test method opens the door for regular short sprint release cycles. The basic premise of agile software development and testing is that software is developed iteratively and incrementally, with requirements and solutions evolving through collaboration between cross-functional teams. Because testing and development are done incrementally, each release can add features and value. This value can be seen throughout the T&V team processes that are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves issues found in software with designers at an earlier stage rather than in a later release, and team members gain increased knowledge of the system architecture by interfacing with designers. The SLS flight software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. 
    Through agile testing, there has been increased value from favoring individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to change through controlled planning. The presentation will describe the agile testing methodology as practiced by the SLS FSW Test and Verification team at Marshall Space Flight Center.

  16. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy and to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software, (1) keeping the numerical relationship between the methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement (accuracy, sensitivity, and specificity) were calculated. Finally, a novel methodology to calculate and display accuracy values, the V-plot of accuracies, was presented. No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
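    The paper's central point can be reproduced with a few lines of simulation: hold the numerical relationship between two methods fixed and vary only the sample distribution, and the measured "accuracy" of agreement on a cut-off changes. The cut-off, offset, and distributions below are our own illustrative choices, not the authors' simulation settings.

```python
import random

# Hold the relationship between methods fixed (the "rapid" method reads a
# constant 5 units above the "gold" method) and vary only the sample
# distribution; agreement on a diagnostic cut-off then changes even
# though the tests themselves have not. All numbers are illustrative.
random.seed(42)
CUTOFF = 200.0  # hypothetical diagnostic threshold, mg/dL

def rapid(gold):
    return gold + 5.0  # fixed numerical relationship between methods

def accuracy(sample):
    """Fraction of subjects where both methods agree on the cut-off."""
    agree = sum((rapid(g) > CUTOFF) == (g > CUTOFF) for g in sample)
    return agree / len(sample)

# Sample 1: values clustered at the cut-off -> many borderline subjects.
near_cutoff = [random.gauss(200, 10) for _ in range(10_000)]
# Sample 2: values far from the cut-off -> almost no borderline subjects.
far_from_cutoff = ([random.gauss(150, 10) for _ in range(5_000)] +
                   [random.gauss(250, 10) for _ in range(5_000)])

# Same tests, same relationship, noticeably different "accuracy".
acc_near = accuracy(near_cutoff)
acc_far = accuracy(far_from_cutoff)
```

    With these distributions, accuracy comes out roughly 0.8 for the borderline-heavy sample and near 1.0 for the polarized one, which is exactly the severity-distribution dependence the authors describe.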

  17. Overview of software development at the parabolic dish test site

    NASA Technical Reports Server (NTRS)

    Miyazono, C. K.

    1985-01-01

    The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed, including software in support of a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.

  18. Stereoscopy in orthopaedics

    NASA Astrophysics Data System (ADS)

    Tan, S. L. E.

    2005-03-01

    Stereoscopy was used in medicine as long ago as 1898 but has not gained widespread acceptance, except for a peak in the 1930s. It retains a use in orthopaedics in the form of Radiostereogrammetrical Analysis (RSA), though this is now done by computer software without using stereopsis. Combining computer-assisted stereoscopic displays with both conventional plain films and reconstructed volumetric axial data, we are reassessing the use of stereoscopy in orthopaedics. Applications include use in developing nations or rural settings, in erect patients where axial imaging cannot be used, and in complex deformity and trauma reconstruction. Extension into orthopaedic endoscopic systems and teaching aids (e.g., operative videos) are further possibilities. The benefits of stereoscopic vision, namely increased perceived resolution and depth perception, can help orthopaedic surgeons achieve more accurate diagnosis and better pre-operative planning. Limitations of currently available stereoscopic displays that need to be addressed before widespread acceptance include availability of hardware and software, loss of resolution, the need for glasses, and image "ghosting". Journal publication, the traditional mode of information dissemination in orthopaedics, is also viewed as a hindrance to the acceptance of stereoscopy: it does not deliver the full impact of stereoscopy, and "hands-on" demonstrations are needed.

  19. Adaptive software architecture based on confident HCI for the deployment of sensitive services in Smart Homes.

    PubMed

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-03-25

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature.

  20. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. The problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units and physical and mathematical quantities). It also implements programming language semantics to propagate these properties through the code. The prototype's ability to elucidate code and detect errors is demonstrated on state-of-the-art scientific codes.

  1. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article, we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented, and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any thermal imaging camera's source file format.

  2. A portable structural analysis library for reaction networks.

    PubMed

    Bedaso, Yosef; Bergmann, Frank T; Choi, Kiri; Medley, Kyle; Sauro, Herbert M

    2018-07-01

    The topology of a reaction network can have a significant influence on the network's dynamical properties. Such influences can include constraints on network flows and concentration changes or, more insidiously, result in the emergence of feedback loops. These effects are due entirely to mass constraints imposed by the network configuration and are important considerations before any dynamical analysis is made. Most established simulation software tools carry out some kind of structural analysis of a network before any attempt is made at dynamic simulation. In this paper, we describe a portable software library, libStructural, that can carry out a variety of popular structural analyses, including conservation analysis, flux dependency analysis, and enumeration of elementary modes. The library employs robust algorithms that allow it to be used on large networks with more than two thousand nodes. The library accepts either a raw or fully labeled stoichiometry matrix or models written in SBML format. The software is written in standard C/C++ and comes with extensive on-line documentation and a test suite. The software is available for Windows and Mac OS X, and can be compiled easily on any Linux operating system. A language binding for Python is also available through the pip package manager, making it simple to install on any standard Python distribution. The bulk of the source code is licensed under the open source BSD license, with other parts under either the MIT license or, more simply, the public domain. All source is available on GitHub (https://github.com/sys-bio/Libstructural). Copyright © 2018 Elsevier B.V. All rights reserved.
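    Conservation analysis of the kind the library performs amounts to finding the left null space of the stoichiometry matrix: each basis vector is a conserved moiety. A minimal sketch of the idea (not libStructural's actual API) using NumPy's SVD:

    ```python
    import numpy as np

    def conservation_laws(N, tol=1e-10):
        """Basis for the left null space of stoichiometry matrix N.
        Each returned row l satisfies l @ N == 0, so the weighted sum
        of species concentrations l . x is conserved by the dynamics."""
        N = np.asarray(N, dtype=float)
        U, s, _ = np.linalg.svd(N)
        rank = int(np.sum(s > tol))
        # Columns of U beyond the numerical rank span the left null space.
        return U[:, rank:].T

    # Reversible isomerisation A <-> B, written as two irreversible reactions.
    # Rows = species (A, B); columns = reactions (A->B, B->A).
    N = [[-1,  1],
         [ 1, -1]]
    laws = conservation_laws(N)  # one law, proportional to [1, 1]: A + B constant
    ```

    Here the single conserved moiety recovers the obvious mass constraint that the total A + B never changes.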

  3. Evaluation of the BD BACTEC FX blood volume monitoring system as a continuous quality improvement measure.

    PubMed

    Coorevits, L; Van den Abeele, A-M

    2015-07-01

    The yield of blood cultures is proportional to the volume of blood cultured. We evaluated an automatic blood volume monitoring system, recently developed by Becton Dickinson within its BACTEC EpiCenter module, which calculates mean volumes of negative aerobic bottles and generates boxplots and histograms. First, we evaluated the filling degree of 339 aerobic glass blood culture bottles by calculating the weight-based volume of each bottle. A substantial proportion of the bottles (48.3%) were inadequately filled. Evaluation of the accuracy of the monitoring system showed a mean bias of -1.4 mL (-15.4%). Additional evaluation, using the amended software on 287 aerobic blood culture bottles, resulted in an acceptable mean deviation of -0.3 mL (-3.3%). The new software version was also tested on 200 of the recently introduced plastic bottles, which will replace the glass bottles in the near future, showing a mean deviation of +2.8 mL (+26.7%). In conclusion, the mean calculated volumes can be used for the training of a single phlebotomist. However, filling problems appear to be masked when using them for groups of phlebotomists or on wards. Here, visual interpretation of boxplots and histograms can serve as a useful tool to observe the spread of filling degrees and to develop a continuous improvement program. Readjustment of the software has proven necessary for use with plastic bottles. Based on our findings, BD has developed further adjustments to the software for validated use with plastic bottles, which will be released soon.
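    The headline figures are mean biases of device-reported volume against weight-derived reference volume. The arithmetic can be sketched as follows; the helper and the toy data (chosen to mimic the reported -1.4 mL bias) are illustrative, not part of BD's software:

    ```python
    def mean_bias(measured, reference):
        """Mean bias of device-reported volumes against weight-derived
        reference volumes, in mL and as a percentage of the reference mean."""
        diffs = [m - r for m, r in zip(measured, reference)]
        bias_ml = sum(diffs) / len(diffs)
        bias_pct = 100.0 * bias_ml / (sum(reference) / len(reference))
        return bias_ml, bias_pct

    # Toy data: the device under-reads each bottle by 1.4 mL against a ~9 mL fill.
    reference = [9.0, 9.2, 8.8, 9.1]
    measured  = [7.6, 7.8, 7.4, 7.7]
    bias_ml, bias_pct = mean_bias(measured, reference)  # about -1.4 mL
    ```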

  4. "On-screen" writing and composing: two years experience with Manuscript Manager, Apple II and IBM-PC versions.

    PubMed

    Offerhaus, L

    1989-06-01

    The problems of the direct composition of a biomedical manuscript on a personal computer are discussed. Most word processing software is unsuitable because literature references, once stored, cannot be rearranged if major changes are necessary. These obstacles have been overcome in Manuscript Manager, a combination of word processing and database software. As it follows Council of Biology Editors and Vancouver rules, the printouts should be technically acceptable to most leading biomedical journals.

  5. CrossTalk: The Journal of Defense Software Engineering. Volume 26, Number 3, May-June 2013

    DTIC Science & Technology

    2013-06-01

    in which the pieces are being shaped at the same time they are being assembled. If I am honest, software is probably more like a Rube Goldberg ...losing the focus on architecting activities that help maintain the desired state, enable cost savings, and ensure delivery tempo when other agile...master's degrees and working as developers), accepted by the same instructor, with counted LOC identically, yielded variations as great as 22:1, and

  6. Designing Test Suites for Software Interactions Testing

    DTIC Science & Technology

    2004-01-01

    the annual cost of insufficient software testing methods and tools in the United States is between 22.2 and 59.5 billion US dollars [13, 14]. This study...10 (2004), 1–29. [21] Cheng, C., Dumitrescu, A., and Schroeder, P. Generating small combinatorial test suites to cover input-output relationships... Proceedings of the Conference on the Future of Software Engineering (May 2000), pp. 61–72. [51] Hartman, A. Software and hardware testing using

  7. Integration, acceptance testing, and clinical operation of the Medical Information, Communication and Archive System, phase II.

    PubMed

    Smith, E M; Wandtke, J; Robinson, A

    1999-05-01

    The Medical Information, Communication and Archive System (MICAS) is a multivendor, incremental approach to a picture archiving and communications system (PACS). It is a multimodality integrated image management system that is seamlessly integrated with the radiology information system (RIS). Phase II enhancements of MICAS include a permanent archive, automated workflow, study caches, and Microsoft (Redmond, WA) Windows NT diagnostic workstations, with all components adhering to Digital Imaging and Communications in Medicine (DICOM) standards. MICAS is designed as an enterprise-wide PACS to provide images and reports throughout the Strong Health healthcare network. Phase II includes the addition of a Cemax-Icon (Fremont, CA) archive, a PACS broker (Mitra, Waterloo, Canada), and an interface (IDX PACSlink, Burlington, VT) to the RIS (IDXrad), plus the conversion of the UNIX-based redundant array of inexpensive disks (RAID) 5 temporary archives of phase I to NT-based RAID 0 DICOM modality-specific study caches (ImageLabs, Bedford, MA). The phase I acquisition engines and workflow management software were uninstalled, and the Cemax archive manager (AM) assumed these functions. The existing ImageLabs UNIX-based viewing software was enhanced and converted to an NT-based DICOM viewer. Installation of phase II hardware and software and integration with existing components began in July 1998. Phase II of MICAS demonstrates that a multivendor, open-system, incremental approach to PACS is feasible, cost-effective, and has significant advantages over a single-vendor implementation.

  8. 77 FR 51880 - Requirements for Maintenance of Inspections, Tests, Analyses, and Acceptance Criteria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... Maintenance of Inspections, Tests, Analyses, and Acceptance Criteria AGENCY: Nuclear Regulatory Commission... construction activities through inspections, tests, analyses, and acceptance criteria (ITAAC) under a combined... inspections, tests, or analyses were performed as required, or that acceptance criteria are met, and to notify...

  9. Quality Assurance of Software Used In Aircraft Or Related Products

    DOT National Transportation Integrated Search

    1993-02-01

    This advisory circular (AC) provides an acceptable means, but not the only means, to show compliance with the quality assurance requirements of Federal Aviation Regulations (FAR) Part 21, Certification Procedures for Products and Parts, as applicable...

  10. Text File Comparator

    NASA Technical Reports Server (NTRS)

    Kotler, R. S.

    1983-01-01

    The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.
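    A modern equivalent of IFCOMP's behaviour, reading two text files and listing their differences, can be sketched with Python's standard difflib; the unified-diff format here stands in for the original pseudo-update listing:

    ```python
    import difflib

    def compare(a_lines, b_lines):
        """Line-by-line comparison of two text files, returned as a
        unified diff -- a modern stand-in for IFCOMP's pseudo-update form."""
        return list(difflib.unified_diff(a_lines, b_lines,
                                         fromfile="old.f", tofile="new.f",
                                         lineterm=""))

    old = ["      PROGRAM MAIN", "      X = 1", "      END"]
    new = ["      PROGRAM MAIN", "      X = 2", "      END"]
    for line in compare(old, new):
        print(line)
    ```

    Lines removed from the old file are prefixed with `-` and lines added in the new file with `+`, much as an update deck records deletions and insertions against the source.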

  11. Advanced quality systems : probabilistic optimization for profit (Prob.O.Prof) software

    DOT National Transportation Integrated Search

    2009-04-01

    Contractors constantly have to make decisions regarding how to maximize profit and minimize risk on paving projects. With more and more States adopting incentive/disincentive pay adjustment provisions for quality, as measured by various acceptance qu...

  12. ETICS: the international software engineering service for the grid

    NASA Astrophysics Data System (ADS)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  13. Apollo experience report environmental acceptance testing

    NASA Technical Reports Server (NTRS)

    Laubach, C. H. M.

    1976-01-01

    Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.

  14. Integrating Testing into Software Engineering Courses Supported by a Collaborative Learning Environment

    ERIC Educational Resources Information Center

    Clarke, Peter J.; Davis, Debra; King, Tariq M.; Pava, Jairo; Jones, Edward L.

    2014-01-01

    As software becomes more ubiquitous and complex, the cost of software bugs continues to grow at a staggering rate. To remedy this situation, there needs to be major improvement in the knowledge and application of software validation techniques. Although there are several software validation techniques, software testing continues to be one of the…

  15. Spacecraft attitude control using a smart control system

    NASA Technical Reports Server (NTRS)

    Buckley, Brian; Wheatcraft, Louis

    1992-01-01

    Traditionally, spacecraft attitude control has been implemented using control loops written in native code for a space-hardened processor. The Naval Research Lab took this approach during the development of the Attitude Control Electronics (ACE) package. After the system was developed and delivered, NRL decided to explore alternate technologies to accomplish this same task more efficiently. The approach taken by NRL was to implement the ACE control loops using expert systems technologies. The purpose of this effort was to: (1) research the capabilities required of an expert system in processing a classic closed-loop control algorithm; (2) research the development environment required to design and test an embedded expert system; (3) research the complexity of design and development of expert systems versus a conventional approach; and (4) test the resulting systems against the flight acceptance test software for both response and accuracy. Two expert systems were selected to implement the control loops. Criteria used for the selection of the expert systems included that they had to run in both embedded and ground-based environments. Using two different expert systems allowed a comparison of the real-time capabilities, inferencing capabilities, and the ground-based development environment. The two expert systems chosen for the evaluation were Spacecraft Command Language (SCL) and NEXPERT Object. SCL is a smart control system produced for the NRL by Interface and Control Systems (ICS). SCL was developed for real-time command, control, and monitoring of a new generation of spacecraft. NEXPERT Object is a commercially available product developed by Neuron Data. Results of the effort were evaluated using the ACE test bed. The ACE test bed had been developed and used to test the original flight hardware and software using simulators and flight-like interfaces.
The test bed was used for testing the expert systems in a 'near-flight' environment. The technical approach, the system architecture, the development environments, knowledge base development, and results of this effort are detailed.

  16. 49 CFR 232.505 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Pre-revenue service acceptance testing plan. 232... § 232.505 Pre-revenue service acceptance testing plan. (a) General; submission of plan. Except as... its system the operating railroad or railroads shall submit a pre-revenue service acceptance testing...

  17. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Aerospace software test engineers have traditionally relied on their own experience and on communication with development personnel to describe the software under test and to write test cases by hand, a process that is time-consuming, inefficient, and prone to gaps. With the high-reliability model-based testing (MBT) tool developed by our company, a single modeling pass can automatically generate test case documents, efficiently and accurately. Accurately expressing requirements in a UML model depends on path generation, but existing path generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or too cumbersome, generating path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our aerospace testing experience, we developed a tailored path generation algorithm for UML graphic models of aerospace test software.
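    The underlying problem, enumerating paths through a directed graph such as a UML activity diagram while keeping loop traversal bounded, can be sketched generically. This is a plain simple-path enumeration, not the authors' tailored algorithm:

    ```python
    def all_paths(graph, start, end):
        """Enumerate start-to-end paths in a directed graph (e.g. a UML
        activity diagram). Only simple paths are produced -- a node is never
        revisited within one path -- which bounds traversal of loop back-edges."""
        paths, stack = [], [(start, [start])]
        while stack:
            node, path = stack.pop()
            if node == end:
                paths.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:  # prune cycles
                    stack.append((nxt, path + [nxt]))
        return paths

    # A branch at B plus a loop back-edge D -> B; only A-B-C-E reaches E.
    g = {"A": ["B"], "B": ["C", "D"], "C": ["E"], "D": ["B"]}
    paths = all_paths(g, "A", "E")
    ```

    A practical test-path generator would extend this with an explicit policy for how many times each loop may be unrolled, which is exactly the design trade-off the abstract describes.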

  18. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
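    The converter's core job, combining 8-bit DVI colour components into a single 16-bit luminance word for the DVP output, can be sketched as follows. The actual Delta DVP bit routing is not published; the high-byte/low-byte packing here is an assumption for illustration only:

    ```python
    def pack_luma16(high8, low8):
        """Combine two 8-bit DVI colour components into one 16-bit
        luminance word. Which component feeds which byte is an assumed
        routing, not the documented Delta DVP scheme."""
        assert 0 <= high8 <= 0xFF and 0 <= low8 <= 0xFF
        return (high8 << 8) | low8

    # e.g. one component carries the upper radiance bits, another the lower
    word = pack_luma16(0x12, 0x34)  # 0x1234
    ```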

  19. Médicarte software developed for the Quebec microprocessor health card project.

    PubMed

    Lavoie, G; Tremblay, L; Durant, P; Papillon, M J; Bérubé, J; Fortin, J P

    1995-01-01

    The Quebec Patient Smart Card Project is a Provincial Government initiative under the responsibility of the Régie de l'assurance-maladie du Québec (Quebec Health Insurance Board). Development, implementation, and assessment duties were assigned to a team from Université Laval, which in turn joined a group from the Direction de la santé publique du Bas-St-Laurent in Rimouski, where the experiment is taking place. The pilot project seeks to evaluate the use and acceptance of a microprocessor card as a way to improve the exchange of clinical information between card users and various health professionals. The card can be best described as a résumé containing information pertinent to an individual's health history. It is not a complete medical file; rather, it is a summary to be used as a starting point for a discussion between health professionals and patients. The target population is composed of persons 60 years and over, pregnant women, infants under 18 months, and the residents of a small town located in the target area, St-Fabien, regardless of age. The health professionals involved are general practitioners, specialists, pharmacists, nurses, and ambulance personnel. Participation in the project is on a voluntary basis. Each health care provider participating in the project has a personal identification number (PIN) and must use both an access card and a user card to access information. This prevents unauthorized access to a patient's card and allows the staff to sign and date information entered onto the patient card. To test the microprocessor card, we developed software based on a problem-oriented approach integrating diagnosis, investigations, treatments, and referrals. This software is not an expert system that constrains the clinician to a particular decisional algorithm. Instead, the software supports the physician in decision making. The software was developed with a graphical interface (Windows 3.1) to maximize its user friendliness. 
A version of the software was developed for each of the four groups of health care providers involved. In addition, we designed an application to interface with existing pharmaceutical software. For practical reasons, and to make it possible to differentiate between the different access profiles, the information stored on the card is divided into several blocks: Identification, Emergency, History (personal and family), Screening Tests, Vaccinations, Drug Profile, General follow-up, and some Specific follow-ups (Pregnancy, Ophthalmology, Kidney failure, Cardiology, Pediatrics, Diabetes, Pneumology, Specific parameters). Over 14,000 diagnoses and symptoms are classified with four levels of precision, the codification being based on the ICPC (International Classification for Primary Care). The software contains different applications to assist the clinician in decision making. A "Drug Advisor" helps the prescriber by detecting possible interactions between drugs, giving indications (doses) and contraindications, cautions, potential side-effects, and therapeutic alternatives. There is also a prevention module providing recommendations for vaccination and periodic examinations based on the patient's age and sex. The pharmaceutical, vaccination, and screening test data banks are updated every six months. These sections of the software are accessible to access card holders at any time, even without a patient card, and constitute in themselves an interesting clinical tool. We developed a software server (SCAM) allowing the different applications to access the data in a memory card regardless of the type of memory card used. Using a single high-level command language, this server provides a standardized utilization of memory cards from various manufacturers. It ensures the compatibility of the applications using the card as a storage medium. (abstract truncated)

  20. The clinical utility of lung clearance index in early cystic fibrosis lung disease is not impacted by the number of multiple-breath washout trials

    PubMed Central

    Foong, Rachel E.; Harper, Alana J.; King, Louise; Turkovic, Lidija; Davis, Miriam; Clem, Charles C.; Davis, Stephanie D.; Ranganathan, Sarath; Hall, Graham L.

    2018-01-01

    The lung clearance index (LCI) from the multiple-breath washout (MBW) test is a promising surveillance tool for pre-school children with cystic fibrosis (CF). Current guidelines for MBW testing recommend that three acceptable trials are required. However, success rates to achieve these criteria are low in children aged <7 years and feasibility may improve with modified pre-school criteria that accepts tests with two acceptable trials. This study aimed to determine if relationships between LCI and clinical outcomes of CF lung disease differ when only two acceptable MBW trials are assessed. Healthy children and children with CF aged 3–6 years were recruited for MBW testing. Children with CF also underwent bronchoalveolar lavage fluid collection and a chest computed tomography scan. MBW feasibility increased from 46% to 75% when tests with two trials were deemed acceptable compared with tests where three acceptable trials were required. Relationships between MBW outcomes and markers of pulmonary inflammation, infection and structural lung disease were not different between tests with three acceptable trials compared with tests with two acceptable trials. This study indicates that pre-school MBW data from two acceptable trials may provide sufficient information on ventilation distribution if three acceptable trials are not possible. PMID:29707562

  1. Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing

    NASA Astrophysics Data System (ADS)

    Srivastava, Praveen Ranjan; Pareek, Deepak

    Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Deciding when to end software testing is a crucial aspect of any software development project: a premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefit from software testing with minimum resources, so testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process-based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.

  2. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Software quality is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. The defense of management decisions can be greatly strengthened by combining engineering judgment with statistical analysis. Unlike hardware, software exhibits no wearout, and redundancy is costly, making traditional statistical analysis unsuitable for evaluating software reliability. A statistical model was developed to represent the number and types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixes required, are also presented.
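    The abstract does not name its statistical model, but a standard non-homogeneous Poisson process (NHPP) reliability model such as Goel-Okumoto illustrates how a stop-testing criterion can be derived from failure history. The parameter values below are hypothetical:

    ```python
    import math

    def expected_failures(a, b, t):
        """Goel-Okumoto NHPP mean value function m(t) = a(1 - e^{-bt}):
        expected cumulative failures after t units of testing, where a is
        the total latent fault count and b the per-fault detection rate."""
        return a * (1.0 - math.exp(-b * t))

    def remaining_failures(a, b, t):
        """Expected failures still latent after testing time t."""
        return a - expected_failures(a, b, t)

    def test_time_for_target(a, b, target_remaining):
        """Testing time at which expected remaining failures reach the
        release target: solve a * e^{-bt} = target for t."""
        return math.log(a / target_remaining) / b

    # Hypothetical: 100 latent faults, detection rate 0.05 per fault per week,
    # release when no more than 5 expected failures remain.
    t = test_time_for_target(100, 0.05, target_remaining=5.0)
    ```

    Fitting a and b to the observed failure counts (e.g. by maximum likelihood) then yields both a reliability estimate and a defensible termination time, which is the kind of criterion the article derives.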

  3. Statistics of software vulnerability detection in certification testing

    NASA Astrophysics Data System (ADS)

    Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.

    2018-05-01

    The paper discusses practical aspects of introducing methods to detect software vulnerabilities into the day-to-day activities of an accredited testing laboratory. It presents the results of validating the vulnerability detection methods in studies of open source software and of software undergoing certification testing against information security requirements, including software for communication networks. Results of the study are given showing the distribution of identified vulnerabilities by type of attack, country of origin, programming languages used in development, vulnerability detection method, and other factors. The experience of foreign information security certification systems with detecting vulnerabilities in certified software is analyzed. The main conclusion of the study is the need to implement secure software development practices in the development life cycle. Conclusions and recommendations for testing laboratories on implementing the vulnerability analysis methods are laid down.

  4. Users' acceptance and attitude in regarding electronic medical record at central polyclinic of oil industry in Isfahan, Iran.

    PubMed

    Tavakoli, Nahid; Shahin, Arash; Jahanbakhsh, Maryam; Mokhtari, Habibollah; Rafiei, Maryam

    2013-01-01

    In step with rapid changes in technology and information systems, hospitals are increasingly interested in adopting them. One of the most common systems in hospitals is the electronic medical record (EMR), one use of which is to provide better health care quality via health information technology. Before it is used, efforts should be made to identify the factors affecting the acceptance of, attitude toward, and use of this technology. The current article aimed to study the factors affecting EMR acceptance, using the technology acceptance model (TAM), at the central polyclinic of the Oil Industry in Isfahan. This was a practical, descriptive, regression-based study. The research population comprised all EMR users at the polyclinic of the Oil Industry in 2012, from which a simple random sample of 62 users was drawn. The data collection tool was a researcher-made questionnaire based on TAM. Questionnaire validity was established through content validity and the views of health information technology experts, and reliability through test-retest. The system users had a positive attitude toward using the EMR (56.6%). However, users were not very satisfied with the external (38.14%) and behavioral (47.8%) factors bearing on use of the system. Perceived ease-of-use (PEU) and perceived usefulness (PU) were at a good level. The relative lack of satisfaction with the EMR derives from factors such as appearance, screen design, data and information quality, and terminology. This study suggests improving the system and user efficiency by developing the software's external factors, so that PEU and users' attitudes shift in a positive direction.

  5. Design of a sampling plan to detect ochratoxin A in green coffee.

    PubMed

    Vargas, E A; Whitaker, T B; Dos Santos, E A; Slate, A B; Lima, F B; Franca, R C A

    2006-01-01

    The establishment of maximum limits for ochratoxin A (OTA) in coffee by importing countries requires that coffee-producing countries develop scientifically based sampling plans to assess OTA contents in lots of green coffee before coffee enters the market, thus reducing consumer exposure to OTA, minimizing the number of lots rejected, and reducing financial loss for producing countries. A study was carried out to design an official sampling plan to determine OTA in green coffee produced in Brazil. Twenty-five lots of green coffee (type 7 - approximately 160 defects) were sampled according to an experimental protocol where 16 test samples were taken from each lot (total of 16 kg) resulting in a total of 800 OTA analyses. The total, sampling, sample preparation, and analytical variances were 10.75 (CV = 65.6%), 7.80 (CV = 55.8%), 2.84 (CV = 33.7%), and 0.11 (CV = 6.6%), respectively, assuming a regulatory limit of 5 microg kg(-1) OTA and using a 1 kg sample, Romer RAS mill, 25 g sub-samples, and high performance liquid chromatography. The observed OTA distribution among the 16 OTA sample results was compared to several theoretical distributions. The 2-parameter lognormal distribution was selected to model OTA test results for green coffee as it gave the best fit across all 25 lot distributions. Specific computer software was developed using the variance and distribution information to predict the probability of accepting or rejecting coffee lots at specific OTA concentrations. The acceptance probability was used to compute an operating characteristic (OC) curve specific to a sampling plan design. The OC curve was used to predict the rejection of good lots (sellers' or exporters' risk) and the acceptance of bad lots (buyers' or importers' risk).
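    The acceptance-probability calculation described above can be sketched from the reported figures alone. A minimal illustration, assuming a 2-parameter lognormal model for a single 1 kg test result with the reported total CV of 65.6% and the 5 microg/kg limit (illustrative only, not the study's actual software):

```python
import math

def accept_probability(lot_conc, limit=5.0, cv=0.656):
    """P(accept) = P(single test result <= limit), modeling the test
    result as 2-parameter lognormal with mean equal to the true lot
    concentration (microg/kg) and the reported total CV of 65.6%."""
    sigma2 = math.log(1.0 + cv * cv)          # lognormal shape from the CV
    mu = math.log(lot_conc) - sigma2 / 2.0    # lognormal location from the mean
    z = (math.log(limit) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# operating characteristic (OC) curve: acceptance probability vs. lot concentration
oc_curve = {c: accept_probability(c) for c in (1.0, 2.5, 5.0, 10.0, 20.0)}
```

    Plotting `oc_curve` over a fine grid gives the OC curve from which sellers' risk (good lots rejected) and buyers' risk (bad lots accepted) can be read off.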

  6. A Mobile Computerized Decision Support System to Prevent Hypoglycemia in Hospitalized Patients With Type 2 Diabetes Mellitus

    PubMed Central

    Spat, Stephan; Donsa, Klaus; Beck, Peter; Höll, Bernhard; Mader, Julia K.; Schaupp, Lukas; Augustin, Thomas; Chiarugi, Franco; Lichtenegger, Katharina M.; Plank, Johannes; Pieber, Thomas R.

    2016-01-01

    Background: Diabetes management requires complex and interdisciplinary cooperation of health care professionals (HCPs). To support this complex process, IT-support is recommended by clinical guidelines. The aim of this article is to report on results from a clinical feasibility study testing the prototype of a mobile, tablet-based client-server system for computerized decision and workflow support (GlucoTab®) and to discuss its impact on hypoglycemia prevention. Methods: The system was tested in a monocentric, open, noncontrolled intervention study in 30 patients with type 2 diabetes mellitus (T2DM). The system supports HCPs in performing a basal-bolus insulin therapy. Diabetes therapy, adverse events, software errors and user feedback were documented. Safety, efficacy and user acceptance of the system were investigated. Results: Only 1.3% of blood glucose (BG) measurements were <70 mg/dl and only 2.6% were >300 mg/dl. The availability of the system (97.3%) and the rate of treatment activities documented with the system (>93.5%) were high. Only a few suggestions from the system were overruled by the users (>95.7% adherence). Evaluation of the 3 anonymous questionnaires showed that confidence in the system increased over time. The majority of users believed that treatment errors could be prevented by using this system. Conclusions: Data from our feasibility study show a significant reduction of hypoglycemia by implementing a computerized system for workflow and decision support for diabetes management, compared to a paper-based process. The system was well accepted by HCPs, as shown by the user acceptance analysis and by users' adherence to the insulin dose suggestions made by the system. PMID:27810995

  7. Integrated testing and verification system for research flight software

    NASA Technical Reports Server (NTRS)

    Taylor, R. N.

    1979-01-01

    The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.

  8. Post-abortion care and voluntary HIV counselling and testing--an example of integrating HIV prevention into reproductive health services.

    PubMed

    Rasch, Vibeke; Yambesi, Fortunata; Massawe, Siriel

    2006-05-01

    To assess the acceptance and outcome of voluntary HIV counselling and testing (VCT) among women who had an unsafe abortion. 706 women were provided with post-abortion contraceptive service and offered VCT. We collected data on socioeconomic characteristics and contraceptive use and determined the HIV status of those who accepted VCT. Using a nested case-control design, we compared women who accepted HIV testing with women who did not. To study the association between socioeconomic factors, HIV testing acceptance and condom use in more detail, we did stratified analyses based on age and marital status. 58% of the women who had an unsafe abortion accepted HIV testing. Women who earned an income were more likely to accept testing than housewives. Women who accepted testing were more likely to accept using a condom. The HIV prevalence rate was 19% among single women aged 20-24 years and 25% among single women aged 25-45 years. HIV testing and condoms were accepted by most women who had an unsafe abortion. The poor reproductive health of these women could be improved by good post-abortion care that includes contraceptive counselling, VCT and condom promotion.

  9. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities - A General Overview

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, PeggL.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for a non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer which provides the transparency of the software application layers to underlying hardware regardless of test facility location and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.

  10. 15 CFR 995.27 - Format validation software testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Format validation software testing... CERTIFICATION REQUIREMENTS FOR NOAA HYDROGRAPHIC PRODUCTS AND SERVICES CERTIFICATION REQUIREMENTS FOR... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying...

  11. The Design of Software for Three-Phase Induction Motor Test System

    NASA Astrophysics Data System (ADS)

    Haixiang, Xu; Fengqi, Wu; Jiai, Xue

    2017-11-01

    The design and development of control system software is important to three-phase induction motor test equipment and requires thorough familiarity with the test process and the control procedure of the test equipment. In this paper, the software is developed in the VB language according to the national standard (GB/T1032-2005) on three-phase induction motor test methods. The control system, the data analysis software, and the implementation of the motor test system are described individually; the system offers high automation and high accuracy.

  12. CATS, continuous automated testing of seismological, hydroacoustic, and infrasound (SHI) processing software.

    NASA Astrophysics Data System (ADS)

    Brouwer, Albert; Brown, David; Tomuta, Elena

    2017-04-01

    To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.
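    The commit-triggered testing and regression reporting described above can be illustrated with a toy regression detector; this is a generic sketch of the idea (with invented test names), not CATS code:

```python
def find_regressions(baseline, candidate):
    """Report tests that passed in the baseline run but fail in the
    candidate run (pass -> fail transitions); new or already-failing
    tests are not counted as regressions."""
    return sorted(
        name for name, passed in candidate.items()
        if not passed and baseline.get(name, False)
    )

baseline  = {"assoc": True, "locate": True, "magnitude": False}
candidate = {"assoc": True, "locate": False, "magnitude": False, "beamform": True}
regressions = find_regressions(baseline, candidate)  # only 'locate' regressed
```

    A system like CATS would run such a comparison after every scheduled build of a committed revision and report any pass-to-fail transitions against the production baseline.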

  13. Analysis of key technologies for virtual instruments metrology

    NASA Astrophysics Data System (ADS)

    Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang

    2008-12-01

    Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to the software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics with support of the powerful computing capability of the PC. Another concern is evaluation of software features such as correctness, reliability, stability, security, and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. To enable metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for the automatic tool is proposed in this paper. A survey of existing automatic tools that perform measurement uncertainty calculation, software testing and security assessment demonstrates the feasibility of the proposed framework.
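    The simulation-based evaluation of software-induced measurement uncertainty mentioned above can be sketched as a Monte Carlo loop; the input noise model and values below are illustrative assumptions, not taken from the paper:

```python
import random
import statistics

def monte_carlo_uncertainty(measure, n=20000, seed=1):
    """Estimate the output spread of a measurement algorithm by feeding it
    simulated noisy inputs (here: 16 readings of a 10.0 V signal with
    50 mV Gaussian noise) and summarizing the outputs."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    outputs = []
    for _ in range(n):
        readings = [10.0 + rng.gauss(0.0, 0.05) for _ in range(16)]
        outputs.append(measure(readings))
    return statistics.mean(outputs), statistics.stdev(outputs)

# averaging 16 readings should shrink the spread by about sqrt(16) = 4
mean_est, u = monte_carlo_uncertainty(lambda xs: sum(xs) / len(xs))
```

    The same loop can wrap any VI processing algorithm (filtering, FFT-based estimation, curve fitting) to characterize the uncertainty it contributes.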

  14. Handheld Devices: Toward a More Mobile Campus.

    ERIC Educational Resources Information Center

    Fallon, Mary A. C.

    2002-01-01

    Offers an overview of the acceptance and use of handheld personal computing devices on campus that connect wirelessly to the campus network. Considers access; present and future software applications; uses in medical education; faculty training needs; and wireless technology issues. (Author/LRW)

  15. System engineering of the Atacama Large Millimeter/submillimeter Array

    NASA Astrophysics Data System (ADS)

    Bhatia, Ravinder; Marti, Javier; Sugimoto, Masahiro; Sramek, Richard; Miccolis, Maurizio; Morita, Koh-Ichiro; Arancibia, Demián.; Araya, Andrea; Asayama, Shin'ichiro; Barkats, Denis; Brito, Rodrigo; Brundage, William; Grammer, Wes; Haupt, Christoph; Kurlandczyk, Herve; Mizuno, Norikazu; Napier, Peter; Pizarro, Eduardo; Saini, Kamaljeet; Stahlman, Gretchen; Verzichelli, Gianluca; Whyborn, Nick; Yagoubov, Pavel

    2012-09-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) will be composed of 66 high precision antennae located at 5000 meters altitude in northern Chile. This paper will present the methodology, tools and processes adopted to system engineer a project of high technical complexity, by system engineering teams that are remotely located and from different cultures, and in accordance with a demanding schedule and within tight financial constraints. The technical and organizational complexity of ALMA requires a disciplined approach to the definition, implementation and verification of the ALMA requirements. During the development phase, System Engineering chairs all technical reviews and facilitates the resolution of technical conflicts. We have developed analysis tools to analyze the system performance, incorporating key parameters that contribute to the ultimate performance, and are modeled using best estimates and/or measured values obtained during test campaigns. Strict tracking and control of the technical budgets ensures that the different parts of the system can operate together as a whole within ALMA boundary conditions. System Engineering is responsible for acceptances of the thousands of hardware items delivered to Chile, and also supports the software acceptance process. In addition, System Engineering leads the troubleshooting efforts during testing phases of the construction project. Finally, the team is conducting System level verification and diagnostics activities to assess the overall performance of the observatory. This paper will also share lessons learned from these system engineering and verification approaches.

  16. WWSSF - a worldwide study on radioisotopic renal split function: reproducibility of renal split function assessment in children.

    PubMed

    Geist, Barbara Katharina; Dobrozemsky, Georg; Samal, Martin; Schaffarich, Michael P; Sinzinger, Helmut; Staudenherz, Anton

    2015-12-01

    The split or differential renal function is the most widely accepted quantitative parameter derived from radionuclide renography. To examine the intercenter variance of this parameter, we designed a worldwide round robin test. Five selected dynamic renal studies have been distributed all over the world by e-mail. Three of these studies are anonymized patient data acquired using the EANM standardized protocol and two studies are phantom studies. In a simple form, individual participants were asked to measure renal split function as well as to provide additional information such as data analysis software, positioning of background region of interest, or the method of calculation. We received the evaluation forms from 34 centers located in 21 countries. The analysis of the round robin test yielded an overall z-score of 0.3 (a z-score below 1 reflecting a good result). However, the z-scores from several centers were unacceptably high, with values greater than 3. In particular, the studies with impaired renal function showed a wide variance. A wide variance in the split renal function was found in patients with impaired kidney function. This study indicates the ultimate importance of quality control and standardization of the measurement of the split renal function. It is especially important with respect to the commonly accepted threshold for significant change in split renal function by 10%.
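    As a rough illustration of the z-scores used above, one simple convention scores each center's split-function estimate against a reference value, scaled by the between-center spread; the exact scoring protocol of the study may differ, and the data here are invented:

```python
import statistics

def center_z_scores(results, reference):
    """z-score of each center's split-function estimate (%) against a
    reference value, scaled by the between-center spread."""
    spread = statistics.stdev(results.values())
    return {center: (value - reference) / spread
            for center, value in results.items()}

# invented data: three concordant centers and one outlier
results = {"A": 50.0, "B": 52.0, "C": 48.0, "D": 62.0}
scores = center_z_scores(results, reference=50.0)
```

    Under the study's criterion, a center with |z| below 1 is performing well, while |z| above 3 is unacceptable.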

  17. Sustaining Software-Intensive Systems

    DTIC Science & Technology

    2006-05-01

    2.2 Multi-Service Operational Test and Evaluation .......................................4 2.3 Stable Software Baseline...or equivalent document • completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if...not multi-service) • stable software production baseline • complete and current software documentation • Authority to Operate (ATO) for an

  18. Development and Flight Test of an Augmented Thrust-Only Flight Control System on an MD-11 Transport Airplane

    NASA Technical Reports Server (NTRS)

    Burcham, Frank W., Jr.; Maine, Trindel A.; Burken, John J.; Pappas, Drew

    1996-01-01

    An emergency flight control system using only engine thrust, called Propulsion-Controlled Aircraft (PCA), has been developed and flight tested on an MD-11 airplane. In this thrust-only control system, pilot flight path and track commands and aircraft feedback parameters are used to control the throttles. The PCA system was installed on the MD-11 airplane using software modifications to existing computers. Flight test results show that the PCA system can be used to fly to an airport and safely land a transport airplane with an inoperative flight control system. In up-and-away operation, the PCA system served as an acceptable autopilot capable of extended flight over a range of speeds and altitudes. The PCA approaches, go-arounds, and three landings without the use of any normal flight controls have been demonstrated, including instrument landing system-coupled hands-off landings. The PCA operation was used to recover from an upset condition. In addition, PCA was tested at altitude with all three hydraulic systems turned off. This paper reviews the principles of throttles-only flight control; describes the MD-11 airplane and systems; and discusses PCA system development, operation, flight testing, and pilot comments.
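    The idea of mapping flight-path commands and aircraft feedback to throttle position can be sketched as a simple proportional-derivative law. The gains, units, and structure below are purely illustrative assumptions, not the actual MD-11 PCA control law:

```python
def pca_throttle_step(gamma_cmd, gamma, gamma_rate, trim, kp=0.4, kd=1.2):
    """One update of a hypothetical flight-path-angle-to-throttle law:
    collective throttle position from flight-path error and its rate.
    All gains and values are illustrative only."""
    error = gamma_cmd - gamma            # flight-path angle error, degrees
    cmd = trim + kp * error - kd * gamma_rate
    return min(max(cmd, 0.0), 1.0)       # throttles saturate at idle/full

# zero error holds trim thrust; a large climb command saturates the throttles
hold = pca_throttle_step(0.0, 0.0, 0.0, trim=0.5)
climb = pca_throttle_step(3.0, 0.0, 0.0, trim=0.5)
```

    In the real system, track commands are flown analogously with differential thrust between the wing engines.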

  19. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  20. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Test Documentation for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1207, ``Test Documentation for Digital... practices for test documentation for software and computer systems as described in the Institute of...

  1. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-01

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is that with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to an assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying t-test and F-test at 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.
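    For intuition about resolving a two-component spectral overlap, a generic classical least-squares sketch (not the paper's absorptivity-centering procedure) recovers the components' contributions from a mixture spectrum, assuming Beer's-law additivity; all spectra below are synthetic:

```python
def resolve_binary_mixture(mix, pure_a, pure_b):
    """Least-squares concentrations of two components from a mixture
    spectrum, assuming absorbance additivity (Beer's law); spectra are
    absorbance lists on a common wavelength grid."""
    saa = sum(a * a for a in pure_a)
    sbb = sum(b * b for b in pure_b)
    sab = sum(a * b for a, b in zip(pure_a, pure_b))
    sma = sum(m * a for m, a in zip(mix, pure_a))
    smb = sum(m * b for m, b in zip(mix, pure_b))
    det = saa * sbb - sab * sab          # 2x2 normal-equations solve
    return ((sma * sbb - smb * sab) / det,
            (smb * saa - sma * sab) / det)

pure_a = [0.10, 0.40, 0.80, 0.30]        # synthetic unit spectra
pure_b = [0.50, 0.30, 0.20, 0.60]
mix = [2 * a + 3 * b for a, b in zip(pure_a, pure_b)]   # synthetic 2:3 mixture
ca, cb = resolve_binary_mixture(mix, pure_a, pure_b)
```

    The a-Centering method instead manipulates normalized and factorized spectra directly in the spectrophotometer software to recover each drug's D0 spectrum before reading absorbance at its maximum.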

  2. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    PubMed

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is that with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to an assay of drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying t-test and F-test at 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Acceptance-test report for El Toro Library solar heating and cooling demonstration project (SHAC no. 1501)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    A partial acceptance test was conducted on the El Toro Library Solar Energy System, and the detailed results of the various mode acceptance tests are given. All the modes tested function as designed. Collector array efficiencies were calculated at approximately 40%. Chiller COP was estimated at 0.50, with chiller loop flow rates approximately 85 to 90% of design flow. The acceptance test included visual inspection, preoperational testing and procedure verification, operational mode checkout, and performance testing. (LEW)

  4. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  5. General purpose optimization software for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1990-01-01

    The author has developed several general purpose optimization programs over the past twenty years. The earlier programs were developed as research codes and served that purpose reasonably well. However, in taking the formal step from research to industrial application programs, several important lessons have been learned. Among these are the importance of clear documentation, immediate user support, and consistent maintenance. Most important has been the issue of providing software that gives a good, or at least acceptable, design at minimum computational cost. Here, the basic issues in developing optimization software for industrial applications are outlined, and issues of convergence rate, reliability, and relative minima are discussed. Considerable feedback has been received from users, and new software is being developed to respond to identified needs. The basic capabilities of this software are outlined. A major motivation for the development of commercial grade software is ease of use and flexibility, and these issues are discussed with reference to general multidisciplinary applications. It is concluded that design productivity can be significantly enhanced by the more widespread use of optimization as an everyday design tool.
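    The relative-minima issue raised above is commonly handled by restarting a local optimizer from several points and keeping the best result. A toy multistart illustration with plain gradient descent (not the author's software; the objective is invented):

```python
def gradient_descent(grad, x0, lr=0.01, steps=2000):
    """Plain 1-D gradient descent; converges to whichever local minimum
    the starting point happens to lie near."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

f = lambda x: x**4 - 3 * x**2 + x        # two local minima, one global
grad = lambda x: 4 * x**3 - 6 * x + 1

# multistart: restart from several points and keep the best result,
# a common guard against accepting a relative (local) minimum
starts = [-2.0, -0.5, 0.5, 2.0]
best = min((gradient_descent(grad, s) for s in starts), key=f)
```

    A single unlucky start converges to the shallower minimum near x = 1.13, while the multistart run finds the global minimum near x = -1.30.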

  6. An Interactive, Mobile-Based Tool for Personal Social Network Data Collection and Visualization Among a Geographically Isolated and Socioeconomically Disadvantaged Population: Early-Stage Feasibility Study With Qualitative User Feedback.

    PubMed

    Eddens, Katherine S; Fagan, Jesse M; Collins, Tom

    2017-06-22

    Personal social networks have a profound impact on our health, yet collecting personal network data for use in health communication, behavior change, or translation and dissemination interventions has proved challenging. Recent advances in social network data collection software have reduced the burden of network studies on researchers and respondents alike, yet little testing has occurred to discover whether these methods are: (1) acceptable to a variety of target populations, including those who may have limited experience with technology or limited literacy; and (2) practical in the field, specifically in areas that are geographically and technologically disconnected, such as rural Appalachian Kentucky. We explored the early-stage feasibility (Acceptability, Demand, Implementation, and Practicality) of using innovative, interactive, tablet-based network data collection and visualization software (OpenEddi) in field collection of personal network data in Appalachian Kentucky. A total of 168 rural Appalachian women who had previously participated in a study on the use of a self-collected vaginal swab (SCVS) for human papillomavirus testing were recruited by community-based nurse interviewers between September 2013 and August 2014. Participants completed egocentric network surveys via OpenEddi, which captured social and communication network influences on participation in, and recruitment to, the SCVS study. After study completion, we conducted a qualitative group interview with four nurse interviewers and two participants in the network study. Using this qualitative data, and quantitative data from the network study, we applied guidelines from Bowen et al to assess feasibility in four areas of early-stage development of OpenEddi: Acceptability, Demand, Implementation, and Practicality. Basic descriptive network statistics (size, edges, density) were analyzed using RStudio. 
OpenEddi was perceived as fun, novel, and superior to other data collection methods or tools. Respondents enjoyed the social network survey component, and visualizing social networks produced thoughtful responses from participants about leveraging or changing network content and structure for specific health-promoting purposes. Areas for improved literacy and functionality of the tool were identified. However, technical issues led to substantial (50%) data loss, limiting the success of its implementation from a researcher's perspective, and hindering practicality in the field. OpenEddi is a promising data collection tool for use in geographically isolated and socioeconomically disadvantaged populations. Future development will mitigate technical problems, improve usability and literacy, and test new methods of data collection. These changes will support goals for use of this tool in the delivery of network-based health communication and social support interventions to socioeconomically disadvantaged populations. ©Katherine S Eddens, Jesse M Fagan, Tom Collins. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 22.06.2017.
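    The basic descriptive network statistics mentioned above (size, edges, density) are straightforward to compute; a generic sketch with invented ties, not the study's RStudio analysis:

```python
def network_stats(edges):
    """Basic egocentric network statistics: size, edge count, density
    (undirected ties, no self-loops)."""
    nodes = {n for edge in edges for n in edge}
    n, e = len(nodes), len(edges)
    possible = n * (n - 1) / 2
    return {"size": n, "edges": e,
            "density": e / possible if possible else 0.0}

# invented alter-alter ties from one respondent's personal network
ties = [("kin1", "kin2"), ("kin1", "friend1"), ("friend1", "friend2")]
stats = network_stats(ties)
```

    Density here is the fraction of possible ties actually present, the same quantity a respondent sees visualized as how "tightly knit" their network is.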

  7. An Interactive, Mobile-Based Tool for Personal Social Network Data Collection and Visualization Among a Geographically Isolated and Socioeconomically Disadvantaged Population: Early-Stage Feasibility Study With Qualitative User Feedback

    PubMed Central

    Fagan, Jesse M; Collins, Tom

    2017-01-01

    Background Personal social networks have a profound impact on our health, yet collecting personal network data for use in health communication, behavior change, or translation and dissemination interventions has proved challenging. Recent advances in social network data collection software have reduced the burden of network studies on researchers and respondents alike, yet little testing has occurred to discover whether these methods are: (1) acceptable to a variety of target populations, including those who may have limited experience with technology or limited literacy; and (2) practical in the field, specifically in areas that are geographically and technologically disconnected, such as rural Appalachian Kentucky. Objective We explored the early-stage feasibility (Acceptability, Demand, Implementation, and Practicality) of using innovative, interactive, tablet-based network data collection and visualization software (OpenEddi) in field collection of personal network data in Appalachian Kentucky. Methods A total of 168 rural Appalachian women who had previously participated in a study on the use of a self-collected vaginal swab (SCVS) for human papillomavirus testing were recruited by community-based nurse interviewers between September 2013 and August 2014. Participants completed egocentric network surveys via OpenEddi, which captured social and communication network influences on participation in, and recruitment to, the SCVS study. After study completion, we conducted a qualitative group interview with four nurse interviewers and two participants in the network study. Using this qualitative data, and quantitative data from the network study, we applied guidelines from Bowen et al to assess feasibility in four areas of early-stage development of OpenEddi: Acceptability, Demand, Implementation, and Practicality. Basic descriptive network statistics (size, edges, density) were analyzed using RStudio. 
Results OpenEddi was perceived as fun, novel, and superior to other data collection methods or tools. Respondents enjoyed the social network survey component, and visualizing social networks produced thoughtful responses from participants about leveraging or changing network content and structure for specific health-promoting purposes. Areas for improved literacy and functionality of the tool were identified. However, technical issues led to substantial (50%) data loss, limiting the success of its implementation from a researcher’s perspective, and hindering practicality in the field. Conclusions OpenEddi is a promising data collection tool for use in geographically isolated and socioeconomically disadvantaged populations. Future development will mitigate technical problems, improve usability and literacy, and test new methods of data collection. These changes will support goals for use of this tool in the delivery of network-based health communication and social support interventions to socioeconomically disadvantaged populations. PMID:28642217

  8. Ethical education in software engineering: responsibility in the production of complex systems.

    PubMed

    Génova, Gonzalo; González, M Rosario; Fraga, Anabel

    2007-12-01

    Among the various contemporary schools of moral thinking, consequence-based ethics, as opposed to rule-based, seems to have good acceptance among professionals such as software engineers. But naïve consequentialism is intellectually too weak to serve as a practical guide in the profession. Besides, the complexity of software systems makes it very hard to know in advance the consequences that will derive from professional activities in the production of software. Therefore, following the spirit of well-known codes of ethics such as the ACM/IEEE's, we advocate a more solid position in the ethical education of software engineers, which we call 'moderate deontologism', that takes into account both rules and consequences to assess the goodness of actions, and at the same time pays adequate consideration to the absolute values of human dignity. In order to educate responsible professionals, however, this position should be complemented with a pedagogical approach to virtue ethics.

  9. Testing of Hand-Held Mine Detection Systems

    DTIC Science & Technology

    2015-01-08

    ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact the test design. Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test

  10. Strategies for Teaching Internet Ethics.

    ERIC Educational Resources Information Center

    Rader, Martha H.

    2002-01-01

    Ten strategies for teaching Internet ethics are as follows: establish acceptable use policy; communicate ethical codes; model behaviors and values; encourage discussion of ethical issues; reinforce ethical conduct; monitor student behavior; secure systems and software; discourage surfing without supervision; monitor e-mail and websites; and…

  11. SSE software test management STM capability: Using STM in the Ground Systems Development Environment (GSDE)

    NASA Technical Reports Server (NTRS)

    Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo

    1992-01-01

    This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. It is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.

  12. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
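    A minimal sketch of the kind of test such Python-based tools run (the astronomy helper and its expected values are invented for this illustration; pytest, one widely used runner, would auto-discover the `test_*` functions):

    ```python
    # Hypothetical function under test plus two pytest-style tests.
    # Running `pytest` in the containing directory discovers and runs both.

    def redshift_velocity(z, c=299792.458):
        """Approximate recession velocity (km/s) for small redshift z."""
        if z < 0:
            raise ValueError("redshift must be non-negative")
        return c * z

    def test_zero_redshift():
        assert redshift_velocity(0.0) == 0.0

    def test_linear_scaling():
        assert abs(redshift_velocity(0.01) - 2997.92458) < 1e-6
    ```

    Kept in the repository, such tests double as regression checks: any later modification that changes the function's behavior makes the suite fail.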

  13. A Bayesian Model for the Prediction and Early Diagnosis of Alzheimer's Disease.

    PubMed

    Alexiou, Athanasios; Mantzavinos, Vasileios D; Greig, Nigel H; Kamal, Mohammad A

    2017-01-01

    Alzheimer's disease treatment is still an open problem. The diversity of symptoms, the alterations in common pathophysiology, the existence of asymptomatic cases, the different types of sporadic and familial Alzheimer's, and their relevance to other types of dementia and comorbidities have already created a myth and fear around the leading disease of the twenty-first century. Many failed recent clinical trials and novel medications have revealed early diagnosis as the most critical treatment solution, even though scientists tested the amyloid hypothesis and a few related drugs. Unfortunately, recent studies have indicated that the disease begins at very young ages, thus making it difficult to determine the right time for proper treatment. By taking into consideration all these multivariate aspects and unreliable factors against an appropriate treatment, we focused our research on a non-classic statistical evaluation of the best-known and accepted Alzheimer's biomarkers. Therefore, in this paper, the code and a few experimental results of a computational Bayesian tool are reported, dedicated to the correlation and assessment of several Alzheimer's biomarkers to export a probabilistic medical prognostic process. This new statistical software is executable in the Bayesian software WinBUGS, based on the latest Alzheimer's classification and the formulation of the known relative probabilities of the various biomarkers, correlated with Alzheimer's progression, through a set of discrete distributions. A user-friendly web page has been implemented to support medical doctors and researchers, who can upload Alzheimer's tests and receive statistics on the occurrence of Alzheimer's disease development or presence, due to abnormal testing in one or more biomarkers.
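    The flavor of the discrete-probability computation such a tool performs can be illustrated with a toy sequential Bayes update (all priors and likelihoods below are fabricated for demonstration; this is not the paper's WinBUGS model):

    ```python
    # Toy sequential Bayesian update: combine biomarker test results into a
    # posterior probability of disease, assuming conditionally independent
    # tests. All numbers are invented for illustration.

    def posterior(prior, likelihoods):
        """likelihoods: list of (P(result | disease), P(result | no disease))."""
        p = prior
        for p_given_d, p_given_nd in likelihoods:
            numerator = p_given_d * p
            p = numerator / (numerator + p_given_nd * (1 - p))
        return p

    # Two "abnormal" biomarker results, each more likely under disease:
    print(round(posterior(0.10, [(0.8, 0.2), (0.7, 0.3)]), 3))  # → 0.509
    ```

    Each abnormal result raises the posterior; a real model would also encode the correlations between biomarkers rather than treating them as independent.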

  14. Adaptation of all-ceramic fixed partial dentures.

    PubMed

    Borba, Márcia; Cesar, Paulo F; Griggs, Jason A; Della Bona, Álvaro

    2011-11-01

    To measure the marginal and internal fit of three-unit fixed partial dentures (FPDs) using the micro-CT technique, testing the null hypothesis that there is no difference in the adaptation between the ceramic systems studied. Stainless steel models of prepared abutments were fabricated to design the FPDs. Ten FPDs were produced from each framework ceramic (YZ - Vita In-Ceram YZ and IZ - Vita In-Ceram Zirconia) using CEREC inLab according to the manufacturer's instructions. All FPDs were veneered using the recommended porcelain. Each FPD was seated on the original model and scanned using micro-CT. Files were processed using NRecon and CTAn software. Adobe Photoshop and Image J software were used to analyze the cross-section images. Five measuring locations were used as follows: MG - marginal gap; CA - chamfer area; AW - axial wall; AOT - axio-occlusal transition area; OA - occlusal area. The horizontal marginal discrepancy (HMD) was evaluated in another set of images. Results were statistically analyzed using ANOVA and Tukey tests (α=0.05). The mean values for MG, CA, AW, OA and HMD were significantly different for all tested groups (p<0.05). IZ exhibited greater mean values than YZ for all measuring locations except for AW and AOT. OA showed the greatest mean gap values for both ceramic systems. MG and AW mean gap values were low for both systems. The ceramic systems evaluated showed different levels of marginal and internal fit, rejecting the study hypothesis. Yet, both ceramic systems showed clinically acceptable marginal and internal fit. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  15. Target controlled infusion for kids: trials and simulations.

    PubMed

    Mehta, Disha; McCormack, Jon; Fung, Parry; Dumont, Guy; Ansermino, J

    2008-01-01

    Target controlled infusion (TCI) for Kids is a computer controlled system designed to administer propofol for general anesthesia. A controller establishes infusion rates required to achieve a specified concentration at the drug's effect site (C(e)) by implementing a continuously updated pharmacokinetic-pharmacodynamic model. This manuscript provides an overview of the system's design, preclinical tests, and a clinical pilot study. In pre-clinical tests, predicted infusion rates for 20 simulated procedures displayed complete convergent validity between two software implementations, Labview and Matlab, at computational intervals of 5, 10, and 15 s, but diverged with 20 s intervals due to system rounding errors. The volume of drug delivered by the TCI system also displayed convergent validity with Tivatrainer, a widely used TCI simulation software. Further tests were conducted for 50 random procedures to evaluate discrepancies between volumes reported and those actually delivered by the system. Accuracies were within clinically acceptable ranges and normally distributed with a mean of 0.08 +/- 0.01 ml. In the clinical study, propofol pharmacokinetics were simulated for 30 surgical procedures involving children aged 3 months to 9 years. Predicted C(e) values during standard clinical practice, the accuracy of wake-up times predicted by the system, and potential correlations between patient wake-up times, C(e), and state entropy (SE) were assessed. Neither C(e) nor SE was a reliable predictor of wake-up time in children, but the small sample size of this study does not fully accommodate the noted variation in children's response to propofol. A C(e) value of 1.9 μg/ml was found to best predict emergence from anesthesia in children.

  16. Adaptation of all-ceramic fixed partial dentures

    PubMed Central

    Borba, Márcia; Cesar, Paulo F.; Griggs, Jason A.; Della Bona, Álvaro

    2011-01-01

    Objectives To measure the marginal and internal fit of three-unit fixed partial dentures (FPDs) using the micro-CT technique, testing the null hypothesis that there is no difference in the adaptation between the ceramic systems studied. Methods Stainless steel models of prepared abutments were fabricated to design the FPDs. Ten FPDs were produced from each framework ceramic (YZ - Vita In-Ceram YZ and IZ - Vita In-Ceram Zirconia) using CEREC inLab according to the manufacturer's instructions. All FPDs were veneered using the recommended porcelain. Each FPD was seated on the original model and scanned using micro-CT. Files were processed using NRecon and CTAn software. Adobe Photoshop and Image J software were used to analyze the cross-section images. Five measuring locations were used as follows: MG - marginal gap; CA - chamfer area; AW - axial wall; AOT - axio-occlusal transition area; OA - occlusal area. The horizontal marginal discrepancy (HMD) was evaluated in another set of images. Results were statistically analyzed using ANOVA and Tukey tests (α=0.05). Results The mean values for MG, CA, AW, OA and HMD were significantly different for all tested groups (p<0.05). IZ exhibited greater mean values than YZ for all measuring locations except for AW and AOT. OA showed the greatest mean gap values for both ceramic systems. MG and AW mean gap values were low for both systems. Significance The ceramic systems evaluated showed different levels of marginal and internal fit, rejecting the study hypothesis. Yet, both ceramic systems showed clinically acceptable marginal and internal fit. PMID:21920595

  17. Academic Testing and Grading with Spreadsheet Software.

    ERIC Educational Resources Information Center

    Ho, James K.

    1987-01-01

    Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)

  18. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although it is generally accepted that the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.

  19. Assessment Environment for Complex Systems Software Guide

    NASA Technical Reports Server (NTRS)

    2013-01-01

    This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.

  20. ACTS (Advanced Communications Technology Satellite) Propagation Experiment: Preprocessing Software User's Manual

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.; Wang, Xuhe; Westenhaver, David

    1996-01-01

    The preprocessing software manual describes the Actspp program originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has been quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Prior to having data acceptable for archival functions, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.

  1. OSI for hardware/software interoperability

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Harvey, Donald L.; Linderman, Richard W.; Gardener, Gary A.; Capraro, Gerard T.

    1994-03-01

    There is a need in public safety for real-time data collection and transmission from one or more sensors. The Rome Laboratory and the Ballistic Missile Defense Organization are pursuing an effort to bring the benefits of Open System Architectures (OSA) to embedded systems within the Department of Defense. When developed properly, OSA provides interoperability, commonality, graceful upgradeability, survivability and hardware/software transportability, greatly reducing life cycle, integration, and supportability costs. Architecture flexibility can be achieved to take advantage of commercial accomplishments by basing these developments on vendor-neutral, commercially accepted standards and protocols.

  2. Software interface verifier

    NASA Technical Reports Server (NTRS)

    Soderstrom, Tomas J.; Krall, Laura A.; Hope, Sharon A.; Zupke, Brian S.

    1994-01-01

    A Telos study of 40 recent subsystem deliveries into the DSN at JPL found software interface testing to be the single most expensive and error-prone activity, and the study team suggested creating an automated software interface test tool. The resulting Software Interface Verifier (SIV), which was funded by NASA/JPL and created by Telos, employed 92 percent software reuse to quickly create an initial version which incorporated early user feedback. SIV is now successfully used by developers for interface prototyping and unit testing, by test engineers for formal testing, and by end users for non-intrusive data flow tests in the operational environment. Metrics, including cost, are included. Lessons learned include the need for early user training. SIV is ported to many platforms and can be successfully used or tailored by other NASA groups.

  3. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
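    A toy sketch of the fuzzing idea (random mutation of a valid seed input; the length-prefixed parser below is hypothetical, and real model-based fuzzers generate far smarter inputs than this): corrupt one byte at a time and check that the parser either succeeds or rejects the input cleanly, never crashing.

    ```python
    # Minimal random-mutation fuzzer over a hypothetical binary format
    # where the first byte is the payload length.
    import random

    def parse_length_prefixed(data: bytes) -> bytes:
        if not data:
            raise ValueError("empty input")
        n = data[0]
        if n > len(data) - 1:
            raise ValueError("length exceeds input")
        return data[1:1 + n]

    def fuzz(seed: bytes, rounds: int = 1000) -> int:
        """Mutate a valid seed; count inputs the parser rejects cleanly."""
        rng = random.Random(0)  # fixed seed for reproducibility
        rejected = 0
        for _ in range(rounds):
            buf = bytearray(seed)
            buf[rng.randrange(len(buf))] = rng.randrange(256)
            try:
                parse_length_prefixed(bytes(buf))
            except ValueError:
                rejected += 1  # clean rejection, not a crash
        return rejected

    print(fuzz(b"\x03abc"))
    ```

    Any unhandled exception other than the parser's own `ValueError` would surface here as a crash, which is exactly the class of defect fuzzing hunts for.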

  4. Agile deployment and code coverage testing metrics of the boot software on-board Solar Orbiter's Energetic Particle Detector

    NASA Astrophysics Data System (ADS)

    Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián

    2018-02-01

    In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage; any test case missed in this process may affect the quality of the overall on-board software. According to European Cooperation for Space Standardization (ECSS) standards, testing this kind of critical software must cover 100% of the source code statements and decision paths. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the exacting code coverage demands on the boot software.
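    The decision-path coverage requirement can be illustrated with a small sketch (in Python for brevity; the `checked_load` API and CRC scheme are invented, not the EPD boot code): each assertion exercises one branch, including a fault-injection case that models a corrupted memory read.

    ```python
    # Sketch of statement/decision coverage: three tests, one per branch,
    # including the fault-tolerance path for a corrupted word.

    def checked_load(memory, address, expected_crc, crc_fn):
        """Load a word and verify it against a stored CRC (hypothetical API)."""
        word = memory.get(address)
        if word is None:
            return ("error", "bad address")
        if crc_fn(word) != expected_crc:
            return ("error", "corrupted")  # fault-tolerance path
        return ("ok", word)

    crc = lambda w: sum(w) % 256        # toy checksum, not a real CRC
    mem = {0x100: b"BOOT"}

    assert checked_load(mem, 0x100, crc(b"BOOT"), crc) == ("ok", b"BOOT")
    assert checked_load(mem, 0x200, 0, crc)[0] == "error"                 # bad address
    assert checked_load(mem, 0x100, crc(b"BOOT") ^ 1, crc)[0] == "error"  # injected fault
    ```

    Dropping any one of the three tests leaves a decision path unexecuted, which a coverage tool would report as less than 100% decision coverage.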

  5. Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.

    DOT National Transportation Integrated Search

    2009-01-01

    The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...

  6. Space vehicle onboard command encoder

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A flexible onboard encoder system was designed for the space shuttle. The following areas were covered: (1) implementation of the encoder design into hardware to demonstrate the various encoding algorithms/code formats and modulation techniques in a single hardware package, maintaining comparable reliability and link integrity with the existing link systems, and (2) integration of the various techniques into a single design using current technology. The primary function of the command encoder is to accept input commands, generated either locally onboard the space shuttle or remotely from the ground, format and encode the commands in accordance with the payload input requirements, and appropriately modulate a subcarrier for transmission by the baseband RF modulator. The following information was provided: command encoder system design, brassboard hardware design, test set hardware and system packaging, and software.

  7. [Application of ARIMA model on prediction of malaria incidence].

    PubMed

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria in Hubei Province by applying the autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was used to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model proved relatively optimal, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the model's predicted values. The prediction effect of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
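    A hand-rolled sketch of the differencing-plus-autoregression idea behind such a seasonal ARIMA fit (the monthly series below is fabricated, and a real analysis would use a dedicated package, as the paper did with SPSS):

    ```python
    # Illustrate the (d=1, D=1, s=12) differencing of ARIMA (1,1,1)(1,1,0)12:
    # difference at lag 1 and at the seasonal lag 12 to remove trend and
    # seasonality, then fit an AR(1) coefficient by least squares.
    import random

    def difference(x, lag):
        return [x[i] - x[i - lag] for i in range(lag, len(x))]

    def fit_ar1(x):
        """Least-squares slope of x[t] on x[t-1], no intercept."""
        num = sum(x[i] * x[i - 1] for i in range(1, len(x)))
        den = sum(v * v for v in x[:-1])
        return num / den

    rng = random.Random(1)
    # Fabricated monthly counts: trend + a seasonal spike + noise.
    series = [10 + 0.5 * t + (5 if t % 12 == 6 else 0) + rng.gauss(0, 1)
              for t in range(60)]
    stationary = difference(difference(series, 1), 12)
    phi = fit_ar1(stationary)
    print(round(phi, 3), round(phi * stationary[-1], 3))  # coefficient, one-step forecast
    ```

    The MA terms and the model-selection step (comparing AIC/SBC across candidate orders) are omitted here; they are what a full ARIMA routine adds on top of this skeleton.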

  8. Pulse Code Modulation (PCM) data storage and analysis using a microcomputer

    NASA Technical Reports Server (NTRS)

    Massey, D. E.

    1986-01-01

    The current widespread use of microcomputers has led to the creation of some very low-cost instrumentation. A Pulse Code Modulation (PCM) storage device/data analyzer -- a peripheral plug-in board especially constructed to enable a personal computer to store and analyze data from a PCM source -- was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing. This board and custom-written software turn a computer into a snapshot PCM decommutator which will accept and store many hundreds or thousands of PCM telemetry data frames, then sift through them repeatedly. These data can be converted to any number base and displayed, examined for bit dropouts or changes in particular words or frames, graphically plotted, or statistically analyzed.
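    The snapshot-decommutator idea can be sketched as follows (the sync pattern and frame layout are invented for illustration, not taken from the NASA hardware): capture a byte stream, split it into fixed-length frames on a sync word, then scan the stored frames for a word that changed.

    ```python
    # Toy PCM decommutation: frame = 2-byte sync + three 2-byte words.
    SYNC = b"\xeb\x90"
    FRAME_LEN = 8

    def decommutate(stream: bytes):
        """Split a captured byte stream into frames aligned on the sync word."""
        frames = []
        i = stream.find(SYNC)
        while i != -1 and i + FRAME_LEN <= len(stream):
            frames.append(stream[i:i + FRAME_LEN])
            i = stream.find(SYNC, i + FRAME_LEN)
        return frames

    def word(frame: bytes, n: int) -> int:
        """Return the n-th 16-bit word after the sync pattern."""
        off = 2 + 2 * n
        return int.from_bytes(frame[off:off + 2], "big")

    stream = SYNC + b"\x00\x01\x00\x02\x00\x03" + SYNC + b"\x00\x01\x00\xff\x00\x03"
    frames = decommutate(stream)
    print([word(f, 1) for f in frames])  # → [2, 255]: word 1 changed between frames
    ```

    Sifting the stored frames repeatedly, as the abstract describes, amounts to re-running scans like the last line over the same captured buffer with different word indices.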

  9. Identifying a maximum tolerated contour in two-dimensional dose-finding

    PubMed Central

    Wages, Nolan A.

    2016-01-01

    The majority of Phase I methods for multi-agent trials have focused on identifying a single maximum tolerated dose combination (MTDC) among those being investigated. Some published methods in the area have been based on the notion that there is no unique MTDC, and that the set of dose combinations with acceptable toxicity forms an equivalence contour in two dimensions. Therefore, it may be of interest to find multiple MTDCs for further testing for efficacy in a Phase II setting. In this paper, we present a new dose-finding method that extends the continual reassessment method to account for the location of multiple MTDCs. Operating characteristics are demonstrated through simulation studies, and are compared to existing methodology. Some brief discussion of implementation and available software is also provided. PMID:26910586

  10. Comparing the effect of mefenamic Acid and vitex agnus on intrauterine device induced bleeding.

    PubMed

    Yavarikia, Parisa; Shahnazi, Mahnaz; Hadavand Mirzaie, Samira; Javadzadeh, Yousef; Lutfi, Razieh

    2013-09-01

    Increased bleeding is the most common cause of intrauterine device (IUD) removal. The use of alternative therapies to treat bleeding has increased due to the complications of medications, but most alternative therapies are not accepted by women. Therefore, studies to find the right treatment, with fewer complications and acceptable to women, are necessary. This study aimed to compare the effect of mefenamic acid and vitex agnus castus on IUD induced bleeding. This was a double blinded randomized controlled clinical trial. It was conducted on 84 women randomly allocated into two groups of 42, treated with mefenamic acid or vitex agnus capsules taken three times a day during menstruation for four months. Data were collected by a demographic questionnaire and the Higham 5 stage chart (1 month before the treatment and 4 months during the treatment). Paired t-test, independent t-test, chi-square test, analysis of variance (ANOVA) with repeated measurements, and SPSS software were used to analyze the results. Mefenamic acid and vitex agnus significantly decreased bleeding. This decrease in month 4 was 52% in the mefenamic acid group and 47.6% in the vitex agnus group. The mean change in bleeding score was statistically significant between the two groups in the first three months and before the intervention. In the mefenamic acid group, the decrease in bleeding was significantly greater than in the vitex agnus group. However, during the 4th month, the mean change was not statistically significant. Mefenamic acid and vitex agnus were both effective on IUD induced bleeding; however, mefenamic acid was more effective.

  11. Comparing the Effect of Mefenamic Acid and Vitex Agnus on Intrauterine Device Induced Bleeding

    PubMed Central

    Yavarikia, Parisa; Shahnazi, Mahnaz; Hadavand Mirzaie, Samira; Javadzadeh, Yousef; Lutfi, Razieh

    2013-01-01

    Introduction: Increased bleeding is the most common cause of intrauterine device (IUD) removal. The use of alternative therapies to treat bleeding has increased due to the complications of medications, but most alternative therapies are not accepted by women. Therefore, studies to find the right treatment, with fewer complications and acceptable to women, are necessary. This study aimed to compare the effect of mefenamic acid and vitex agnus castus on IUD induced bleeding. Methods: This was a double blinded randomized controlled clinical trial. It was conducted on 84 women randomly allocated into two groups of 42, treated with mefenamic acid or vitex agnus capsules taken three times a day during menstruation for four months. Data were collected by a demographic questionnaire and the Higham 5 stage chart (1 month before the treatment and 4 months during the treatment). Paired t-test, independent t-test, chi-square test, analysis of variance (ANOVA) with repeated measurements, and SPSS software were used to analyze the results. Results: Mefenamic acid and vitex agnus significantly decreased bleeding. This decrease in month 4 was 52% in the mefenamic acid group and 47.6% in the vitex agnus group. The mean change in bleeding score was statistically significant between the two groups in the first three months and before the intervention. In the mefenamic acid group, the decrease in bleeding was significantly greater than in the vitex agnus group. However, during the 4th month, the mean change was not statistically significant. Conclusion: Mefenamic acid and vitex agnus were both effective on IUD induced bleeding; however, mefenamic acid was more effective. PMID:25276733

  12. Adaptive Software Architecture Based on Confident HCI for the Deployment of Sensitive Services in Smart Homes

    PubMed Central

    Vega-Barbas, Mario; Pau, Iván; Martín-Ruiz, María Luisa; Seoane, Fernando

    2015-01-01

    Smart spaces foster the development of natural and appropriate forms of human-computer interaction by taking advantage of home customization. The interaction potential of the Smart Home, which is a special type of smart space, is of particular interest in fields in which the acceptance of new technologies is limited and restrictive. The integration of smart home design patterns with sensitive solutions can increase user acceptance. In this paper, we present the main challenges that have been identified in the literature for the successful deployment of sensitive services (e.g., telemedicine and assistive services) in smart spaces and a software architecture that models the functionalities of a Smart Home platform that are required to maintain and support such sensitive services. This architecture emphasizes user interaction as a key concept to facilitate the acceptance of sensitive services by end-users and utilizes activity theory to support its innovative design. The application of activity theory to the architecture eases the handling of novel concepts, such as understanding of the system by patients at home or the affordability of assistive services. Finally, we provide a proof-of-concept implementation of the architecture and compare the results with other architectures from the literature. PMID:25815449

  13. Dental Informatics tool “SOFPRO” for the study of oral submucous fibrosis

    PubMed Central

    Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K

    2016-01-01

    Background: Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education and research have been developed. However, very few such applications are developed and utilized in epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population of Asian countries. Aims and Objectives: To design and develop a user-friendly software tool for the descriptive epidemiological study of OSF. Materials and Methods: With the help of a software engineer, a computer program, SOFPRO, was designed and developed using Ms-Visual Basic 6.0 (VB), Ms-Access 2000, Crystal Report 7.0 and Ms-Paint in the operating system XP. For analysis purposes, the available OSF data from the departmental precancer registry were fed into SOFPRO. Results: Known, unknown, and null data were successfully accepted during data entry and represented in the data analysis of OSF. Smooth working of SOFPRO and its correct data flow were tested against real-time data of OSF. Conclusion: SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management and analysis of OSF patients. PMID:27601808

  14. Gas turbine engines and transmissions for bus demonstration programs. Technical status report, 30 April 1979-31 July 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigro, D.N.

    1979-07-01

    The quarterly status report covers the period from 30 April 1979 through 31 July 1979 and is a summary of DDA activities for the effort performed on the procurement and delivery of eleven (11) Allison GT 404-4 gas turbine engines and five (5) HT740CT and six (6) V730CT Allison automatic transmissions and the required associated software. The contract requires the delivery of eleven (11) Allison GT 404-4 Industrial Gas Turbine Engines and five (5) HT740CT and six (6) V730CT Allison Automatic Transmissions for the Greyhound and Transit Coaches, respectively. In addition, software items such as cost reports, technical reports, installation drawings, acceptance test data and parts lists are required. A recent decision by the DOE will modify the build configuration for the last four (4) Transit Coach engines. It was decided by the DOE at a meeting in Washington, DC on March 28, 1979 with representatives from DDA, NASA/LeRC, JPL and Booz-Allen and Hamilton that these engines will be built with ceramic regenerators. (TFD)

  15. Open control/display system for a telerobotics work station

    NASA Technical Reports Server (NTRS)

    Keslowitz, Saul

    1987-01-01

    A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.

  16. Air Force Research Laboratory Spacecraft Cryocooler Endurance Evaluation Facility Closing Report

    NASA Astrophysics Data System (ADS)

    Armstrong, J.; Martin, K. W.; Fraser, T.

    2015-12-01

    The Air Force Research Laboratory (AFRL) Spacecraft Component Thermal Research Group has been devoted to evaluating lifetime performance of space cryocooler technology for over twenty years. Long-life data is essential for confirming design lifetimes for space cryocoolers. Continuous operation in a simulated space environment is the only accepted method to test for degradation. AFRL has provided raw data and detailed evaluations to cryocooler developers for advancing the technology, correcting discovered deficiencies, and improving cryocooler designs. At AFRL, units of varying design and refrigeration cycles were instrumented in state-of-the-art experiment stands to provide spacelike conditions and were equipped with software data acquisition to track critical cryocooler operating parameters. This data allowed an assessment of the technology's ability to meet the desired lifetime and documented any long-term changes in performance. This paper will outline a final report of the various flight cryocoolers tested in our laboratory. The data summarized includes the seven cryocoolers tested during 2014-2015. These seven coolers have a combined total of 433,326 hours (49.5 years) of operation.
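
    As a quick arithmetic check on the total reported above, the combined operating time converts from hours to years as follows (a sketch using plain 365-day years, not part of the report itself):

```python
# Sanity-check the reported cryocooler operating total: 433,326 hours in years.
hours = 433_326
years = hours / (24 * 365)  # ~49.5 years, matching the figure quoted in the abstract
```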

  17. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, along with a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adopt the new system are described, how these were applied to the current software, and the results obtained. An overview of how the new system may be used in future projects is also presented.

  18. Field Test of Route Planning Software for Lunar Polar Missions

    NASA Astrophysics Data System (ADS)

    Horchler, A. D.; Cunningham, C.; Jones, H. L.; Arnett, D.; Fang, E.; Amoroso, E.; Otten, N.; Kitchell, F.; Holst, I.; Rock, G.; Whittaker, W.

    2017-10-01

    A novel field test paradigm has been developed to demonstrate and validate route planning software in the stark low-angled light and sweeping shadows a rover would experience at the poles of the Moon. Software, ConOps, and test results are presented.

  19. Are You Listening to Your Computer?

    ERIC Educational Resources Information Center

    Shugg, Alan

    1992-01-01

    Accepting the great motivational value of computers in second-language learning, this article describes ways to use authentic language recorded on a computer with HyperCard. Graphics, sound, and hardware/software requirements are noted, along with brief descriptions of programming with sound and specific programs. (LB)

  20. Ethical and Professional Issues in Computer-Assisted Therapy.

    ERIC Educational Resources Information Center

    Ford, B. Douglas

    1993-01-01

    Discusses ethical and professional issues in psychology regarding computer-assisted therapy (CAT). Topics addressed include an explanation of CAT; whether CAT is psychotherapy; software, including independent use, validation of effectiveness, and restricted access; clinician resistance; client acceptance; the impact on ethical standards; and a…

  1. The Role of Crop Systems Simulation in Agriculture and Environment

    USDA-ARS?s Scientific Manuscript database

    Over the past 30 to 40 years, simulation of crop systems has advanced from a neophyte science with inadequate computing power into a robust and increasingly accepted science supported by improved software, languages, development tools, and computer capabilities. Crop system simulators contain mathe...

  2. 49 CFR 238.111 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the times and places of the pre-revenue service tests to permit FRA observation of such tests. For... 49 Transportation 4 2010-10-01 2010-10-01 false Pre-revenue service acceptance testing plan. 238... and General Requirements § 238.111 Pre-revenue service acceptance testing plan. (a) Passenger...

  3. 49 CFR 238.111 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the times and places of the pre-revenue service tests to permit FRA observation of such tests. For... 49 Transportation 4 2011-10-01 2011-10-01 false Pre-revenue service acceptance testing plan. 238... and General Requirements § 238.111 Pre-revenue service acceptance testing plan. (a) Passenger...

  4. 49 CFR 238.111 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the times and places of the pre-revenue service tests to permit FRA observation of such tests. For... 49 Transportation 4 2012-10-01 2012-10-01 false Pre-revenue service acceptance testing plan. 238... and General Requirements § 238.111 Pre-revenue service acceptance testing plan. (a) Passenger...

  5. 49 CFR 238.111 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the times and places of the pre-revenue service tests to permit FRA observation of such tests. For... 49 Transportation 4 2014-10-01 2014-10-01 false Pre-revenue service acceptance testing plan. 238... and General Requirements § 238.111 Pre-revenue service acceptance testing plan. (a) Passenger...

  6. 49 CFR 238.111 - Pre-revenue service acceptance testing plan.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the times and places of the pre-revenue service tests to permit FRA observation of such tests. For... 49 Transportation 4 2013-10-01 2013-10-01 false Pre-revenue service acceptance testing plan. 238... and General Requirements § 238.111 Pre-revenue service acceptance testing plan. (a) Passenger...

  7. Proposed acceptance, qualification, and characterization tests for thin-film PV modules

    NASA Technical Reports Server (NTRS)

    Waddington, D.; Mrig, L.; Deblasio, R.; Ross, R.

    1988-01-01

    Details of a proposed test program for PV thin-film modules which the Department of Energy has directed the Solar Energy Research Institute (SERI) to prepare are presented. Results of one of the characterization tests that SERI has performed are also presented. The objective is to establish a common approach to testing modules that will be acceptable to both users and manufacturers. The tests include acceptance, qualification, and characterization tests. Acceptance tests verify that randomly selected modules have similar characteristics. Qualification tests are based on accelerated test methods designed to simulate adverse conditions. Characterization tests provide data on performance in a predefined environment.

  8. Proceedings of the Annual Ada Software Engineering Education and Training Symposium (3rd) Held in Denver, Colorado on June 14-16, 1988

    DTIC Science & Technology

    1988-06-01

    Based Software Engineering Project Course... Software Engineering, Software Engineering Concepts: The Importance of Object-Based...quality assurance, and independent system testing. The Chief Programmer is responsible for all software development activities, including prototyping...during the Requirements Analysis phase, the Preliminary Design, the Detailed Design, Coding and Unit Testing, CSC Integration and Testing, and informal

  9. Software OT&E Guidelines. Volume 1. Software Test Manager’s Handbook

    DTIC Science & Technology

    1981-02-01

    The Software OT&E Guidelines is a set of handbooks prepared by the Computer/Support Systems...is one of a set of handbooks prepared by the Computer/Support Systems Division of the Test and Evaluation Directorate, Air Force Test and Evaluation...E. Software Maintainability...F. Standard Questionnaires...1. Operator-Computer Interface Evaluation

  10. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction

    PubMed Central

    Venkatesan, R.

    2016-01-01

    Effective prediction of defect-prone software modules will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the development process, as it saves time and budget by detecting defects early and delivering a defect-free (bug-free) product to the customers. This testing phase should be operated carefully and effectively. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics, in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
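
    The paper's ADBBO optimizer is not reproduced here, but the RBFNN classifier it tunes can be sketched in a few lines: Gaussian radial-basis features followed by least-squares output weights. Everything below (the toy data, the naive center selection, the gamma value) is illustrative only, not the paper's method or data:

```python
import numpy as np

def rbf_features(X, centers, gamma):
    # Gaussian RBF activations: exp(-gamma * ||x - c||^2) for each center c
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# toy data: two software-metric-like features, binary defect label
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

centers = X[:8]  # naive center choice; the paper optimizes this step with ADBBO
H = rbf_features(X, centers, gamma=1.0)
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
pred = (H @ w > 0.5).astype(float)         # threshold the fitted outputs
accuracy = (pred == y).mean()
```

    In the real approach, the centers, widths, and dimensionality are searched by the biogeography-based optimizer rather than fixed by hand.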

  11. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.

    PubMed

    Kumudha, P; Venkatesan, R

    Effective prediction of defect-prone software modules will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the development process, as it saves time and budget by detecting defects early and delivering a defect-free (bug-free) product to the customers. This testing phase should be operated carefully and effectively. In order to improve the software testing process, fault prediction methods identify the software parts that are more likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics, in comparison with the early predictors available in the literature for the same datasets.

  12. Coordination and organization of security software process for power information application environment

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    2017-09-01

    As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of the security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the process for security software and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper introduces the above process into the power information platform.

  13. Dosimetric validation for an automatic brain metastases planning software using single-isocenter dynamic conformal arcs.

    PubMed

    Liu, Haisong; Li, Jun; Pappas, Evangelos; Andrews, David; Evans, James; Werner-Wasik, Maria; Yu, Yan; Dicker, Adam; Shi, Wenyin

    2016-09-08

    An automatic brain-metastases planning (ABMP) software has been installed in our institution. It is dedicated to treating multiple brain metastases with radiosurgery on linear accelerators (linacs) using a single setup isocenter with noncoplanar dynamic conformal arcs. This study is to validate the calculated absolute dose and dose distribution of ABMP. Three types of measurements were performed to validate the planning software: (1) dual micro ion chambers were used with an acrylic phantom to measure the absolute dose; (2) a 3D cylindrical phantom with a dual diode array was used to evaluate 2D dose distribution and point dose for smaller targets; and (3) a 3D pseudo-in vivo patient-specific phantom filled with polymer gels was used to evaluate the accuracy of 3D dose distribution and radiation delivery. Micro chamber measurement of two targets (volumes of 1.2 cc and 0.9 cc, respectively) showed that the percentage differences of the absolute dose at both targets were less than 1%. The averaged gamma index (GI) passing rate of five different plans measured with the diode array phantom was above 98%, using criteria of 3% dose difference, 1 mm distance to agreement (DTA), and 10% low-dose threshold. 3D gel phantom measurement results demonstrated a 3D displacement of nine targets of 0.7 ± 0.4 mm (range 0.2 ~ 1.1 mm). The averaged two-dimensional (2D) GI passing rate for several regions of interest (ROIs) on axial slices that encompass each one of the nine targets was above 98% (5% dose difference, 2 mm DTA, and 10% low-dose threshold). The measured D95, the minimum dose that covers 95% of the target volume, of the nine targets was 0.7% less than the calculated D95. Three different types of dosimetric verification methods were used and proved that the dose calculation of the new automatic brain metastases planning (ABMP) software was clinically acceptable. The 3D pseudo-in vivo patient-specific gel phantom test also served as an end-to-end test, validating not only the dose calculation but also the treatment delivery accuracy. © 2016 The Authors.
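
    The gamma-index passing rates quoted above combine a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal one-dimensional sketch of that computation on synthetic profiles (not the study's data) might look like:

```python
import numpy as np

def gamma_index_1d(x, measured, calculated, dd=0.03, dta=1.0):
    # For each measured point, gamma is the minimum over all calculated points
    # of sqrt((dose diff / (dd * Dmax))^2 + (distance / dta)^2); a point passes
    # when gamma <= 1. dd is the fractional dose-difference criterion, dta the
    # distance-to-agreement criterion in the same units as x.
    g = np.empty(len(x))
    dmax = measured.max()
    for i, (xi, mi) in enumerate(zip(x, measured)):
        dose_term = (calculated - mi) / (dd * dmax)
        dist_term = (x - xi) / dta
        g[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return g

x = np.linspace(0.0, 20.0, 201)          # position in mm
calc = np.exp(-((x - 10.0) / 4.0) ** 2)  # idealized calculated dose profile
meas = 1.01 * calc                       # "measured" profile, uniformly 1% high
g = gamma_index_1d(x, meas, calc)
passing_rate = 100.0 * (g <= 1.0).mean()
```

    Clinical tools compute this in 2D or 3D with dose interpolation and low-dose thresholding; this toy version only illustrates how the two criteria combine.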

  14. IMCS reflight certification requirements and design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMC Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.

  15. Cassini's Test Methodology for Flight Software Verification and Operations

    NASA Technical Reports Server (NTRS)

    Wang, Eric; Brown, Jay

    2007-01-01

    The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft is comprised of various subsystems, including the Attitude and Articulation Control Subsystem (AACS). The AACS Flight Software (FSW) and its development has been an ongoing effort, from the design, development and finally operations. As planned, major modifications to certain FSW functions were designed, tested, verified and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).

  16. Simple Colorimetric Sensor for Trinitrotoluene Testing

    NASA Astrophysics Data System (ADS)

    Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.

    2017-02-01

    A simple colorimetric sensor for trinitrotoluene (TNT) determination, using a commercial scanner to capture images, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent to produce a color change within 96-well plates, which was then observed and recorded using the scanner. The intensity of the color change increased with TNT concentration, so the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations were found to range from non-detectable to 0.26 ± 0.04 mg mL-1. The obtained recovery values (93-95%) were acceptable for the soil samples tested.
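
    The figures of merit reported here (linearity r, detection limit) come from a standard calibration-curve workflow. A hedged sketch with invented intensity values (not the paper's data), using the common 3.3·σ/slope estimate for the detection limit:

```python
import numpy as np

# hypothetical calibration points inside the reported linear range (mg/mL)
conc = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
intensity = np.array([10.1, 19.8, 30.2, 40.1, 49.7])  # invented intensities

slope, intercept = np.polyfit(conc, intensity, 1)  # least-squares calibration line
r = np.corrcoef(conc, intensity)[0, 1]             # linearity of the calibration

# a common detection-limit estimate: LOD = 3.3 * s_residual / slope
resid = intensity - (slope * conc + intercept)
s_res = resid.std(ddof=2)                          # n - 2 dof for a linear fit
lod = 3.3 * s_res / slope
```

    The same residual standard deviation also feeds the relative-standard-deviation figure for sensitivity when the calibration is repeated.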

  17. Argon-oxygen atmospheric pressure plasma treatment on carbon fiber reinforced polymer for improved bonding

    NASA Astrophysics Data System (ADS)

    Chartosias, Marios

    Acceptance of Carbon Fiber Reinforced Polymer (CFRP) structures requires a robust surface preparation method with improved process controls capable of ensuring high bond quality. Surface preparation in a production clean room environment prior to applying adhesive for bonding would minimize the risk of contamination and reduce cost. Plasma treatment is a robust surface preparation process capable of being applied in a production clean room environment, with process parameters that are easily controlled and documented. Repeatable and consistent processing is enabled through the development of a process parameter window using techniques such as Design of Experiments (DOE) tailored to specific adhesive and substrate bonding applications. Insight from the respective plasma treatment Original Equipment Manufacturers (OEMs) and screening tests separated critical process factors from non-factors and set the associated factor levels prior to execution of the DOE. Results from mode I Double Cantilever Beam (DCB) testing per the ASTM D 5528 [1] standard and DOE statistical analysis software are used to produce a regression model and determine optimum settings for each factor.

  18. PDB_REDO: constructive validation, more than just looking for errors.

    PubMed

    Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis

    2012-04-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise 'static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.

  19. PDB_REDO: constructive validation, more than just looking for errors

    PubMed Central

    Joosten, Robbie P.; Joosten, Krista; Murshudov, Garib N.; Perrakis, Anastassis

    2012-01-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R free and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise ‘static’ structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets. PMID:22505269

  20. Orion Scripted Interface Generator (OrionSIG)

    NASA Technical Reports Server (NTRS)

    Dooling, Robert J.

    2013-01-01

    The Orion spacecraft undergoing development at NASA and Lockheed Martin aims to launch the first humans to set foot on asteroids and Mars. Sensors onboard Orion must transmit back to Earth astronomical amounts of data, recording almost everything in the 50,231 lb (22,784 kg) spacecraft, down to the temperatures, voltages, or torsions of even the most minor components. This report introduces the new Orion Scripted Interface Generator (OrionSIG) software created by summer 2013 NASA interns Robert Dooling and Samuel Harris. OrionSIG receives a list of Orion variables and produces a script to graph these measurements regardless of their size or type. The program also accepts many other input options to manipulate displays, such as limits on the graph's range or commands to graph different values in a reverse sawtooth wave. OrionSIG paves the way for monitoring stations on Earth to process, display, and test Orion data much more efficiently, a helpful asset in preparation for Orion's first test mission in 2014.

  1. Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 for GMSEC

    NASA Technical Reports Server (NTRS)

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    Scalable Integrated Multi-Mission Support System (SIMSS) Simulator Release 2.0 software is designed to perform a variety of test activities related to spacecraft simulations and ground segment checks. This innovation uses the existing SIMSS framework, which interfaces with the GMSEC (Goddard Mission Services Evolution Center) Application Programming Interface (API) Version 3.0 message middleware, and allows SIMSS to accept GMSEC standard messages via the GMSEC message bus service. SIMSS is a distributed, component-based, plug-and-play client-server system that is useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations, and is designed to be user-configurable, or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as to significantly reduce a mission's integration time and risk.

  2. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).
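
    An acceptance test in a UATP is typically a scripted pass/fail check of a user-visible requirement. As a hypothetical sketch (the Lims class and every method name below are invented stand-ins, not any real LIMS product's API):

```python
# Hypothetical in-memory stand-in for a LIMS; class and method names are
# invented for illustration and do not reflect any real product's interface.
class Lims:
    def __init__(self):
        self._samples = {}
        self._next_id = 1

    def register_sample(self, material, lot):
        sid = f"S{self._next_id:04d}"
        self._next_id += 1
        self._samples[sid] = {"material": material, "lot": lot, "status": "LOGGED"}
        return sid

    def record_result(self, sid, assay, value):
        self._samples[sid][assay] = value
        self._samples[sid]["status"] = "TESTED"

    def status(self, sid):
        return self._samples[sid]["status"]

def uat_sample_lifecycle(lims):
    # Acceptance criterion: a registered sample moves LOGGED -> TESTED
    sid = lims.register_sample("API powder", lot="L-123")
    assert lims.status(sid) == "LOGGED"
    lims.record_result(sid, "purity_pct", 99.2)
    assert lims.status(sid) == "TESTED"
    return "pass"

result = uat_sample_lifecycle(Lims())
```

    In a regulated setting each such check would be pre-approved in the UATP, executed against the installed system, and its outcome documented as evidence for the accept/reject decision.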

  3. Technical Report on the Modification of 3-Dimensional Non-contact Human Body Laser Scanner for the Measurement of Anthropometric Dimensions: Verification of its Accuracy and Precision.

    PubMed

    Jafari Roodbandi, Akram Sadat; Naderi, Hamid; Hashenmi-Nejad, Naser; Choobineh, Alireza; Baneshi, Mohammad Reza; Feyzi, Vafa

    2017-01-01

    Introduction: Three-dimensional (3D) scanners are widely used in medicine. One of the applications of 3D scanners is the acquisition of anthropometric dimensions for ergonomics and the creation of an anthropometry data bank. The aim of this study was to evaluate the precision and accuracy of a modified 3D scanner fabricated in this study. Methods: In this work, a 3D scan of the human body was obtained using DAVID Laser Scanner software and its calibration background, a linear low-power laser, and one advanced webcam. After the 3D scans were imported to the Geomagic software, 10 anthropometric dimensions of 10 subjects were obtained. The measurements of the 3D scanner were compared to the measurements of the same dimensions by a direct anthropometric method. The precision and accuracy of the measurements of the 3D scanner were then evaluated. The obtained data were analyzed using an independent sample t test with the SPSS software. Results: The minimum and maximum measurement differences from three consecutive scans by the 3D scanner were 0.03 mm and 18 mm, respectively. The differences between the measurements by the direct anthropometry method and the 3D scanner were not statistically significant. Therefore, the accuracy of the 3D scanner is acceptable. Conclusion: Future studies will need to focus on the improvement of the scanning speed and the quality of the scanned image.
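
    The comparison step, testing whether scanner and direct measurements differ significantly, can be sketched with a pooled-variance independent-samples t statistic (the readings below are invented for illustration; the study itself used SPSS):

```python
import math
from statistics import mean, stdev

# invented readings (mm): direct anthropometry vs. the modified 3D scanner
direct  = [452.0, 318.5, 210.2, 399.7, 150.3]
scanner = [452.6, 317.9, 210.0, 400.4, 149.8]

def independent_t(a, b):
    # Student's t for two independent samples with pooled variance
    # (the "equal variances assumed" form of an independent-samples t test)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

t_stat = independent_t(direct, scanner)  # near zero here: the means barely differ
```

    A |t| below the critical value for the chosen significance level supports the paper's conclusion that the two methods do not differ significantly.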

  4. Technical Report on the Modification of 3-Dimensional Non-contact Human Body Laser Scanner for the Measurement of Anthropometric Dimensions: Verification of its Accuracy and Precision

    PubMed Central

    Jafari Roodbandi, Akram Sadat; Naderi, Hamid; Hashenmi-Nejad, Naser; Choobineh, Alireza; Baneshi, Mohammad Reza; Feyzi, Vafa

    2017-01-01

    Introduction: Three-dimensional (3D) scanners are widely used in medicine. One of the applications of 3D scanners is the acquisition of anthropometric dimensions for ergonomics and the creation of an anthropometry data bank. The aim of this study was to evaluate the precision and accuracy of a modified 3D scanner fabricated in this study. Methods: In this work, a 3D scan of the human body was obtained using DAVID Laser Scanner software and its calibration background, a linear low-power laser, and one advanced webcam. After the 3D scans were imported to the Geomagic software, 10 anthropometric dimensions of 10 subjects were obtained. The measurements of the 3D scanner were compared to the measurements of the same dimensions by a direct anthropometric method. The precision and accuracy of the measurements of the 3D scanner were then evaluated. The obtained data were analyzed using an independent sample t test with the SPSS software. Results: The minimum and maximum measurement differences from three consecutive scans by the 3D scanner were 0.03 mm and 18 mm, respectively. The differences between the measurements by the direct anthropometry method and the 3D scanner were not statistically significant. Therefore, the accuracy of the 3D scanner is acceptable. Conclusion: Future studies will need to focus on the improvement of the scanning speed and the quality of the scanned image. PMID:28912940

  5. 78 FR 38411 - Vogtle Electric Generating Plant, Unit 4; Inspections, Tests, Analyses, and Acceptance Criteria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-26

    ... Plant, Unit 4; Inspections, Tests, Analyses, and Acceptance Criteria AGENCY: Nuclear Regulatory Commission. ACTION: Determination of inspections, tests, analyses, and acceptance criteria completion. SUMMARY: The U.S. Nuclear Regulatory Commission (NRC) staff has determined that the inspections, tests...

  6. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    NASA Technical Reports Server (NTRS)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.

  7. 46 CFR 164.023-11 - Acceptance tests.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 6 2014-10-01 2014-10-01 false Acceptance tests. 164.023-11 Section 164.023-11 Shipping...: SPECIFICATIONS AND APPROVAL MATERIALS Thread for Personal Flotation Devices § 164.023-11 Acceptance tests. (a) Performance testing. Manufacturers shall ensure that the performance tests described in § 164.023-7 (a) or (b...

  8. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  9. In-office distance learning for practitioners.

    PubMed

    Klein, Katherine P; Miller, Kenneth T; Brown, Matthew W; Proffit, William R

    2011-07-01

    Distance learning studies involving orthodontic residents have shown that, although residents prefer being live and interactive with an instructor, they learn almost as much from watching a recorded interactive seminar followed by a live discussion. Our objective in this study was to test the acceptability and perceived effectiveness of using recorded interactive seminars and video conference follow-up discussions for in-office continuing education. Four small groups of practitioners (total, n = 23) were asked to prepare for, view, and then discuss previously recorded interactive seminars on a variety of subjects; a fifth group (5 previous participants) had live discussions of 3 topics without viewing a prerecorded seminar. All discussions were via video conference through typical broadband Internet connections, by using either WebEx (Cisco, Santa Clara, Calif) or Elluminate (Pleasanton, Calif) software. The participants evaluated their experiences by rating presented statements on a 7-point Likert scale and by providing open-ended responses. Twenty-two of the 23 participants agreed (with varying degrees of enthusiasm) that this was an enjoyable, effective way to learn, and that they would like to participate in this type of learning in the future. Everyone agreed that they would recommend this method of learning to others. The age and experience of the participants had only minor effects on their perceptions of acceptance and acceptability. The use of recorded seminars followed by live interaction through videoconferencing can be an acceptable and effective method of providing continuing education to the home or office of orthodontists in private practice, potentially saving them both time and travel expenses. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  10. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also present the challenges encountered and discuss the chosen design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties. PMID:20858241

  11. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    PubMed

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe software used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also present the challenges encountered and discuss the chosen design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities or to resolve implementation difficulties.

  12. Orion Script Generator

    NASA Technical Reports Server (NTRS)

    Dooling, Robert J.

    2012-01-01

    NASA Engineering's Orion Script Generator (OSG) is a program designed to run on Exploration Flight Test One software. The script generator creates a SuperScript file that, when run, accepts the filename for a listing of Compact Unique Identifiers (CUIs). These CUIs correspond to different variables on the Orion spacecraft, such as the temperature of a component X, the active or inactive status of another component Y, and so on. OSG uses a linked database to retrieve the value for each CUI, such as "100 05," "True," and so on. Finally, OSG writes SuperScript code to display each of these variables before outputting the ssi file that allows recipients to view a graphical representation of Orion Flight Test One's status through these variables. This project's main challenge was creating flexible software that accepts and transfers many types of data, from Boolean (true or false) values to "Unsigned Long Long" values (any number from 0 to 18,446,744,073,709,551,615). We also needed to allow bit manipulation for each variable, requiring us to program functions that could convert any of the multiple types of data into binary code. Throughout the project, we explored different methods to optimize the speed of working with the CUI database and long binary numbers. For example, the program handled extended binary numbers much more efficiently when we stored them as collections of Boolean values (true or false representing 1 or 0) instead of as collections of character strings or numbers. We also strove to make OSG as user-friendly and accommodating of different needs as possible: its default behavior is to display a current CUI's maximum value and minimum value with three to five intermediate values in between, all in descending order. Fortunately, users can also add other input on the same lines as each CUI name to request different high values, low values, display options (ascending, sine, and so on), and interval sizes for generating intermediate values. Developing input validation took up quite a bit of time, but OSG's flexibility in the end was worth it.
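
The design choice described above (storing wide integers as collections of Boolean values so individual bits can be inspected and flipped cheaply) can be sketched as follows. The function names here are illustrative stand-ins, not taken from OSG itself:

```python
# Sketch: representing an unsigned integer as a list of booleans (one per
# bit) to allow cheap per-bit manipulation, as described in the abstract.

def to_bits(value, width=64):
    """Convert an unsigned integer to a list of booleans, most significant bit first."""
    if value < 0 or value >= 1 << width:
        raise ValueError("value out of range for width")
    return [bool((value >> i) & 1) for i in range(width - 1, -1, -1)]

def from_bits(bits):
    """Convert a list of booleans back to an unsigned integer."""
    n = 0
    for b in bits:
        n = (n << 1) | int(b)
    return n

bits = to_bits(100, width=8)   # a small status value: 0b01100100
bits[0] = True                 # flip the most significant bit
print(from_bits(bits))         # 100 | 0b10000000 = 228
```

A 64-bit "Unsigned Long Long" round-trips the same way, with `width=64` covering the full 0 to 18,446,744,073,709,551,615 range.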

  13. WE-G-204-07: Automated Characterization of Perceptual Quality of Clinical Chest Radiographs: Improvements in Lung, Spine, and Hardware Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, J; Zhang, L; Samei, E

    Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities, which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel² ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
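
A gradient-based thresholding detector of the general kind described above can be sketched as follows: flag an ROI as containing hardware when the fraction of strong-gradient pixels exceeds a threshold. The thresholds here are invented for illustration and are not the optimized values from the study:

```python
# Sketch: gradient-based ROI hardware detection. Metal hardware produces
# abrupt intensity edges, so ROIs with many strong-gradient pixels are
# flagged. Threshold values are illustrative, not from the study.

def gradient_magnitude(image):
    """Approximate gradient magnitude with forward differences (image is a 2D list)."""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]
            gy = image[y + 1][x] - image[y][x]
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
    return grad

def has_hardware(roi, grad_thresh=50.0, frac_thresh=0.05):
    """True if the fraction of strong-gradient pixels exceeds frac_thresh."""
    grad = gradient_magnitude(roi)
    strong = sum(1 for row in grad for g in row if g > grad_thresh)
    total = len(grad) * len(grad[0])
    return strong / total > frac_thresh

# A flat ROI (soft tissue only) vs. one with an abrupt metal edge
flat = [[100] * 8 for _ in range(8)]
edge = [[100] * 4 + [400] * 4 for _ in range(8)]
print(has_hardware(flat), has_hardware(edge))  # False True
```

In practice the two thresholds would be tuned on labeled ROIs, as the study's training/testing paradigm with 48 images does.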

  14. 15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...

  15. 15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...

  16. Mars Science Laboratory Boot Robustness Testing

    NASA Technical Reports Server (NTRS)

    Banazadeh, Payam; Lam, Danny

    2011-01-01

    Mars Science Laboratory (MSL) is one of the most complex spacecraft ever built. Because of this complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of the flight software's responsibilities is to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success, since any misbehavior of different devices must be handled by the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that the flight software can handle any situation after booting up. The tests include initializing different devices on the spacecraft to different configurations and validating, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.
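
An exhaustive boot-robustness harness of the kind this abstract describes can be sketched as below: enumerate device start configurations, run a (simulated) boot, and validate the post-boot state of every device. The device names, states, and the `boot_and_report` function are hypothetical stand-ins for the real flight software simulation:

```python
# Sketch: exhaustive boot-robustness testing. Every combination of device
# start states is booted under simulation and the resulting states are
# checked against expectations. All names here are illustrative.

from itertools import product

DEVICES = ["imu", "radio", "heater"]
START_STATES = ["off", "on", "faulted"]

def boot_and_report(config):
    """Stand-in for the simulated flight-software boot: FSW is expected to
    drive every device to 'on', except faulted devices, which it must safe."""
    return {dev: ("safed" if state == "faulted" else "on")
            for dev, state in config.items()}

def expected_state(start_state):
    return "safed" if start_state == "faulted" else "on"

def run_all_cases():
    """Boot every start configuration and collect any mismatched device states."""
    failures = []
    for states in product(START_STATES, repeat=len(DEVICES)):
        config = dict(zip(DEVICES, states))
        result = boot_and_report(config)
        for dev in DEVICES:
            if result[dev] != expected_state(config[dev]):
                failures.append((config, dev))
    return failures

print(len(list(product(START_STATES, repeat=len(DEVICES)))))  # 27 configurations
print(run_all_cases())  # [] when every scenario is handled correctly
```

The real toolkit drives many more devices and states, so the configuration space grows combinatorially; the value of automation is precisely that every combination is exercised without manual setup.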

  17. Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions

    DTIC Science & Technology

    2012-07-01

    Capability Maturity Model IntegrationSM (CMMI®) [Davis 2009]. SM Team Software Process, TSP, and Capability Maturity Model Integration are service...STP Software Test Plan TEP Test and Evaluation Plan TSP Team Software Process V & V verification and validation CMU/SEI-2012-TN-016 | 47...Supporting the Use of CERT® Secure Coding Standards in DoD Acquisitions Tim Morrow (Software Engineering Institute) Robert Seacord (Software

  18. Tools for Embedded Computing Systems Software

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talks and the key figures from each workshop presentation, together with chairmen's summaries, is presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  19. Firing Room Remote Application Software Development

    NASA Technical Reports Server (NTRS)

    Liu, Kan

    2015-01-01

    The Engineering and Technology Directorate (NE) at National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) is designing a new command and control system for the checkout and launch of the Space Launch System (SLS) and future rockets. The purpose of the semester-long internship as a remote application software developer included the design, development, integration, and verification of the software and hardware in the firing rooms, in particular the Mobile Launcher (ML) Launch Accessories (LACC) subsystem. In addition, a software test verification procedure document was created to verify and check out LACC software for Launch Equipment Test Facility (LETF) testing.

  20. [Reporting echocardiography exams with the G8-Cardio ANMCO software].

    PubMed

    Badano, L P; Marchesini, A; Pizzuti, A; Mantero, A; Cianflone, D; Neri, E; Caira, P; Tubaro, M

    2001-03-01

    The availability of a common computerized program for echocardiographic study archiving and reporting at the national and/or international level could make it possible to standardize the echo reports of different echocardiographic laboratories, to use the wealth of data obtainable with echocardiography, and to exploit its widespread territorial distribution, with the aim of collecting echocardiographic data in a standard format for epidemiological, scientific and administrative purposes. To develop such software, an ad hoc joint National Association of Hospital Cardiologists and Italian Society of Echocardiography task force worked in conjunction with the Italian branch of Agilent Technologies to standardize the phraseology of accepted echocardiographic terms and of the quantitative parameters derived from transthoracic and transesophageal echocardiographic examination at rest as well as during exercise and pharmacological stress, and to develop dedicated software. This echocardiographic study archiving and reporting program is part of the larger G8-Cardio ANMCO software developed to computerize the entire cardiological chart. The software has been developed by Agilent Technologies to provide a fast, easy-to-access, and easy-to-use report generator for the non-computer specialist, using the DBMS Oracle 7.3 database and Power Builder 5.0 to develop a user-friendly interface. The number of qualitative and quantitative variables contained in the program is 733 for echocardiography at rest, while for stress echo it depends on the stressor and on the length of the examination (dipyridamole 214-384, dobutamine 236-406, exercise 198-392). The program was tested and refined in our laboratory between November 1999 and May 2000. During this period, 291 resting and 56 stress echocardiographic studies were reported and recorded in a database. On average, each resting echocardiographic study lasted 10 +/- 4 (range 5-17) min and was recorded using 50 +/- 11 (range 33-67) variables and 41,566 bytes of hard-disk space. Stress echocardiographic studies each lasted 7 +/- 5 (range 5-21) min and were recorded using 143 +/- 74 (range 38-194) variables and 38,531 bytes of hard-disk space. To our knowledge, this software represents the first common computerized program for echo archiving and reporting developed at the national level.
