NASA Astrophysics Data System (ADS)
Shao, Hongbing
Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of test oracles. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that simulates light scattering by scatterers of various types, and its testing suffers from the test oracle problem. In this thesis work, I established a framework for testing scientific software systems and evaluated it using ADDA as a case study. First, I used the CMMIE code as a pseudo-oracle to test ADDA's simulation of light scattering by a homogeneous sphere scatterer; comparable results were obtained between ADDA and the CMMIE code, validating ADDA for homogeneous sphere scatterers. Next, I compared ADDA's simulation against experimentally measured light scattering for a homogeneous sphere; the simulation was comparable to the measured result, further validating the use of ADDA for simulating light scattering by sphere scatterers. I then used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and compositions (homogeneous and non-homogeneous); ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework combining pseudo-oracles, experimental results, and metamorphic testing techniques for scientific software systems that suffer from the test oracle problem. Each of these techniques is necessary and contributes to the testing of the software under test.
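As an illustration of the metamorphic testing idea described above, the sketch below encodes one plausible metamorphic relation for a scattering code: rotating a homogeneous sphere must not change the computed cross-section, even though no oracle gives the correct absolute value. The `run_simulation` helper is hypothetical, standing in for an actual ADDA invocation.

```python
import math

def run_simulation(shape, orientation_deg):
    """Hypothetical stand-in for an ADDA run; returns a scattering cross-section.
    A real harness would build an input file, invoke the code, and parse output."""
    return 1.2345  # constant stand-in value so the sketch runs as-is

def test_sphere_orientation_invariance():
    # Metamorphic relation: a homogeneous sphere is rotationally symmetric, so
    # rotating it must leave the cross-section unchanged (within tolerance),
    # even though the "correct" absolute value is unknown.
    baseline = run_simulation("sphere", orientation_deg=0.0)
    for angle in (30.0, 90.0, 180.0):
        rotated = run_simulation("sphere", orientation_deg=angle)
        assert math.isclose(baseline, rotated, rel_tol=1e-6)

test_sphere_orientation_invariance()
```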
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.
2016-01-01
Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
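A minimal example of the kind of Python-based testing the talk refers to, using the widely available pytest package; the `flux_to_mag` function is invented for illustration.

```python
# test_photometry.py
import math
import pytest

def flux_to_mag(flux, zero_point=25.0):
    """Toy function of the sort worth testing in scientific code."""
    if flux <= 0:
        raise ValueError("flux must be positive")
    return zero_point - 2.5 * math.log10(flux)

def test_known_value():
    assert flux_to_mag(100.0) == pytest.approx(20.0)  # 25 - 2.5*log10(100)

def test_rejects_nonpositive_flux():
    with pytest.raises(ValueError):
        flux_to_mag(-1.0)
```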
Statistics of software vulnerability detection in certification testing
NASA Astrophysics Data System (ADS)
Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.
2018-05-01
The paper discusses practical aspects of introducing software vulnerability detection methods into the day-to-day activities of an accredited testing laboratory. It presents validation results for these methods from studies of open-source software and of software undergoing certification testing against information security requirements, including software for communication networks. Results of the study are given showing the distribution of identified vulnerabilities by attack type, country of origin, programming language used in development, detection method, and other factors. The experience of foreign information security certification schemes with detecting vulnerabilities in certified software is analyzed. The main conclusion of the study is the need to adopt secure software development practices throughout the development life cycle. Conclusions and recommendations are given for testing laboratories on implementing vulnerability analysis methods.
Smith, M.; Murphy, D.; Laxmisan, A.; Sittig, D.; Reis, B.; Esquivel, A.; Singh, H.
2013-01-01
Summary Background Abnormal test results do not always receive timely follow-up, even when providers are notified through electronic health record (EHR)-based alerts. High workload, alert fatigue, and other demands on attention disrupt a provider’s prospective memory for tasks required to initiate follow-up. Thus, EHR-based tracking and reminding functionalities are needed to improve follow-up. Objectives The purpose of this study was to develop a decision-support software prototype enabling individual and system-wide tracking of abnormal test result alerts lacking follow-up, and to conduct formative evaluations, including usability testing. Methods We developed a working prototype software system, the Alert Watch And Response Engine (AWARE), to detect abnormal test result alerts lacking documented follow-up, and to present context-specific reminders to providers. Development and testing took place within the VA’s EHR and focused on four cancer-related abnormal test results. Design concepts emphasized mitigating the effects of high workload and alert fatigue while being minimally intrusive. We conducted a multifaceted formative evaluation of the software, addressing fit within the larger socio-technical system. Evaluations included usability testing with the prototype and interview questions about organizational and workflow factors. Participants included 23 physicians, 9 clinical information technology specialists, and 8 quality/safety managers. Results Evaluation results indicated that our software prototype fit within the technical environment and clinical workflow, and physicians were able to use it successfully. Quality/safety managers reported that the tool would be useful in future quality assurance activities to detect patients who lack documented follow-up. Additionally, we successfully installed the software on the local facility’s “test” EHR system, thus demonstrating technical compatibility. Conclusion To address the factors involved in missed test results, we developed a software prototype to account for technical, usability, organizational, and workflow needs. Our evaluation has shown the feasibility of the prototype as a means of facilitating better follow-up for cancer-related abnormal test results. PMID:24155789
Testing Scientific Software: A Systematic Literature Review
Kanewala, Upulee; Bieman, James M.
2014-01-01
Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798
Field Test of Route Planning Software for Lunar Polar Missions
NASA Astrophysics Data System (ADS)
Horchler, A. D.; Cunningham, C.; Jones, H. L.; Arnett, D.; Fang, E.; Amoroso, E.; Otten, N.; Kitchell, F.; Holst, I.; Rock, G.; Whittaker, W.
2017-10-01
A novel field test paradigm has been developed to demonstrate and validate route planning software in the stark low-angled light and sweeping shadows a rover would experience at the poles of the Moon. Software, ConOps, and test results are presented.
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, C.; Crook, J.
1998-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state of the art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software re-verification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.
NASA Astrophysics Data System (ADS)
Hart, D. M.; Merchant, B. J.; Abbott, R. E.
2012-12-01
The Component Evaluation project at Sandia National Laboratories supports the Ground-based Nuclear Explosion Monitoring program by performing testing and evaluation of the components that are used in seismic and infrasound monitoring systems. In order to perform this work, Component Evaluation maintains a testing facility called the FACT (Facility for Acceptance, Calibration, and Testing) site, a variety of test bed equipment, and a suite of software tools for analyzing test data. Recently, Component Evaluation has successfully integrated several improvements to its software analysis tools and test bed equipment that have substantially improved our ability to test and evaluate components. The software tool used to analyze test data is called TALENT: Test and AnaLysis EvaluatioN Tool. TALENT is designed to be a single, standard interface to all test configuration, metadata, parameters, waveforms, and results that are generated in the course of testing monitoring systems. It provides traceability by capturing in a relational database everything about a test that is required to reproduce its results. TALENT provides a simple yet powerful user interface to quickly acquire, process, and analyze waveform test data. The software tool has also been expanded recently to handle sensors whose output is proportional to rotation angle or rotation rate. As an example of this new processing capability, we show results from testing the new ATA ARS-16 rotational seismometer. The test data were collected at the USGS ASL. Four datasets were processed: 1) 1 Hz with increasing amplitude, 2) 4 Hz with increasing amplitude, 3) 16 Hz with increasing amplitude, and 4) twenty-six discrete frequencies between 0.353 Hz and 64 Hz. The results are compared to manufacturer-supplied data sheets.
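As a hedged sketch of the kind of waveform processing such a tool performs, the snippet below estimates a sensor's response amplitude at a known drive frequency by least-squares sine fitting; the data are synthetic, not ARS-16 test records.

```python
import numpy as np

fs, f0 = 1000.0, 4.0                     # sample rate and drive frequency (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
signal = 0.8 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.01 * rng.normal(size=t.size)

# Solve signal ~ a*sin(2*pi*f0*t) + b*cos(2*pi*f0*t) + c by least squares.
A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                     np.cos(2 * np.pi * f0 * t),
                     np.ones_like(t)])
a, b, c = np.linalg.lstsq(A, signal, rcond=None)[0]
print(f"estimated amplitude at {f0} Hz: {np.hypot(a, b):.3f} (true value 0.8)")
```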
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
NASA Astrophysics Data System (ADS)
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Deciding when to end software testing is a crucial aspect of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization wants to achieve the maximum possible benefit from software testing with minimum resources, so testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
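For context, here is a minimal sketch of the classic Goel-Okumoto Non-Homogeneous Poisson Process mean value function, m(t) = a(1 - e^(-bt)), a standard starting point for "when to stop testing" decisions; the paper's specific cumulative priority model is not reproduced here, and the parameter values are illustrative.

```python
import math

def expected_faults(t, a=120.0, b=0.05):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(t, a=120.0, b=0.05):
    """Expected faults still undetected after t units of testing."""
    return a - expected_faults(t, a, b)

for week in (10, 30, 60):
    print(f"week {week}: ~{remaining_faults(week):.1f} faults expected to remain")
```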
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved, only approached to varying degrees. With the emphasis on building low-cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test, and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses, and also makes use of reusable simulator software in the mission-specific versions of the TASS. Very little new software needs to be developed, mainly mission-specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
IHE cross-enterprise document sharing for imaging: interoperability testing software
Noumeir, Rita; Renaud, Bérubé
2010-09-21
Background With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross-Enterprise Document Sharing for Imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the chosen design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
NASA Astrophysics Data System (ADS)
Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.
2016-07-01
The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the preexisting in-house solution. A brief introduction to software quality practices is given, along with a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adapt the new system are described, together with how they were applied to the current software and the results obtained. An overview of how the new system may be used in future projects is also presented.
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines), generate abstract test cases, and then convert them into concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
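A minimal sketch of the core generation step described above: breadth-first traversal of a state machine yields abstract test cases (event sequences) achieving transition coverage, which a concrete adapter would later turn into input/expected-output pairs. The mode model below is invented, not taken from the study.

```python
from collections import deque

# Toy on-board mode model: (state, event) -> next state.
TRANSITIONS = {
    ("SAFE", "arm"): "STANDBY",
    ("STANDBY", "enable"): "OPERATIONAL",
    ("OPERATIONAL", "fault"): "SAFE",
    ("STANDBY", "fault"): "SAFE",
}

def generate_tests(initial="SAFE"):
    """Return one event sequence per transition (transition coverage)."""
    tests, seen = [], set()
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        for (src, event), dst in TRANSITIONS.items():
            if src == state and (src, event) not in seen:
                seen.add((src, event))
                tests.append(path + [event])      # one abstract test case
                queue.append((dst, path + [event]))
    return tests

print(generate_tests())  # each event list is one abstract test case
```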
NASA Technical Reports Server (NTRS)
Soderstrom, Tomas J.; Krall, Laura A.; Hope, Sharon A.; Zupke, Brian S.
1994-01-01
A Telos study of 40 recent subsystem deliveries into the DSN at JPL found software interface testing to be the single most expensive and error-prone activity, and the study team suggested creating an automated software interface test tool. The resulting Software Interface Verifier (SIV), which was funded by NASA/JPL and created by Telos, employed 92 percent software reuse to quickly create an initial version which incorporated early user feedback. SIV is now successfully used by developers for interface prototyping and unit testing, by test engineers for formal testing, and by end users for non-intrusive data flow tests in the operational environment. Metrics, including cost, are included. Lessons learned include the need for early user training. SIV is ported to many platforms and can be successfully used or tailored by other NASA groups.
Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)
1994-03-25
designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a...activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and...systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test
Tools for Embedded Computing Systems Software
NASA Technical Reports Server (NTRS)
1978-01-01
A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of each talk and the key figures of each workshop presentation, together with the chairmen's summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, “Fast multi-body... Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to
Using Automation to Improve the Flight Software Testing Process
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)
2001-01-01
One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
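An illustrative sketch (not the MAP toolchain itself) of the automated verification step described above: resample high-fidelity simulation output onto the flight-software telemetry timeline, auto-generate a comparison plot, and return a pass/fail flag for the report.

```python
import numpy as np
import matplotlib.pyplot as plt

def compare_channel(t_fsw, y_fsw, t_sim, y_sim, name, tol):
    """Compare one telemetry channel against simulation; save a plot."""
    y_sim_aligned = np.interp(t_fsw, t_sim, y_sim)  # resample sim onto FSW times
    err = y_fsw - y_sim_aligned
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    ax1.plot(t_fsw, y_fsw, label="flight software")
    ax1.plot(t_fsw, y_sim_aligned, "--", label="HiFi simulation")
    ax1.set_ylabel(name)
    ax1.legend()
    ax2.plot(t_fsw, err)
    ax2.set_ylabel("difference")
    ax2.set_xlabel("time (s)")
    fig.savefig(f"compare_{name}.png")
    return bool(np.max(np.abs(err)) < tol)          # pass/fail for the report

t = np.linspace(0, 10, 501)
print(compare_channel(t, np.sin(t), t, np.sin(t) + 1e-4, "pitch_rate", tol=1e-3))
```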
NASA Technical Reports Server (NTRS)
Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, PeggL.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.
2012-01-01
The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which provides the transparency of the software application layers to the underlying hardware regardless of test facility location, and a flexible and easily accessible database. This presentation addresses system technical design, issues encountered, and the status of Stennis' development and deployment.
NASA Astrophysics Data System (ADS)
1981-03-01
Support documentation for a second generation heliostat project is presented. Flowcharts of control software are included. Numerical and graphic test results are provided. Project management information is also provided.
The NOvA software testing framework
NASA Astrophysics Data System (ADS)
Tamsett, M.; C Group
2015-12-01
The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. NOvA has already produced more than one million Monte Carlo and detector generated files amounting to more than 1 PB in size. This data is divided between a number of parallel streams such as far and near detector beam spills, cosmic ray backgrounds, a number of data-driven triggers, and over 20 different Monte Carlo configurations. Each of these data streams must be processed through the appropriate steps of the rapidly evolving, multi-tiered, interdependent NOvA software framework. In total there are more than 12 individual software tiers, each of which performs a different function and can be configured differently depending on the input stream. In order to regularly test and validate that all of these software stages are working correctly, NOvA has designed a powerful, modular testing framework that enables detailed validation and benchmarking to be performed in a fast, efficient and accessible way with minimal expert knowledge. The core of this system is a novel series of Python modules which wrap, monitor and handle the underlying C++ software framework and then report the results to a slick front-end web-based interface. This interface utilises modern, cross-platform visualisation libraries to render the test results in a meaningful way. They are fast and flexible, allowing for the easy addition of new tests and datasets. In total, upwards of 14 individual streams are regularly tested, amounting to over 70 individual software processes and producing over 25 GB of output files. The rigour enforced through this flexible testing framework enables NOvA to rapidly verify configurations, results and software and thus ensure that data is available for physics analysis in a timely and robust manner.
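A simplified sketch of the wrap-and-monitor pattern described above: a Python function runs one tier of a C++ processing chain as a subprocess, times it, captures its log, and emits a machine-readable record that a web front-end could render. The command and tier name are placeholders, not the NOvA framework's interface.

```python
import json
import subprocess
import time

def run_tier(name, cmd, timeout_s=3600):
    """Run one software tier as a subprocess and summarise the outcome."""
    start = time.time()
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        status = "ok" if proc.returncode == 0 else "failed"
        log_tail = proc.stdout[-2000:]           # keep only the end of the log
    except subprocess.TimeoutExpired:
        status, log_tail = "timeout", ""
    return {"tier": name, "status": status,
            "wall_seconds": round(time.time() - start, 1), "log_tail": log_tail}

result = run_tier("reco", ["echo", "pretend this is the C++ reconstruction job"])
print(json.dumps(result, indent=2))  # a front-end would render these records
```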
Testing of Safety-Critical Software Embedded in an Artificial Heart
NASA Astrophysics Data System (ADS)
Cha, Sungdeok; Jeong, Sehun; Yoo, Junbeom; Kim, Young-Gab
Software is being used more and more frequently to control medical devices such as artificial hearts or robotic surgery systems. While many of the software safety issues in such systems are similar to those in other safety-critical systems (e.g., nuclear power plants), domain-specific properties may warrant the development of customized techniques to demonstrate the fitness of the system for use on patients. In this paper, we report the results of a preliminary analysis of the software controlling a Hybrid Ventricular Assist Device (H-VAD) developed by the Korea Artificial Organ Centre (KAOC). It is a state-of-the-art artificial heart which has completed its animal testing phase. We performed software testing in in-vitro experiments and animal experiments. An abnormal behaviour, never detected during extensive in-vitro analysis and animal testing, was found.
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality, for use in developing credible methods for predicting the reliability of software used in life-critical applications, is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications, accumulating a sufficient sample size for error rate estimation. The data collected confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that a program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
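To make the log-linear pattern concrete, the sketch below fits the logarithm of per-fault error rates against fault index with a straight line and reports the deviation; the rates are synthetic, not the study's data.

```python
import numpy as np

# Synthetic per-test error rates observed after removing successive faults.
rates = np.array([0.080, 0.031, 0.013, 0.0052, 0.0021])
i = np.arange(1, len(rates) + 1)

# Log-linearity means log(rate_i) lies on a line in the fault index i.
slope, intercept = np.polyfit(i, np.log(rates), 1)
pred = np.exp(intercept + slope * i)

print(f"decay factor per fault removed: {np.exp(slope):.2f}")
print(f"max relative deviation from log-linear fit: "
      f"{np.max(np.abs(pred - rates) / rates):.2%}")
```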
Exploring the Use of a Test Automation Framework
NASA Technical Reports Server (NTRS)
Cervantes, Alex
2009-01-01
It is known that software testers, more often than not, lack the time needed to fully test the delivered software product within the time period allotted to them. When problems in the implementation phase of a development project occur, it normally causes the software delivery date to slide. As a result, testers either need to work longer hours, or supplementary resources need to be added to the test team in order to meet aggressive test deadlines. One solution to this problem is to provide testers with a test automation framework to facilitate the development of automated test solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae
A reliability test was performed on the software of the Shutdown System (SDS) computers for Wolsong Nuclear Power Plant Units 2, 3 and 4. The test applied random test profiles to the SDS computers and compared the outputs with the predicted results generated by the oracle. Test software was written to execute the tests automatically, and the random test profiles were generated using an analysis code. 11 refs., 1 fig.
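A schematic of the oracle-based random testing loop described above, with all names invented (toy trip logic rather than the actual SDS software): generate random input profiles, run the target and an independent oracle, and count disagreements.

```python
import random

def target_trip_logic(pressure, temperature):
    """Software under test (toy): trip on high pressure or temperature."""
    return pressure > 16.2 or temperature > 310.0

def oracle_trip_logic(pressure, temperature):
    """Independently derived predictor; equivalent here by construction."""
    return not (pressure <= 16.2 and temperature <= 310.0)

random.seed(42)
failures = 0
for _ in range(10_000):
    p = random.uniform(10.0, 20.0)    # random test profile
    t = random.uniform(280.0, 330.0)
    if target_trip_logic(p, t) != oracle_trip_logic(p, t):
        failures += 1
print(f"{failures} disagreements in 10000 random profiles")
```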
SLS Flight Software Testing: Using a Modified Agile Software Testing Approach
NASA Technical Reports Server (NTRS)
Bolton, Albanie T.
2016-01-01
NASA's Space Launch System (SLS) is an advanced launch vehicle for a new era of exploration beyond Earth orbit (BEO). The world's most powerful rocket, SLS will launch crews of up to four astronauts in the agency's Orion spacecraft on missions to explore multiple deep-space destinations. Boeing is developing the SLS core stage, including the avionics that will control the vehicle during flight. The core stage will be built at NASA's Michoud Assembly Facility (MAF) in New Orleans, LA using state-of-the-art manufacturing equipment. At the same time, the rocket's avionics computer software is being developed at Marshall Space Flight Center in Huntsville, AL. At Marshall, the Flight and Ground Software division provides comprehensive engineering expertise for the development of flight and ground software. Within that division, the Software Systems Engineering Branch's test and verification (T&V) team uses an agile approach in testing and verification of software. The agile software test method opens the door for regular short-sprint release cycles. The basic premise of agile software development and testing is that software is developed iteratively and incrementally, with requirements and solutions evolving through collaboration between cross-functional teams. Incremental testing and development allow each release to add features and value. This value can be seen throughout the T&V team processes, which are documented in various work instructions within the branch. The T&V team produces procedural test results at a higher rate, resolves issues found in software with designers at an earlier stage rather than at a later release, and team members gain increased knowledge of the system architecture by interfacing with designers. SLS Flight Software teams want to continue uncovering better ways of developing software in an efficient and project-beneficial manner. Through agile testing, there has been increased value through individuals and interactions over processes and tools, improved customer collaboration, and improved responsiveness to change through controlled planning. The presentation will describe the agile testing methodology as practiced by the SLS FSW Test and Verification team at Marshall Space Flight Center.
Writing executable assertions to test flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
An executable assertion is a logical statement about the variables or a block of code. If there is no error during execution, the assertion statement results in a true value. Executable assertions can be used for dynamic testing of software: they can be employed for validation during the design phase, and for exception handling and error detection during the operation phase. The present investigation is concerned with the problem of writing executable assertions, taking into account the use of assertions for testing flight software. The digital flight control system and the flight control software are discussed. The considered system provides autopilot and flight director modes of operation for automatic and manual control of the aircraft during all phases of flight. Attention is given to techniques for writing and using assertions to test flight software, an experimental setup for testing flight software, and language features to support efficient use of assertions.
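A small Python illustration of executable assertions as defined above: pre- and postconditions checked during execution, with a false assertion flagging an error at the point where it arises. The limits are invented, not taken from any real flight control system.

```python
def update_pitch_command(pitch_deg, rate_dps, dt_s):
    # Preconditions: executable assertions about the inputs.
    assert -90.0 <= pitch_deg <= 90.0, "pitch out of physical range"
    assert dt_s > 0.0, "non-positive timestep"

    new_pitch = pitch_deg + rate_dps * dt_s

    # Postcondition: the commanded value must remain within actuator limits;
    # a False assertion detects the error at the point where it arises.
    assert -25.0 <= new_pitch <= 25.0, "commanded pitch exceeds limits"
    return new_pitch

print(update_pitch_command(2.0, 0.5, 0.1))
```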
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines were reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors, of which 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would have been successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented, showing that the number of programming errors at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for ways of improvement, and a basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
NASA Technical Reports Server (NTRS)
Lange, R. Connor
2012-01-01
Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common: all of their components had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively, validating SMAP FSW modules more quickly and testing them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
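A hedged sketch of the log-distillation step mentioned above: keep only the lines that matter for validation. The log format and requirement markers are invented, not the SMAP FSW format.

```python
import re

# Lines worth keeping: errors, warnings, and per-requirement pass/fail marks.
KEEP = re.compile(r"(ERROR|WARN|REQ-\d+ (PASS|FAIL))")

def distill(log_lines):
    """Filter a raw test log down to validation-relevant lines."""
    return [line for line in log_lines if KEEP.search(line)]

raw = [
    "10:00:01 INFO  heartbeat ok",
    "10:00:02 REQ-101 PASS mode transition accepted",
    "10:00:03 ERROR unexpected telemetry gap",
]
print("\n".join(distill(raw)))
```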
Software for Automated Testing of Mission-Control Displays
NASA Technical Reports Server (NTRS)
OHagan, Brian
2004-01-01
MCC Display Cert Tool is a set of software tools for automated testing of computer-terminal displays in spacecraft mission-control centers, including those of the Space Shuttle and the International Space Station. This software makes it possible to perform tests that are more thorough, take less time, and are less likely to lead to erroneous results, relative to tests performed manually. The software enables comparison of two sets of displays to report command and telemetry differences, generates test scripts for verifying telemetry and commands, and generates a documentary record containing display information, including version and corrective-maintenance data. At the time of reporting the information for this article, work was continuing to add a capability for validation of display parameters against a reconfiguration file.
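An illustrative sketch (not the MCC tool itself) of the display-comparison step: diff two display definitions, keyed by invented telemetry identifiers, and report additions, removals, and changes.

```python
# Two versions of a display definition: label -> telemetry identifier (invented).
old = {"MET_CLOCK": "V90X1234A", "CABIN_PRESS": "V61P2601A"}
new = {"MET_CLOCK": "V90X1234A", "CABIN_PRESS": "V61P2610A", "O2_FLOW": "V61R2705A"}

added = sorted(new.keys() - old.keys())
removed = sorted(old.keys() - new.keys())
changed = sorted(k for k in old.keys() & new.keys() if old[k] != new[k])

print("added:", added)       # ['O2_FLOW']
print("removed:", removed)   # []
print("changed:", changed)   # ['CABIN_PRESS']
```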
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction
Kumudha, P.; Venkatesan, R.
2016-01-01
Effective prediction of defect-prone software modules enables software developers to allocate resources efficiently and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the software development process: detecting defects as early as possible saves time and budget and allows a product without defects to be delivered to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product. In order to improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
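A minimal sketch of an RBF-network-style cost-sensitive classifier: k-means picks the RBF centres (standing in for the paper's ADBBO optimization, which is not reproduced here) and a class-weighted logistic layer maps RBF features to labels; the data are synthetic, not the NASA datasets.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centres, sigma):
    """Gaussian RBF activations for each sample/centre pair."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                   # stand-in software metrics
y = (X[:, 0] + X[:, 1] ** 2 > 1.5).astype(int)  # stand-in defect labels

centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centres, sigma=1.0)

# class_weight penalises missing the rarer defect-prone class more heavily.
clf = LogisticRegression(class_weight={0: 1.0, 1: 5.0}, max_iter=1000).fit(Phi, y)
print(f"training accuracy: {clf.score(Phi, y):.2f}")
```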
Top Down Implementation Plan for system performance test software
NASA Technical Reports Server (NTRS)
Jacobson, G. N.; Spinak, A.
1982-01-01
The top-down implementation plan used for the development of system performance test software during the Mark IV-A era is described. The plan is based upon identification of the hierarchical relationships among the individual elements of the software design, development of a sequence of functionally oriented, demonstrable steps, allocation of subroutines to the specific step where they are first required, and objective status reporting. The results were determination of milestones, improved managerial visibility, better project control, and successful software development.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
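A toy illustration of the mutation analysis idea referenced above: apply a small syntactic mutation to the code under test and check whether the test suite "kills" the mutant. Everything below is invented for demonstration.

```python
import textwrap

ORIGINAL = textwrap.dedent("""
    def in_landing_window(altitude_m):
        return altitude_m < 100.0
""")

def run_tests(namespace):
    """A tiny test suite, including the boundary case at 100.0."""
    f = namespace["in_landing_window"]
    return f(50.0) is True and f(150.0) is False and f(100.0) is False

def killed(source):
    ns = {}
    exec(source, ns)  # fine for a toy demo; never exec untrusted code
    return not run_tests(ns)

# Relational-operator-replacement mutant: "<" becomes "<=".
mutant = ORIGINAL.replace("<", "<=")
print("original passes:", not killed(ORIGINAL))
print("mutant killed:  ", killed(mutant))
```

A suite that kills such mutants is stronger at the boundary cases where real faults hide, which is the sense in which mutation-based testing can enhance reliability.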
Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei
2012-12-01
The aim of this work was to develop software that standardizes optical density, normalizing the procedures and results of standardization, in order to effectively solve several problems that arise during the standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during standardization. Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (WINDOWS XP/WIN 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational effects of I-STOD V1.0. The results indicated that the software successfully overcame several problems, including reliability of the standard curve, the applicable scope of samples, and determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on the I-STOD method; it can be easily operated and can effectively standardize the testing results of indirect ELISA.
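The actual I-STOD algorithm is not reproduced here, but the generic shape of OD standardization can be sketched as follows: fit a standard curve from reference sera and express each sample's OD relative to that curve, flagging ODs outside the curve's range (one of the problems the paper addresses). All values are synthetic.

```python
import numpy as np

std_conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # reference units
std_od = np.array([0.12, 0.23, 0.45, 0.80, 1.30]) # measured optical densities

# Simple standard curve: OD linear in log2(concentration).
slope, intercept = np.polyfit(np.log2(std_conc), std_od, 1)

def standardize(od):
    """Invert the standard curve; out-of-range ODs need re-dilution."""
    if not (std_od.min() <= od <= std_od.max()):
        raise ValueError("OD outside standard curve; re-test at another dilution")
    return 2 ** ((od - intercept) / slope)

print(f"sample OD 0.60 -> {standardize(0.60):.2f} standard units")
```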
A methodology for producing reliable software, volume 1
NASA Technical Reports Server (NTRS)
Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.
1976-01-01
An investigation into the areas having an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of the investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.
2010-01-01
[Flattened table fragment: finding counts for Symantec Server Antivirus, Service Passwords, Banner Needs, and Unauthorized Software; column headings not recoverable.] ... software needed to manage and operate systems in the testing rooms. Systems in the testing rooms were made to resemble shipboard Navy systems as closely ... (i.e., workstation and server software, routing and switching, operating systems, and so forth). This training was also designed to provide ...
Hardware and Software Integration to Support Real-Time Space Link Emulation
NASA Technical Reports Server (NTRS)
Murawski, Robert; Bhasin, Kul; Bittner, David; Sweet, Aaron; Coulter, Rachel; Schwab, Devin
2012-01-01
Prior to operational use, communications hardware and software must be thoroughly tested and verified. In space-link communications, field testing equipment can be prohibitively expensive and cannot test to non-ideal situations. In this paper, we show how software and hardware emulation tools can be used to accurately model the characteristics of a satellite communication channel in a lab environment. We describe some of the challenges associated with developing an emulation lab and present results to demonstrate the channel modeling. We then show how network emulation software can be used to extend a hardware emulation model without requiring additional network and channel simulation hardware.
The use of applied software for the professional training of students studying humanities
NASA Astrophysics Data System (ADS)
Sadchikova, A. S.; Rodin, M. M.
2017-01-01
Research practice is an integral part of the training of students studying humanities, so the training process should incorporate modern information technologies. This paper examines the most popular applied software products used for data processing in the social sciences. For testing purposes we selected the most commonly preferred professional packages: MS Excel, IBM SPSS Statistics, STATISTICA, and STADIA. The article also contains testing results for the specialised software Prikladnoy Sotsiolog, which is applicable to the preparation stage of research. The specialised software was tested during one term in groups of students studying humanities.
Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...
Ffuzz: Towards full system high coverage fuzz testing on binary executables.
Zhang, Bin; Ye, Jiaxi; Bi, Xing; Feng, Chao; Tang, Chaojing
2018-01-01
Bugs and vulnerabilities in binary executables threaten cyber security. Current discovery methods, like fuzz testing, symbolic execution and manual analysis, each have advantages and disadvantages when exercising the deeper code areas in binary executables to find more bugs. In this paper, we designed and implemented a hybrid automatic bug-finding tool, Ffuzz, built on top of fuzz testing and selective symbolic execution. It targets full system software stack testing, including both user space and kernel space. Combining these two mainstream techniques enables us to achieve higher coverage and to avoid the points where fuzz testing and symbolic execution each get stuck. We also proposed two key optimizations to improve the efficiency of full system testing. We evaluated the efficiency and effectiveness of our method on real-world binary software and on 844 memory corruption vulnerable programs in the Juliet test suite. The results show that Ffuzz can discover software bugs in the full system software stack effectively and efficiently.
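The paper's pipeline is not reproduced in the abstract; as a minimal sketch of only the coverage-guided mutation loop that hybrid fuzzers like Ffuzz build on, the toy target, mutation operators, and coverage bookkeeping below are all invented for illustration, and the selective symbolic execution half is omitted entirely.

```python
import random

def target(data: bytes) -> set:
    """Toy program under test; returns the set of branch IDs it covered."""
    cov = set()
    if len(data) > 0 and data[0] == ord('F'):
        cov.add(1)
        if len(data) > 1 and data[1] == ord('z'):
            cov.add(2)
            if len(data) > 3 and data[2:4] == b'!!':
                cov.add(3)  # deep branch a random fuzzer rarely reaches
    return cov

def mutate(seed: bytes) -> bytes:
    """Flip, insert, or delete one random byte."""
    b = bytearray(seed or b'\x00')
    op = random.choice(('flip', 'insert', 'delete'))
    i = random.randrange(len(b))
    if op == 'flip':
        b[i] ^= 1 << random.randrange(8)
    elif op == 'insert':
        b.insert(i, random.randrange(256))
    elif len(b) > 1:
        del b[i]
    return bytes(b)

def fuzz(rounds: int = 20000):
    corpus, global_cov = [b'AAAA'], set()
    for _ in range(rounds):
        child = mutate(random.choice(corpus))
        cov = target(child)
        if not cov <= global_cov:   # keep inputs that reach new branches
            global_cov |= cov
            corpus.append(child)
    return global_cov, corpus

print(fuzz()[0])
```

The deep branch guarded by a multi-byte magic value is exactly where such a loop stalls, which is the motivation the abstract gives for pairing it with selective symbolic execution.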
Small-scale fixed wing airplane software verification flight test
NASA Astrophysics Data System (ADS)
Miller, Natasha R.
The increased demand for micro Unmanned Air Vehicles (UAVs), driven by military requirements, commercial use, and academia, is creating a need for the ability to quickly and accurately conduct low Reynolds number aircraft design. Several free or inexpensive open source software programs exist for large-scale aircraft design, but few target the realm of low Reynolds number flight. XFLR5 is an open source, free-to-download software program that attempts to take into consideration the viscous effects that occur at low Reynolds number in airfoil design, 3D wing design, and 3D airplane design. An off-the-shelf remote control airplane was used as a test bed, modeled in XFLR5, and then compared to data collected in flight test. Flight testing focused on the stability modes of the 3D plane, specifically the phugoid mode. The flight tests were designed and executed for the RC airplane using methodology from full-scale military airplane test procedures. Results from flight test were not conclusive in determining the accuracy of the XFLR5 software program; several sources of uncertainty did not allow for a full analysis of the flight test results. An off-the-shelf drone autopilot was used as the data collection device for flight testing, and its precision and accuracy are unknown. Potential future work should investigate flight test methods for small-scale UAV flight.
Designing Control System Application Software for Change
NASA Technical Reports Server (NTRS)
Boulanger, Richard
2001-01-01
The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of a software change can thus be obtained before implementation, improving schedule and budget accuracy.
System Testing of Ground Cooling System Components
NASA Technical Reports Server (NTRS)
Ensey, Tyler Steven
2014-01-01
This internship focused primarily on software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of the necessary components before it is implemented in the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to build and complete an efficient unit test: the Model Test file (.mdl), the Simulink SystemTest (.test), and the autotest (.m). The Model Test file includes the component being tested with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest verifies that the component passes Model Advisor and System Testing, and puts the results into the proper files. Once unit testing is completed on the GCS components, they can be implemented into the GCS schematic, and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows more basic components to be tested before a model of higher fidelity is tested, making the testing process flow in an orderly manner.
[Confirming the Utility of RAISUS Antifungal Susceptibility Testing by New-Software].
Ono, Tomoko; Suematsu, Hiroyuki; Sawamura, Haruki; Yamagishi, Yuka; Mikamo, Hiroshige
2017-08-15
Clinical and Laboratory Standards Institute (CLSI) methods for susceptibility testing of yeasts are used in Japan. However, the methods have some disadvantages: 1) readings at 24 and 48 h; 2) use of an unclear scale, approximately 50% inhibition, to determine MICs; and 3) handling of trailing growth and paradoxical effects. These make it difficult to test the susceptibility of yeasts. The old RAISUS software, the Ver. 6.0 series, resolved problems 1) and 2) but not problem 3). Recently, the new RAISUS software, the Ver. 7.0 series, resolved problem 3). We examined whether the new software settles all of these issues. Eighty-four Candida isolates from Aichi Medical University were used in this study. We compared the MICs obtained by RAISUS antifungal susceptibility testing of yeasts (RSMY1) with those obtained by ASTY. The concordance rates (within four-fold of the MICs) between ASTY and RSMY1 with the new software were more than 90%, except for miconazole (MCZ). The rate for MCZ was low, but MICs obtained by CLSI methods and by the Yeast-like Fungus DP 'EIKEN' method (E-DP) were equivalent to the RSMY1 MICs obtained with the new software. The frequency of skip effects with RSMY1 using the new software decreased markedly relative to the old software. In cases showing trailing growth, the new RAISUS software made it possible to choose the correct MICs and to display a trailing-growth flag on the result screen. The new RAISUS software enhances usability and the accuracy of MICs. Using an automatic instrument to determine MICs is useful for obtaining objective results easily.
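As a rough illustration of the reading logic the abstract describes (not the RAISUS internals), the sketch below picks the MIC as the lowest concentration reaching approximately 50% inhibition and flags trailing growth; the thresholds and well data are assumptions.

```python
def read_mic(concentrations, growth, control, threshold=0.5):
    """Return (MIC, trailing_flag). MIC is the lowest concentration whose
    growth signal is <= threshold * the growth-control well; trailing is
    flagged when partial growth persists above the MIC instead of
    clearing completely."""
    wells = sorted(zip(concentrations, growth))
    mic, trailing = None, False
    for conc, g in wells:
        if g <= threshold * control:
            mic = conc
            break
    if mic is not None:
        above = [g for c, g in wells if c > mic]
        # residual, partially inhibited growth above the MIC = trailing
        trailing = any(0.05 * control < g <= threshold * control for g in above)
    return mic, trailing

# Fluconazole-like trailing pattern: growth never drops fully to zero
concs  = [0.125, 0.25, 0.5, 1, 2, 4, 8, 16]
growth = [1.00, 0.95, 0.80, 0.40, 0.30, 0.28, 0.25, 0.24]
print(read_mic(concs, growth, control=1.0))  # -> (1, True)
```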
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1992-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models, intended to fulfill specific software engineering needs (i.e., dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) measure the software system to be considered; and (2) build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at NASA/GSFC into two fault density classes: low and high. We also evaluate the accuracy of the model and the insights it provides into the software process.
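The paper's multivariate stochastic models predate today's libraries; as a hedged modern analogue, not the authors' method, the sketch below classifies components into low/high fault-density classes from static measurements with logistic regression, using invented metrics and labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Invented component measurements: [lines of code, cyclomatic complexity,
# number of revisions]; label 1 = high fault density, 0 = low.
X = np.array([[120, 5, 2], [900, 31, 14], [300, 9, 3], [1500, 44, 20],
              [220, 7, 1], [760, 25, 9], [80, 3, 1], [1100, 38, 16]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=4).mean())  # rough accuracy estimate

model.fit(X, y)
print(model.predict([[640, 22, 8]]))  # classify an unseen component
```

The point of such a model in the paper's strategy is triage: components predicted high-risk get the concentrated testing and verification effort.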
ACUTE TO CHRONIC ESTIMATION SOFTWARE FOR WINDOWS
Chronic No-Observed Effect Concentrations (NOEC) are commonly determined by either using acute-to-chronic ratios or by performing an ANOVA on chronic test data; both require lengthy and expensive chronic test results. Acute-to-Chronic Estimation (ACE) software was developed to p...
Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-02-01
New test procedure evaluates quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) tests software predictions of retrofit energy savings in existing homes; (2) ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.
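A minimal sketch of the comparison step such a methodology implies: checking a tool's predicted retrofit savings against the min/max envelope of the reference simulations. The case names and numbers are invented, not actual BESTEST-EX values.

```python
# Reference envelopes: per test case, (min, max) annual savings in MBtu
# from the state-of-the-art reference simulations (values invented).
reference = {
    "attic_insulation": (11.0, 15.5),
    "air_sealing":      (6.0,  9.8),
    "low_e_windows":    (8.5, 13.0),
}

predictions = {"attic_insulation": 14.2, "air_sealing": 10.4, "low_e_windows": 9.1}

for case, (lo, hi) in reference.items():
    pred = predictions[case]
    verdict = "PASS" if lo <= pred <= hi else "FAIL"
    print(f"{case:18s} predicted={pred:5.1f}  reference=[{lo}, {hi}]  {verdict}")
```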
Ground Systems Development Environment (GSDE) interface requirements analysis
NASA Technical Reports Server (NTRS)
Church, Victor E.; Philips, John; Hartenstein, Ray; Bassman, Mitchell; Ruskin, Leslie; Perez-Davila, Alfredo
1991-01-01
A set of procedural and functional requirements is presented for the interface between software development environments and the software integration and test systems used for space station ground systems software. The requirements focus on the need for centralized configuration management of software as it is transitioned from development to formal, target-based testing. This concludes the GSDE Interface Requirements study. A summary is presented of findings concerning the interface itself, possible interface and prototyping directions for further study, and results of the investigation of the Cronus distributed applications environment.
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
The software system development for the TAMU real-time fan beam scatterometer data processors
NASA Technical Reports Server (NTRS)
Clark, B. V.; Jean, B. R.
1980-01-01
A software package was designed and written to process, in real time, any one quadrature channel pair of radar scatterometer signals from the NASA L- or C-band radar scatterometer systems. The software was successfully tested in the C-band processor breadboard hardware using recorded radar and NERDAS (NASA Earth Resources Data Annotation System) signals as the input data sources. The processor development program and the overall processor theory of operation and design are described. The real-time processor software system is documented, and the results of the laboratory software tests are presented along with recommendations for the efficient application of the data processing capabilities.
AXAF-1 High Resolution Assembly Image Model and Comparison with X-Ray Ground Test Image
NASA Technical Reports Server (NTRS)
Zissa, David E.
1999-01-01
The x-ray ground test of the AXAF-I High Resolution Mirror Assembly was completed in 1997 at the X-ray Calibration Facility at Marshall Space Flight Center. Mirror surface measurements by HDOS, alignment results from Kodak, and the predicted gravity distortion in the horizontal test configuration are being used to model the x-ray test image. The Marshall Space Flight Center (MSFC) image modeling serves as a cross-check with Smithsonian Astrophysical Observatory modeling. The MSFC image prediction software has evolved from the MSFC model of the x-ray test of the largest AXAF-I mirror pair in 1991. The MSFC image modeling software development is being assisted by the University of Alabama in Huntsville. The modeling process, modeling software, and image prediction are discussed, and the image prediction is compared with the x-ray test results.
Model-Based Development of Automotive Electronic Climate Control Software
NASA Astrophysics Data System (ADS)
Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan
With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task. Instead, an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on their domain of expertise rather than on writing huge amounts of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The Simulink Report Generator, used to create design documents from the models, is presented along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool, which supports a single set of test cases across several testing levels and a test procedure independent of the software and hardware platform, is also presented.
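A minimal sketch of the back-to-back idea, under the assumption that the legacy code and the generated model can be driven with identical inputs; the toy climate-control law and tolerance are invented for illustration.

```python
import numpy as np

def back_to_back(legacy_fn, model_fn, test_vectors, tol=1e-3):
    """Back-to-back test: drive legacy code and the generated model with
    identical inputs and require outputs to match within tolerance."""
    failures = []
    for k, vec in enumerate(test_vectors):
        if abs(legacy_fn(*vec) - model_fn(*vec)) > tol:
            failures.append(k)
    return failures

# Toy stand-ins for a blower-speed law (cabin temp error -> duty cycle)
legacy = lambda err, sun: min(1.0, max(0.2, 0.05 * err + 0.1 * sun))
model  = lambda err, sun: np.clip(0.05 * err + 0.1 * sun, 0.2, 1.0)

vectors = [(e, s) for e in range(0, 16, 3) for s in (0.0, 0.5, 1.0)]
print(back_to_back(legacy, model, vectors))   # -> [] (behaviour matches)
```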
Software component quality evaluation
NASA Technical Reports Server (NTRS)
Clough, A. J.
1991-01-01
The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.
Software production methodology tested project
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1976-01-01
The history and results of a 3 1/2-year study in software development methodology are reported. The findings of this study have become the basis for DSN software development guidelines and standard practices. The article discusses accomplishments, discoveries, problems, recommendations and future directions.
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
2005 8th Annual Systems Engineering Conference. Volume 4, Thursday
2005-10-27
[Proceedings table-of-contents residue; recoverable items include: "Automated Software Testing Increases Test Quality and Coverage Resulting in Improved Software Reliability" (Quality Improvement Consultants, Inc.); Mr. Steven Ligon, SAIC; "The Return of Discipline", Ms. Jacqueline Townsend, Air Force Materiel Command; Track 4 - Net Centric Operations: Testing Net-Centric]
Manyak, Kristin A.; Abdenour, Thomas E.; Rauh, Mitchell J.; Baweja, Harsimran S.
2016-01-01
Background: As recently dictated by the American Medical Society, balance testing is an important component in the clinical evaluation of concussion. Despite this, previous research on the efficacy of balance testing for concussion diagnosis suggests low sensitivity (~30%), based primarily on the popular Balance Error Scoring System (BESS). The Balance Tracking System (BTrackS, Balance Tracking Systems Inc., San Diego, CA, USA) consists of a force plate (BTrackS Balance Plate) and software (BTrackS Sport Balance) which can quickly (<2 min) perform concussion balance testing with gold standard accuracy. Purpose: The present study aimed to determine the sensitivity of the BTrackS Balance Plate and Sport Balance software for concussion diagnosis. Study Design: Cross-sectional study. Methods: Preseason baseline balance testing of 519 healthy Division I college athletes playing sports with a relatively high risk of concussion was performed with the BTrackS Balance Test, administered by certified athletic training staff using the BTrackS Balance Plate and Sport Balance software. Of the baselined athletes, 25 later experienced a concussion during the ensuing sport season. Post-injury balance testing was performed on these concussed athletes within 48 hours of injury, and the sensitivity of the BTrackS Balance Plate and Sport Balance software was estimated from the number of athletes showing a balance decline according to the criterion specified in the Sport Balance software. This criterion is based on the minimal detectable change statistic with a 90% confidence level (i.e., 90% specificity). Results: Of the 25 athletes who experienced concussions, 16 had balance declines relative to baseline testing results according to the BTrackS Sport Balance software criterion. This corresponds to an estimated concussion sensitivity of 64%, which is twice as great as that reported previously for the BESS. Conclusions: The BTrackS Balance Plate and Sport Balance software has the greatest concussion sensitivity of any balance testing instrument reported to date. Level of Evidence: Level 2 (individual cross-sectional diagnostic study) PMID:27104048
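The decline criterion and the sensitivity estimate can be reproduced in outline. The MDC formula below is the standard SEM-based one and is an assumption about, not a quote from, the BTrackS software; the test-retest numbers are invented, while the 16-of-25 count comes from the abstract.

```python
import math

def mdc90(sd: float, icc: float) -> float:
    """Minimal detectable change at 90% confidence (standard formula):
    MDC90 = 1.645 * SEM * sqrt(2), with SEM = SD * sqrt(1 - ICC)."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.645 * sem * math.sqrt(2.0)

# Illustrative test-retest numbers (not from the study)
criterion = mdc90(sd=5.0, icc=0.83)
print(f"decline criterion: {criterion:.1f} cm of added COP path length")

# Sensitivity reported in the abstract: 16 of 25 concussed athletes
# exceeded their baseline by more than the MDC criterion.
tp, fn = 16, 9
print(f"sensitivity = {tp / (tp + fn):.0%}")   # -> 64%
```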
Team Oriented Robotic Exploration Task on Scorpion and K9 Platforms
NASA Technical Reports Server (NTRS)
Kirchner, Frank
2003-01-01
This final report describes the achievements made in the project over the complete period of performance. The technical progress highlights the different areas of work: mechatronics, sensor integration, software development, user interfaces and wireless communication, behavior development, and experimental results with basic and advanced system testing.
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high-risk system components. We present experimental results obtained by classifying Ada components into two classes: likely or not likely to generate faults during system and acceptance test. We also evaluate the accuracy of the model and the insights it provides into the error-making process.
Power, Avionics and Software - Phase 1.0:. [Subsystem Integration Test Report
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sands, Obed S.; Bakula, Casey J.; Oldham, Daniel R.; Wright, Ted; Bradish, Martin A.; Klebau, Joseph M.
2014-01-01
This report describes Power, Avionics and Software (PAS) 1.0 subsystem integration testing and the test results obtained in August and September of 2013. It covers the capabilities of each PAS assembly to meet integration test objectives for non-safety-critical, non-flight, non-human-rated hardware and software development. This test report is the outcome of the first integration of the PAS subsystem and is meant to provide data for subsequent design, development and testing of future PAS subsystems. The two main objectives were to assess the ability of the PAS assemblies to exchange messages and to perform audio testing of both inbound and outbound channels. The report describes each test performed, presents the test data, and provides conclusions and recommendations.
Absorbing Software Testing into the Scrum Method
NASA Astrophysics Data System (ADS)
Tuomikoski, Janne; Tervonen, Ilkka
In this paper we study how to absorb software testing into the Scrum method. We conducted the research as action research during 2007-2008, with three iterations. The results showed that testing can, and even should, be absorbed into the Scrum method. The testing team was merged into the Scrum teams. The teams can now deliver better working software in a shorter time, because testing keeps track of the progress of development. Team spirit is also higher, because the Scrum team members are committed to the same goal. The biggest change from the test manager's point of view was the organized Product Owner Team. The test manager no longer has a testing team, and in the future all testing tasks have to be assigned through the Product Backlog.
Test/score/report: Simulation techniques for automating the test process
NASA Technical Reports Server (NTRS)
Hageman, Barbara H.; Sigman, Clayton B.; Koslosky, John T.
1994-01-01
A Test/Score/Report capability is currently being developed for the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) system which will automate testing of the Goddard Space Flight Center (GSFC) Payload Operations Control Center (POCC) and Mission Operations Center (MOC) software in three areas: telemetry decommutation, spacecraft command processing, and spacecraft memory load and dump processing. Automated computer control of the acceptance test process is one of the primary goals of a test team. With the proper simulation tools and user interface, acceptance testing, regression testing, and repeating specific test procedures of a ground data system become simpler tasks. Ideally, the goal for complete automation would be to plug the operational deliverable into the simulator, press the start button, execute the test procedure, accumulate and analyze the data, score the results, and report the results to the test team along with a go/no-go recommendation. In practice, this may not be possible because of inadequate test tools, schedule pressures, limited resources, etc. Most tests are accomplished using a certain degree of automation and test procedures that are labor intensive. This paper discusses some simulation techniques that can improve the automation of the test process. The TASS system tests the POCC/MOC software and provides a score based on the test results. The TASS system displays statistics on the success of the POCC/MOC system processing in each of the three areas, as well as event messages pertaining to the Test/Score/Report processing. The TASS system also provides formatted reports documenting each step performed during the tests and the results of each step. A prototype of the Test/Score/Report capability is available and currently being used to test some POCC/MOC software deliveries. When this capability is fully operational it should greatly reduce the time necessary to test a POCC/MOC software delivery, as well as improve the quality of the test process.
[Development of ophthalmologic software for handheld devices].
Grottone, Gustavo Teixeira; Pisa, Ivan Torres; Grottone, João Carlos; Debs, Fernando; Schor, Paulo
2006-01-01
The formulas for the calculation of intraocular lenses have evolved since the first theoretical formulas by Fyodorov. Among the second generation formulas, the SRK-I formula offers a simple calculation involving only the anteroposterior length, the IOL constant, and the average keratometry. With the evolution of those formulas, complexity increased, making the reconfiguration of parameters in special situations impracticable. In this way, the production and development of software for such a purpose can help surgeons recalculate those values if needed. The goal was to conceive, develop, and test Brazilian software for the calculation of IOL dioptric power on handheld computers. For the development and programming of the IOL calculation software, we used the PocketC program (OrbWorks Concentrated Software, USA). We compared the results collected from a gold-standard device (Ultrascan/Alcon Labs) with a simulation of 100 fictitious patients, using the same IOL parameters. The results were grouped into ULTRASCAN data and SOFTWARE data. Using the SRK/T formula, the range of parameters included keratometry varying between 35 and 55 D, axial length between 20 and 28 mm, and IOL constants of 118.7, 118.3 and 115.8. Using the Wilcoxon test, it was shown that the groups do not differ (p=0.314). The variation in the Ultrascan sample was between 11.82 and 27.97; in the tested program sample the variation was practically identical (11.83-27.98). The average of the Ultrascan group was 20.93, and the software group had a similar average. The standard deviation of the samples was also similar (4.53). The precision of the handheld IOL software was similar to that of the standard devices using the SRK/T formula. The software worked properly and was stable, without bugs, on the tested models of the operating system.
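For context, a sketch of the simple first-generation SRK regression formula the abstract alludes to, plus a paired Wilcoxon comparison in the spirit of the validation; the SRK/T formula actually used in the study is considerably more involved, and the paired values below are invented.

```python
from scipy import stats

def srk_iol_power(a_const: float, axial_len_mm: float, avg_k_diopters: float) -> float:
    """Classic first-generation SRK regression formula:
    P = A - 2.5*L - 0.9*K. (SRK/T, used in the study, adds corneal-height
    and other terms and is much more involved.)"""
    return a_const - 2.5 * axial_len_mm - 0.9 * avg_k_diopters

print(srk_iol_power(118.7, 23.5, 44.0))  # -> 20.35 D

# Paired comparison in the spirit of the paper: handheld software vs.
# reference device on the same simulated eyes (values invented).
ultrascan = [20.1, 22.4, 18.7, 25.0, 19.3, 21.8]
handheld  = [20.2, 22.5, 18.6, 24.9, 19.4, 21.7]
print(stats.wilcoxon(ultrascan, handheld))
```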
NASA Astrophysics Data System (ADS)
Kristianti, Y.; Prabawanto, S.; Suhendra, S.
2017-09-01
This study aims to examine the critical thinking ability of students who learn mathematics with the ASSURE learning model assisted by Autograph software. The design of this study was experimental, with pre-test and post-test control groups. The experimental group received mathematics learning with the ASSURE model assisted by Autograph software, and the control group received mathematics learning with the conventional model. The data were obtained from critical thinking skills tests. This research was conducted at the junior high school level, with the research population being students of one junior high school in Subang Regency in the 2016/2017 school year, and a research sample of two classes of grade VIII students. The research data were analyzed quantitatively. Quantitative analysis was performed on the normalized gain levels of the two sample groups using a one-way ANOVA test. The results show that mathematics learning with the ASSURE model assisted by Autograph software can improve the critical thinking ability of junior high school students. Mathematics learning using the ASSURE model assisted by Autograph software is significantly better at improving the critical thinking skills of junior high school students than the conventional model.
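A sketch of the quantitative step described: computing a normalized gain for each group and comparing the groups with a one-way ANOVA. The scores are invented, and Hake's normalized-gain formula is assumed as the gain measure.

```python
import numpy as np
from scipy import stats

def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: g = (post - pre) / (max - pre)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (max_score - pre)

# Invented pre/post critical-thinking test scores for the two classes
experimental = normalized_gain([40, 55, 38, 60, 45], [75, 85, 70, 88, 80])
control      = normalized_gain([42, 50, 41, 58, 47], [60, 68, 55, 72, 63])

f, p = stats.f_oneway(experimental, control)
print(f"F = {f:.2f}, p = {p:.4f}")  # one-way ANOVA on the gains
```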
Software for MR image overlay guided needle insertions: the clinical translation process
NASA Astrophysics Data System (ADS)
Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor
2013-03-01
PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and a stable software foundation for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that the software requirements were successfully met after a limited number of operating room tests.
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process, a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
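Hamlet's probable correctness model gives the testing side of such a risk assessment a closed form: after N failure-free random tests, the confidence that the failure probability is below theta is 1 - (1 - theta)^N. A small sketch of both directions of that calculation:

```python
import math

def confidence(theta: float, n_tests: int) -> float:
    """Hamlet-style probable correctness: confidence that the failure
    probability is below theta after n_tests failure-free random tests:
    C = 1 - (1 - theta)**n."""
    return 1.0 - (1.0 - theta) ** n_tests

def tests_needed(theta: float, conf: float) -> int:
    """Failure-free tests needed to claim failure rate < theta at conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - theta))

print(confidence(1e-4, 50000))    # ~0.993
print(tests_needed(1e-4, 0.99))   # roughly 46,000 failure-free runs
```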
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
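The paper's full model is not reproduced in the abstract; as a hedged stand-in, the sketch below fits the simplest member of the NHPP SRGM family, the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)), to invented failure counts. The proposed model additionally folds testing coverage and removal efficiency into this framework.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean value function of the basic NHPP SRGM: m(t) = a*(1 - exp(-b*t)),
    where a is the expected total fault count and b the per-fault
    detection rate. The paper's model further multiplies in testing
    coverage and fault removal efficiency."""
    return a * (1.0 - np.exp(-b * t))

# Invented cumulative failure counts per week of testing
t = np.arange(1, 13, dtype=float)
m = np.array([8, 15, 21, 26, 30, 33, 36, 38, 39, 41, 42, 42], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, t, m, p0=(50.0, 0.1))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.3f}")
print(f"predicted residual faults after week 12: {a - goel_okumoto(12, a, b):.1f}")
```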
Evaluation of Visualization Software
NASA Technical Reports Server (NTRS)
Globus, Al; Uselton, Sam
1995-01-01
Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.
Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.
Villarrubia, J S; Tondare, V N; Vladár, A E
2016-01-01
The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described for testing 3D photogrammetry software through the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth, near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within approximately 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
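One standard way (assumed here, not necessarily the authors' construction) to synthesize a random rough skin with a prescribed power spectral density is to scale spectral amplitudes by the square root of the PSD, attach random phases, and inverse-FFT; a 1-D sketch with an invented PSD:

```python
import numpy as np

def rough_profile(n, dx, psd, rng=None):
    """Synthesize a 1-D random rough profile whose power spectral density
    follows psd(f): scale spectral amplitudes by sqrt(psd), attach
    uniform random phases, and inverse-FFT back to real space."""
    rng = rng or np.random.default_rng(0)
    f = np.fft.rfftfreq(n, d=dx)
    amp = np.sqrt(psd(f))
    amp[0] = 0.0                              # zero-mean profile
    phases = rng.uniform(0, 2 * np.pi, f.size)
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=n)

# Example: Lorentzian-like roughness sampled every 0.5 nm over 512 points
z = rough_profile(512, 0.5, lambda f: 1.0 / (1.0 + (f / 0.05) ** 2))
print(f"rms roughness (arbitrary units): {z.std():.3f}")
```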
Publishing Platform for Scientific Software - Lessons Learned
NASA Astrophysics Data System (ADS)
Hammitzsch, Martin; Fritzsch, Bernadette; Reusser, Dominik; Brembs, Björn; Deinzer, Gernot; Loewe, Peter; Fenner, Martin; van Edig, Xenia; Bertelmann, Roland; Pampel, Heinz; Klump, Jens; Wächter, Joachim
2015-04-01
Scientific software has become an indispensable commodity for the production, processing and analysis of empirical data, and also for the modelling and simulation of complex processes. Software has a significant influence on the quality of research results. To strengthen recognition of the academic performance involved in scientific software development, to increase its visibility, and to promote the reproducibility of research results, concepts for the publication of scientific software have to be developed, tested, evaluated, and then transferred into operations. For this, the publication and citability of scientific software have to fulfil scientific criteria by means of defined processes and the use of persistent identifiers, similar to data publications. The SciForge project is addressing these challenges. Based on interviews, a blueprint for a scientific software publishing platform and a systematic implementation plan have been designed. In addition, the potential of journals, software repositories and persistent identifiers has been evaluated to improve the publication and dissemination of reusable software solutions. It is important that procedures for publishing software, as well as methods and tools for software engineering, are reflected in the architecture of the platform, in order to improve the quality of the software and of research results. In addition, it is necessary to work continuously on improving the conditions that promote the adoption and sustainable utilization of scientific software publications. Among others, this would include institutional policies for the development and publication of scientific software, as well as policies for establishing the necessary competencies and skills of scientists and IT personnel. To implement the concepts developed in SciForge, a combined bottom-up/top-down approach is considered that will be implemented in parallel in different scientific domains, e.g. in earth sciences, climate research and the life sciences. Based on the developed blueprints, a scientific software publishing platform will be iteratively implemented, tested, and evaluated, and developed continuously on the basis of the experience and results gained. The platform services will be extended one by one corresponding to the requirements of the communities, so that the platform can be improved and stabilized incrementally as a tool with software-, science-, publishing-, and user-oriented features.
A taxonomy and discussion of software attack technologies
NASA Astrophysics Data System (ADS)
Banks, Sheila B.; Stytz, Martin R.
2005-03-01
Software is a complex thing. It is not an engineering artifact that springs forth from a design by simply following software coding rules; creativity and the human element are at the heart of the process. Software development is part science, part art, and part craft. Design, architecture, and coding are equally important activities, and in each of these activities errors may be introduced that lead to security vulnerabilities. Therefore, inevitably, errors enter into the code. Some of these errors are discovered during testing; however, some are not. The best way to find security errors, whether they are introduced as part of the architecture development effort or the coding effort, is to automate the security testing process to the maximum extent possible and to add this class of tools to those that aid in the compilation process, testing, test analysis, and software distribution. Recent technological advances, improvements in computer-generated forces (CGFs), and research results in information assurance and software protection indicate that we can build a semi-intelligent software security testing tool. However, before we can undertake the security testing automation effort, we must understand the scope of the required testing, the security failures that need to be uncovered during testing, and the characteristics of those failures. Therefore, we undertook the research reported in this paper: the development of a taxonomy and a discussion of software attacks, written from the point of view of the security tester, with the goal of using the taxonomy to guide the development of the knowledge base for the automated security testing tool. The representation for attacks and threat cases yielded by this research captures the strategies, tactics, and other considerations that come into play during the planning and execution of attacks upon application software. The paper is organized as follows. Section one contains an introduction to our research and a discussion of the motivation for our work. Section two presents our taxonomy of software attacks and a discussion of the strategies employed and general weaknesses exploited for each attack. Section three contains a summary and suggestions for further research.
Modular Rocket Engine Control Software (MRECS)
NASA Technical Reports Server (NTRS)
Tarrant, Charlie; Crook, Jerry
1997-01-01
The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state of the art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. The MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for managing the data are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VRMesh, AutoCAD Civil 3D and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
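A sketch of the kind of measurement such a methodology implies, timing a point-cloud load and recording resident-set growth with psutil; the synthetic cloud below stands in for parsing a real mobile LiDAR file.

```python
import time
import numpy as np
import psutil

def measure(load_fn, label):
    """Time a point-cloud load and record the resident-set growth."""
    proc = psutil.Process()
    rss_before = proc.memory_info().rss
    t0 = time.perf_counter()
    cloud = load_fn()
    elapsed = time.perf_counter() - t0
    rss_mb = (proc.memory_info().rss - rss_before) / 2**20
    print(f"{label}: {len(cloud):,} pts in {elapsed:.2f} s, +{rss_mb:.0f} MiB")
    return cloud

# Stand-in loader: a synthetic 5M-point XYZ cloud (a real test would
# parse a mobile LiDAR file, e.g. LAS/LAZ, here instead).
measure(lambda: np.random.rand(5_000_000, 3), "synthetic cloud")
```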
Testing Scientific Software: A Systematic Literature Review.
Kanewala, Upulee; Bieman, James M
2014-10-01
Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey, supported by concurrent and follow-on proof-of-concept prototyping, to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and that, when combined, provide a simple, open and scalable test orchestration architecture addressing the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source, standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
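A sketch of the resource-oriented commanding and data-retrieval style described, with entirely hypothetical host names and endpoint paths; in the real architecture the base URL would come from zeroconf discovery and payloads would follow ATML rather than ad hoc JSON.

```python
import requests

# Hypothetical test-set base URL; in the architecture described, it would
# be discovered at runtime via multicast DNS (zeroconf) rather than
# hard-coded. All endpoint paths below are invented for illustration.
BASE = "http://testset-01.local:8080"

# Command a stimulus by PUTting the desired resource state
requests.put(f"{BASE}/generator/carrier",
             json={"freq_hz": 2.25e9, "power_dbm": -10})

# Retrieve a measurement by GETting a resource representation
resp = requests.get(f"{BASE}/analyzer/ber", params={"duration_s": 10})
print(resp.json())  # e.g. {"ber": 1.2e-6, "bits": 2.4e9} (illustrative)
```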
BrightStat.com: free statistics online.
Stricker, Daniel
2008-10-01
Powerful software for statistical analysis is expensive. Here I present BrightStat, statistical software running on the Internet which is free of charge. BrightStat's goals and its main capabilities and functionalities are outlined. Three sample runs are presented: a Friedman test, a chi-square test, and a stepwise multiple regression. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in statistical software, and by VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education, and BrightStat is an alternative to commercial products.
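The two nonparametric runs mentioned are easy to mirror with scipy when cross-checking a tool like BrightStat against other implementations; the datasets below are invented.

```python
from scipy import stats

# Friedman test: invented repeated-measures data
# (5 subjects rated under 3 conditions)
c1 = [8, 6, 7, 9, 5]
c2 = [6, 5, 6, 7, 4]
c3 = [4, 4, 5, 6, 3]
print(stats.friedmanchisquare(c1, c2, c3))

# Chi-square test of independence on an invented 2x2 table
table = [[18, 7],
         [11, 14]]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```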
A progress report on a NASA research program for embedded computer systems software
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Senn, E. H.; Will, R. W.; Straeter, T. A.
1979-01-01
The paper presents the results of the second stage of the Multipurpose User-oriented Software Technology (MUST) program. Four primary areas of activities are discussed: programming environment, HAL/S higher-order programming language support, the Integrated Verification and Testing System (IVTS), and distributed system language research. The software development environment is provided by the interactive software invocation system. The higher-order programming language (HOL) support chosen for consideration is HAL/S mainly because at the time it was one of the few HOLs with flight computer experience and it is the language used on the Shuttle program. The overall purpose of IVTS is to provide a 'user-friendly' software testing system which is highly modular, user controlled, and cooperative in nature.
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands, with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error burst analyses. The pitch axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.
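A minimal sketch of n-version error detection in the spirit of the experiment: run every implementation on the same input and flag whichever output disagrees with the majority vote. The toy control laws and tolerance are invented.

```python
from collections import Counter

def n_version_check(implementations, state, tol=1e-6):
    """Run every implementation of the control law on the same input and
    flag a potential fault when any output disagrees with the majority."""
    outputs = [impl(state) for impl in implementations]
    rounded = [round(o / tol) * tol for o in outputs]
    majority, votes = Counter(rounded).most_common(1)[0]
    disagree = [i for i, o in enumerate(rounded) if o != majority]
    return majority, disagree

# Three toy 'versions' of a pitch-command law; version 2 has a seeded bug
laws = [lambda s: 0.8 * s + 0.1,
        lambda s: 0.1 + s * 0.8,
        lambda s: 0.8 * s - 0.1]   # faulty sign on the bias term

cmd, suspects = n_version_check(laws, state=1.5)
print(f"voted command = {cmd:.3f}, disagreeing versions = {suspects}")
```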
NASA Technical Reports Server (NTRS)
Stinnett, W. G.
1980-01-01
The modifications, additions, and testing results for a version of the Deep Space Station command software, generated to support the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit the recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide the capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit the general requirements for the competence of testing and calibration laboratories (the EN/ISO/IEC 17025:2005 norm). The software in question can be considered critical, as it directly weighs the forensic evidence that allows judges to decide on guilt or innocence, or to identify a person or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed the available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as references. Using seven test cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
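For the simplest validation cases, the likelihood ratio has hand-checkable closed forms. The sketch below covers only a standard single-locus trio case (an assumption about scope, not the paper's full test suite): PI is the alleged father's probability of transmitting the obligate paternal allele divided by that allele's population frequency.

```python
import math

def paternity_index(af_genotype, obligate_allele, allele_freq):
    """Single-locus paternity index for a trio (mother's contribution
    already resolved): PI = Pr(AF transmits the obligate allele) / p,
    where p is the obligate allele's population frequency."""
    transmit = af_genotype.count(obligate_allele) / 2.0  # 1.0, 0.5 or 0.0
    return transmit / allele_freq

# Alleged father 'a/b', obligate paternal allele 'a' with frequency 0.1:
print(paternity_index(("a", "b"), "a", 0.10))   # 0.5 / 0.1 = 5.0

# Combined PI across independent loci is the product of per-locus values
print(math.prod([5.0, 2.5, 8.0]))               # -> 100.0
```

Reference values like these, computed by hand or in a spreadsheet, are what the validation compares the programs' output against.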
Software platform virtualization in chemistry research and university teaching
2009-01-01
Background: Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers, it is possible to install those applications on a single computer using virtual machine software. Software platform virtualization allows multiple guest operating systems to execute alongside the host operating system on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Results: Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages, we have confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Conclusion: Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and for the development of software for different operating systems. In order to obtain maximum performance, the virtualization software should be multi-core enabled and should allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics, as well as the missing cheminformatics education at universities worldwide. PMID:20150997
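A sketch of the benchmarking arithmetic behind the 5% to 10% figure: time the same CPU-bound workload natively and inside the VM and compare medians. The workload and the VM timing below are placeholders; in practice the same script is simply rerun in the guest.

```python
import time

def bench(fn, repeats=5):
    """Median wall-clock time of a CPU-bound workload; run the same
    script on the host and inside the VM and compare the medians."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return sorted(times)[repeats // 2]

workload = lambda: sum(i * i for i in range(2_000_000))  # stand-in job

host_s = bench(workload)   # run natively
vm_s = host_s * 1.07       # placeholder: rerun bench() inside the VM instead
print(f"virtualization penalty: {(vm_s / host_s - 1):.0%}")
```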
Integrated Optical Design Analysis (IODA): New Test Data and Modeling Features
NASA Technical Reports Server (NTRS)
Moore, Jim; Troy, Ed; Patrick, Brian
2003-01-01
A general overview of the capabilities of the IODA ("Integrated Optical Design Analysis") software is presented. IODA promotes efficient exchange of data and modeling results between the thermal, structures, optical design, and test engineering disciplines. This presentation focuses on new features added to the software that allow measured test data to be imported into the IODA environment for post-processing or comparison with pretest model predictions.
Product-oriented Software Certification Process for Software Synthesis
NASA Technical Reports Server (NTRS)
Nelson, Stacy; Fischer, Bernd; Denney, Ewen; Schumann, Johann; Richardson, Julian; Oh, Phil
2004-01-01
The purpose of this document is to propose a product-oriented software certification process to facilitate use of software synthesis and formal methods. Why is such a process needed? Currently, software is tested until deemed bug-free rather than proving that certain software properties exist. This approach has worked well in most cases, but unfortunately, deaths still occur due to software failure. Using formal methods (techniques from logic and discrete mathematics like set theory, automata theory and formal logic as opposed to continuous mathematics like calculus) and software synthesis, it is possible to reduce this risk by proving certain software properties. Additionally, software synthesis makes it possible to automate some phases of the traditional software development life cycle resulting in a more streamlined and accurate development process.
The design, deployment, and testing of kriging models in GEOframe with SIK-0.9.8
NASA Astrophysics Data System (ADS)
Bancheri, Marialaura; Serafin, Francesco; Bottazzi, Michele; Abera, Wuletawu; Formetta, Giuseppe; Rigon, Riccardo
2018-06-01
This work presents a software package for the interpolation of climatological variables, such as temperature and precipitation, using kriging techniques. The purposes of the paper are (1) to present a geostatistical software that is easy to use and easy to plug in to a hydrological model; (2) to provide a practical example of an accurately designed software from the perspective of reproducible research; and (3) to demonstrate the quality of the software's results, providing a reliable alternative to other, more traditional tools. A total of 11 types of theoretical semivariograms and four types of kriging were implemented and gathered into Object Modeling System-compliant components. The package provides real-time optimization of semivariogram and kriging parameters. The software was tested using a year's worth of hourly temperature readings and a rainstorm event (11 h) recorded in 2008 and retrieved from 97 meteorological stations in the Isarco River basin, Italy. For both variables, good interpolation results were obtained and then compared to the results from the R package gstat.
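The kriging mathematics underlying such a package can be sketched compactly. The following is a minimal ordinary-kriging implementation with an exponential semivariogram, one of the standard theoretical models the paper mentions; it is not the GEOframe/SIK component itself, and the variogram parameters and station data are illustrative:

```python
import numpy as np

def exp_semivariogram(h, nugget=0.1, sill=1.0, rng=200.0):
    """Exponential model: gamma(h) = nugget + sill * (1 - exp(-h/range))."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, target):
    """xy: (n, 2) station coordinates; z: (n,) values; target: (2,) point."""
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_semivariogram(h)
    np.fill_diagonal(A[:n, :n], 0.0)      # gamma(0) = 0
    A[n, n] = 0.0                         # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = exp_semivariogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)
    estimate = float(w[:n] @ z)
    variance = float(w @ b)               # kriging variance incl. multiplier
    return estimate, variance

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [80.0, 90.0]])
temps = np.array([12.1, 11.4, 10.8, 11.9])
print(ordinary_kriging(stations, temps, np.array([50.0, 60.0])))
```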
A Framework of the Use of Information in Software Testing
ERIC Educational Resources Information Center
Kaveh, Payman
2010-01-01
With the increasing role that software systems play in our daily lives, software quality has become extremely important. Software quality is impacted by the efficiency of the software testing process. There are a growing number of software testing methodologies, models, and initiatives to satisfy the need to improve software quality. The main…
ERIC Educational Resources Information Center
Sieverts, Eric G.; And Others
1993-01-01
Reports on tests evaluating nine microcomputer software packages designed for information storage and retrieval: BRS-Search, dtSearch, InfoBank, Micro-OPC, Q&A, STN-PFS, Strix, TINman, and ZYindex. Tables and narrative evaluations detail results related to security, hardware, user features, search capability, indexing, input, maintenance of files,…
CHIME: A Metadata-Based Distributed Software Development Environment
2005-01-01
structures by using typography, graphics, and animation. The Software Immersion in our conceptual model for CHIME can be seen as a form of Software... Even small- to medium-sized development efforts may involve hundreds of artifacts -- design documents, change requests, test cases and results, code... for managing and organizing information from all phases of the software lifecycle. CHIME is designed around an XML-based metadata architecture, in
Software Engineering Laboratory Ada performance study: Results and implications
NASA Technical Reports Server (NTRS)
Booth, Eric W.; Stark, Michael E.
1992-01-01
The SEL is an organization sponsored by NASA/GSFC to investigate the effectiveness of software engineering technologies applied to the development of applications software. The SEL was created in 1977 and has three organizational members: NASA/GSFC, Systems Development Branch; The University of Maryland, Computer Sciences Department; and Computer Sciences Corporation, Systems Development Operation. The goals of the SEL are as follows: (1) to understand the software development process in the GSFC environments; (2) to measure the effect of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that include the Ada Performance Study Report. This paper describes the background of Ada in the Flight Dynamics Division (FDD), the objectives and scope of the Ada Performance Study, the measurement approach used, the performance tests performed, the major test results, and the implications for future FDD Ada development efforts.
Robot-operated quality control station based on the UTT method
NASA Astrophysics Data System (ADS)
Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz; Muszyńska, Magdalena; Nawrocki, Jacek
2017-03-01
This paper presents a robotic test stand for the ultrasonic transmission tomography (UTT) inspection of stator vane thickness. The article presents the design of the test stand, carried out in the Autodesk Robot Structural Analysis Professional 2013 software suite. The performance of the designed test stand was simulated in the RobotStudio software suite. The operating principle of the test stand's measurement system is presented, with a specific focus on the measurement strategy. The results of actual wall thickness measurements performed on stator vanes are presented.
Round Robin Fatigue Crack Growth Testing Results
2006-11-01
Testing was accomplished, in accordance with ASTM E647, using two different capacity SATEC frames: a 20 kip test frame for the 7075-T6 panels and a 55 kip... Equipment and Setup: a. SATEC; b. 20 kip (7075-T6), 55 kip (2024-T351); c. Test control hardware/software: i. Hardware: TestStar IIm; ii. Software: Station...
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning across all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes for the same. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Also such releases seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to the management to improve the software development process. As NASA has moved towards a product line engineering for its flight software development, software for future space missions will be developed in a similar manner and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
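As a hedged illustration of fitting one of the named model families, one common S-shaped NHPP form (the delayed S-shaped model, with mean value function m(t) = a(1 - (1 + bt)exp(-bt))) can be fit to cumulative defect counts by nonlinear least squares; the defect counts below are synthetic, not the mission's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def s_shaped(t, a, b):
    """Delayed S-shaped NHPP mean value function m(t) = a(1 - (1 + bt)e^{-bt})."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

weeks = np.arange(1, 21, dtype=float)
cum_defects = np.array([1, 3, 7, 13, 21, 30, 40, 49, 57, 64,
                        70, 75, 79, 82, 85, 87, 89, 90, 91, 92], dtype=float)

(a_hat, b_hat), _ = curve_fit(s_shaped, weeks, cum_defects,
                              p0=[cum_defects[-1] * 1.2, 0.2])
residual = a_hat - s_shaped(weeks[-1], a_hat, b_hat)   # predicted latent defects
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}, residual defects = {residual:.1f}")
```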
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software). There are ten models in SMERFS. For a first run, modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made, and the model results were compared with the historical data on the same graph. If a model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for this data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures, and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases. It is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
Will They Report It? Ethical Attitude of Graduate Software Engineers in Reporting Bad News
ERIC Educational Resources Information Center
Sajeev, A. S. M.; Crnkovic, Ivica
2012-01-01
Hiding critical information has resulted in disastrous failures of some major software projects. This paper investigates, using a subset of Keil's test, how graduates (70% of them with work experience) from different cultural backgrounds who are enrolled in a postgraduate course on global software development would handle negative information that…
ERIC Educational Resources Information Center
Kalender, Ilker
2012-01-01
catcher is a software program designed to compute the ω (omega) index, a common statistical index for the identification of collusion (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimates of individuals, and (c) item parameters to make computations, and outputs the results of…
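A hedged sketch of the final step of an ω-style computation follows, assuming the per-item probabilities that the examinee would independently match the alleged source's responses have already been obtained from a fitted response model (which is the hard part a program like catcher encapsulates); the counts and probabilities below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def omega_index(matches_observed, p_same):
    """Standardized excess of observed answer matches over model expectation."""
    expected = p_same.sum()
    sd = np.sqrt((p_same * (1.0 - p_same)).sum())
    omega = (matches_observed - expected) / sd
    return omega, norm.sf(omega)          # one-sided p-value

p_same = np.full(40, 0.35)                # per-item match probabilities (given)
print(omega_index(matches_observed=26, p_same=p_same))
```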
A tool to include gamma analysis software into a quality assurance program.
Agnew, Christina E; McGarry, Conor K
2016-03-01
To provide a tool to enable gamma analysis software algorithms to be included in a quality assurance (QA) program. Four image sets were created, comprising two geometric images to independently test the distance-to-agreement (DTA) and dose-difference (DD) elements of the gamma algorithm, a clinical step-and-shoot IMRT field, and a clinical VMAT arc. The images were analysed using global and local gamma analysis with 2 in-house and 8 commercially available software packages encompassing 15 software versions. The effect of image resolution on gamma pass rates was also investigated. All but one software package accurately calculated the gamma passing rate for the geometric images. Variation in global gamma passing rates of 1% at 3%/3mm and over 2% at 1%/1mm was measured between software packages and software versions with analysis of appropriately sampled images. This study provides a suite of test images and the gamma pass rates achieved for a selection of commercially available software. This image suite will enable validation of gamma analysis software within a QA program and provide a frame of reference by which to compare results reported in the literature from various manufacturers and software versions. Copyright © 2015. Published by Elsevier Ireland Ltd.
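For reference, the gamma quantity these packages compute combines the DTA and DD criteria into a single pass/fail metric; a minimal 1D, globally normalized version (real tools operate on interpolated 2D/3D dose grids) might look like this, with illustrative dose profiles:

```python
import numpy as np

def gamma_pass_rate(x_mm, dose_ref, dose_eval, dta_mm=3.0, dd=0.03):
    """Global gamma on a shared 1D grid; a point passes where gamma <= 1."""
    norm_dose = dd * dose_ref.max()            # global dose normalization
    gammas = np.empty_like(dose_eval)
    for i, (xi, de) in enumerate(zip(x_mm, dose_eval)):
        dist_term = ((x_mm - xi) / dta_mm) ** 2
        dose_term = ((dose_ref - de) / norm_dose) ** 2
        gammas[i] = np.sqrt((dist_term + dose_term).min())
    return 100.0 * np.mean(gammas <= 1.0)

x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)        # illustrative dose profiles
ev = np.exp(-((x - 51.0) / 20.0) ** 2) * 1.01
print(f"pass rate = {gamma_pass_rate(x, ref, ev):.1f}%")
```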
Executable assertions and flight software
NASA Technical Reports Server (NTRS)
Mahmood, A.; Andrews, D. M.; Mccluskey, E. J.
1984-01-01
Executable assertions are used to test flight control software. The techniques used for testing flight software, however, are different from the techniques used to test other kinds of software, because of the redundant nature of flight software. An experimental setup for testing flight software using executable assertions is described. Techniques for writing and using executable assertions to test flight software are presented. The error detection capability of assertions is studied and many examples of assertions are given. The issues of placement and complexity of assertions and the language features needed to support efficient use of assertions are discussed.
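A minimal sketch of the idea, a predicate over program state evaluated during execution that records rather than halts on violations, is shown below; the variable name and range are illustrative, not taken from the paper:

```python
def assert_in_range(name, value, lo, hi, log):
    """Executable assertion: record (do not halt on) a state-variable violation."""
    if not (lo <= value <= hi):
        log.append(f"ASSERTION VIOLATION: {name}={value} outside [{lo}, {hi}]")

violations = []
for step, pitch_rate in enumerate([0.1, 0.3, 2.7, 0.2]):   # deg/s, illustrative
    assert_in_range("pitch_rate", pitch_rate, -1.0, 1.0, violations)
print(violations)   # one entry flags the out-of-range sample at step 2
```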
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1208, ``Software Unit Testing for Digital Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1208 is proposed...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Software Unit Testing for Digital Computer Software... revised regulatory guide (RG), revision 1 of RG 1.171, ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.'' This RG endorses American National Standards...
NASA Astrophysics Data System (ADS)
Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar
2017-04-01
The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software, NET-VISA, to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing at the CTBTO, NET-VISA shows much better performance than the currently operating automatic association module, with the rate of automatic events matching analyst-reviewed events increased by 10%, signifying that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because CTBTO products are widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results, both on the progress of automatic processing and on the initial introduction of NET-VISA results into the analyst review process.
Model-Based Verification and Validation of Spacecraft Avionics
NASA Technical Reports Server (NTRS)
Khan, M. Omair; Sievers, Michael; Standley, Shaun
2012-01-01
Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system-level V&V using modeling and simulation, and to use scarce hardware testing time to validate models, as has long been the norm for thermal and structural V&V. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated, enabling a more complete set of test cases than is possible on flight hardware. SysML simulations provide access to and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated that such an approach is possible.
Virtual test: A student-centered software to measure student's critical thinking on human disease
NASA Astrophysics Data System (ADS)
Rusyati, Lilit; Firman, Harry
2016-02-01
The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research. The background is importance of computer-based test that use element and sub element of critical thinking. Aim of this study is development of multiple choices to measure critical thinking that made by student-centered software. Instruments to collect data are (1) construct validity sheet by expert judge (lecturer and medical doctor) and professional judge (science teacher); and (2) test legibility sheet by science teacher and junior high school student. Participants consisted of science teacher, lecturer, and medical doctor as validator; and the students as respondent. Result of this study are describe about characteristic of virtual test that use to measure student's critical thinking on human disease, analyze result of legibility test by students and science teachers, analyze result of expert judgment by science teachers and medical doctor, and analyze result of trial test of virtual test at junior high school. Generally, result analysis shown characteristic of multiple choices to measure critical thinking was made by eight elements and 26 sub elements that developed by Inch et al.; complete by relevant information; and have validity and reliability more than "enough". Furthermore, specific characteristic of multiple choices to measure critical thinking are information in form science comic, table, figure, article, and video; correct structure of language; add source of citation; and question can guide student to critical thinking logically.
Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-01-01
Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922
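The table-driven style described here can be approximated outside FitNesse as well. The sketch below expresses one hypothetical rule-logic table as a parameterized test; check_advisory() is a stand-in for the paper's query against the EHR configuration database, and all settings, routes, and expected outcomes are illustrative:

```python
import pytest

def check_advisory(setting, route, stroke_suspected):
    """Stand-in for a query against the configured EHR rule (hypothetical)."""
    return setting == "ED" and route == "oral" and stroke_suspected

CASES = [
    # setting,     route,  stroke suspected, expect alert?
    ("ED",         "oral", True,             True),
    ("ED",         "IV",   True,             False),
    ("ED",         "oral", False,            False),
    ("Inpatient",  "oral", True,             False),
]

@pytest.mark.parametrize("setting,route,stroke,expected", CASES)
def test_swallow_screen_advisory(setting, route, stroke, expected):
    # Each row mirrors one line of the clinicians' spreadsheet table.
    assert check_advisory(setting, route, stroke) == expected
```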
West, A G; Goldsmith, G R; Matimati, I; Dawson, T E
2011-08-30
Previous studies have demonstrated the potential for large errors to occur when analyzing waters containing organic contaminants using isotope ratio infrared spectroscopy (IRIS). In an attempt to address this problem, IRIS manufacturers now provide post-processing spectral analysis software capable of identifying samples with the types of spectral interference that compromise their stable isotope analysis. Here we report two independent tests of this post-processing spectral analysis software on two IRIS systems, OA-ICOS (Los Gatos Research Inc.) and WS-CRDS (Picarro Inc.). Following a similar methodology to a previous study, we cryogenically extracted plant leaf water and soil water and measured the δ²H and δ¹⁸O values of identical samples by isotope ratio mass spectrometry (IRMS) and IRIS. As an additional test, we analyzed plant stem waters and tap waters by IRMS and IRIS in an independent laboratory. For all tests we assumed that the IRMS value represented the "true" value against which we could compare the stable isotope results from the IRIS methods. Samples showing significant deviations from the IRMS value (>2σ) were considered to be contaminated and representative of spectral interference in the IRIS measurement. Over the two studies, 83% of plant species were considered contaminated on OA-ICOS and 58% on WS-CRDS. Post-analysis, spectra were analyzed using the manufacturer's spectral analysis software, in order to see if the software correctly identified contaminated samples. In our tests the software performed well, identifying all the samples with major errors. However, some false negatives indicate that user evaluation and testing of the software are necessary. Repeat sampling of plants showed considerable variation in the discrepancies between IRIS and IRMS. As such, we recommend that spectral analysis of IRIS data must be incorporated into standard post-processing routines. Furthermore, we suggest that the results from spectral analysis be included when reporting stable isotope data from IRIS. Copyright © 2011 John Wiley & Sons, Ltd.
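The flagging criterion described (deviation from the IRMS reference exceeding 2σ) reduces to a few lines; the δ values and precision below are illustrative:

```python
import numpy as np

d18o_irms = np.array([-5.2, -3.8, -7.1, -4.4])   # reference ("true") values
d18o_iris = np.array([-5.1, -1.9, -7.0, -4.6])   # spectroscopy values
sigma = 0.2                                      # comparison precision, per mil

deviation = d18o_iris - d18o_irms
contaminated = np.abs(deviation) > 2 * sigma     # the >2-sigma criterion
print(np.flatnonzero(contaminated))              # indices of flagged samples
```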
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven
2013-01-01
This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
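A hedged sketch of that scanning-and-dispatch behavior follows; the configuration file name TESTCONFIG and its one-command-per-line format are illustrative, not dtest's actual conventions:

```python
import concurrent.futures
import pathlib
import subprocess

def find_tests(root):
    """Yield (directory, command) pairs for every TESTCONFIG under root."""
    for cfg in pathlib.Path(root).rglob("TESTCONFIG"):
        for line in cfg.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                yield cfg.parent, line

def run_test(item):
    cwd, cmd = item
    result = subprocess.run(cmd, shell=True, cwd=cwd, capture_output=True)
    return cmd, result.returncode == 0

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as pool:   # one test per core
        for cmd, ok in pool.map(run_test, find_tests("tests")):
            print("PASS" if ok else "FAIL", cmd)
```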
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... of NOAA ENC Products § 995.27 Format validation software testing. Tests shall be performed verifying, as far as reasonable and practicable, that CEVAD's data testing software performs the checks, as...
Shuttle avionics software development trials: Tribulations and successes, the backup flight system
NASA Technical Reports Server (NTRS)
Chevers, E. S.
1985-01-01
The development and verification of the Backup Flight System (BFS) software is discussed. The approach taken for the BFS was to develop a very simple and straightforward software program and then test it in every conceivable manner. The result was a program that contained approximately 12,000 full words, including ground checkout and the built-in test program for the computer. To perform verification, a series of tests was defined using actual flight-type hardware and simulated flight conditions. Then simulated flights were flown and detailed performance analysis was conducted. The intent of most BFS tests was to demonstrate that a stable flightpath could be obtained after engagement from an anomalous initial condition. The extension of the BFS to meet the requirements of the orbital flight test phase is also described.
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive feedback mechanism readily produces stagnation and premature convergence. This paper introduces improved ACO algorithms for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively increase search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
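As a hedged illustration of the general technique (not the paper's specific IPVACO/IGPACO update formulas), the skeleton below uses pheromone-weighted choices of test inputs, an evaporation step, and coverage-proportional deposit against a toy program under test:

```python
import random

def branches_covered(x, y):
    """Toy program under test; fitness = branches an input exercises."""
    covered = set()
    covered.add("b1" if x > 0 else "b2")
    if y % 2 == 0 and x > 10:
        covered.add("b3")
    return covered

CANDS = list(range(-20, 21))
pher = {p: {v: 1.0 for v in CANDS} for p in ("x", "y")}   # pheromone trails
best_cov, rho = set(), 0.3                                # rho = evaporation rate

for _ in range(100):
    ants = []
    for _ in range(10):
        pick = {p: random.choices(CANDS, weights=[pher[p][v] for v in CANDS])[0]
                for p in ("x", "y")}
        cov = branches_covered(pick["x"], pick["y"])
        ants.append((pick, cov))
        best_cov |= cov
    for p in pher:                                        # evaporation step
        for v in pher[p]:
            pher[p][v] *= (1.0 - rho)
    for pick, cov in ants:                                # coverage-based deposit
        for p, v in pick.items():
            pher[p][v] += len(cov)

print("branches covered:", sorted(best_cov))
```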
Development of the disable software reporting system on the basis of the neural network
NASA Astrophysics Data System (ADS)
Gavrylenko, S.; Babenko, O.; Ignatova, E.
2018-04-01
The PE structure of malicious and benign software is analyzed, features are extracted, and binary feature vectors are obtained and used as inputs for training the neural network. A software model for detecting malware based on the ART-1 neural network was developed, optimal similarity (vigilance) coefficients were found, and testing was performed. The results show that the developed system can be used to identify malicious software within computer system protection frameworks.
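For orientation, a minimal ART-1 sketch is shown below, with binary inputs, a vigilance test, and fast learning, illustrating the kind of network the paper trains on binary PE-feature vectors; the parameters and the usage vectors are illustrative:

```python
import numpy as np

class ART1:
    """Binary ART-1 with fast learning; x must be a nonzero 0/1 vector."""
    def __init__(self, vigilance=0.7, L=2.0):
        self.rho, self.L = vigilance, L
        self.T, self.B = [], []        # top-down templates, bottom-up weights

    def train(self, x):
        x = np.asarray(x, dtype=float)
        # Try committed categories in order of bottom-up activation.
        for j in sorted(range(len(self.B)), key=lambda j: -(self.B[j] @ x)):
            match = self.T[j] * x
            if match.sum() / x.sum() >= self.rho:          # vigilance test
                self.T[j] = match                           # fast learning
                self.B[j] = self.L * match / (self.L - 1.0 + match.sum())
                return j
        self.T.append(x.copy())                             # commit new category
        self.B.append(self.L * x / (self.L - 1.0 + x.sum()))
        return len(self.T) - 1

net = ART1(vigilance=0.6)
for vec in ([1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0], [0, 1, 0, 1, 0, 1]):
    print(net.train(vec))    # first two cluster together, third is new
```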
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating- system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
Bonekamp, S; Ghosh, P; Crawford, S; Solga, S F; Horska, A; Brancati, F L; Diehl, A M; Smith, S; Clark, J M
2008-01-01
To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Feature evaluation and test-retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. A random sample of 15 obese adults with type 2 diabetes. Axial T1-weighted spin echo images centered at vertebral bodies of L2-L3 were acquired at 1.5 T. Five software packages were evaluated (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision), comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA), and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation. The intra-class correlation coefficient was used to obtain test-retest reliability. Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test-retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our results using SliceOmatic, Analyze or NIHImage were comparable and could be used interchangeably. Newly developed fully automated approaches should be compared to one of the examined software packages.
Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test
NASA Technical Reports Server (NTRS)
Cuseo, John
2012-01-01
The spacecraft system that plays the greatest role throughout the program lifecycle is the Command and Data Handling System (C&DH), along with the associated algorithms and software. The C&DH takes on this role as cost driver because it is the brains of the spacecraft and is the element of the system that is primarily responsible for the integration and interoperability of all spacecraft subsystems. During design and development, many activities associated with mission design, system engineering, and subsystem development result in products that are directly supported by the C&DH, such as interfaces, algorithms, flight software (FSW), and parameter sets. A modular system architecture has been developed that provides a means for rapid spacecraft assembly, test, and integration. This modular C&DH software architecture, which can be targeted and adapted to a wide variety of spacecraft architectures, payloads, and mission requirements, eliminates the current practice of rewriting the spacecraft software and test environment for every mission. This software allows mission-specific software and algorithms to be rapidly integrated and tested, significantly decreasing the time involved in the software development cycle. Additionally, the FSW includes an Onboard Dynamic Simulation System (ODySSy) that allows the C&DH software to support rapid integration and test. With this solution, the C&DH software capabilities will encompass all phases of the spacecraft lifecycle. ODySSy is an on-board simulation capability built directly into the FSW that provides dynamic built-in test capabilities as soon as the FSW image is loaded onto the processor. It includes a six-degrees-of-freedom, high-fidelity simulation that allows complete closed-loop and hardware-in-the-loop testing of a spacecraft in a ground processing environment without any additional external stimuli. ODySSy can intercept and modify sensor inputs using mathematical sensor models, and can intercept and respond to actuator commands. ODySSy integration is unique in that it allows testing of actual mission sequences on the flight vehicle while the spacecraft is in various stages of assembly, test, and launch operations, all without any external support equipment or simulators. The ODySSy component of the FSW significantly decreases the time required for integration and test by providing an automated, standardized, and modular approach to integrated avionics and component interface and functional verification. ODySSy further provides the capability for on-orbit support in the form of autonomous mission planning and fault protection.
Shaikh, Masood Ali
2017-09-01
Assessment of research articles in terms of study designs used, statistical tests applied, and the use of statistical analysis programmes helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. The results indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software, respectively. These results echo a previously published assessment of these two journals for the year 2014.
NASA Astrophysics Data System (ADS)
Takahashi, Masakazu; Fukue, Yoshinori
This paper proposes a Retrospective Computerized System Validation (RCSV) method for Drug Manufacturing Software (DMSW) that takes software modification into account. Because DMSW used for quality management and facility control has a large impact on drug quality, regulatory agencies require proof of the adequacy of DMSW functions and performance, based on development documents and test results. In particular, the work of demonstrating the adequacy of previously developed DMSW on the basis of existing documents and operational records is called RCSV. When modifying DMSW that had already undergone RCSV, it was difficult to ensure consistency between the documents and test results for the modified parts and the existing documents and operational records for the unmodified parts, which made conducting RCSV difficult. In this paper, we propose (a) a defined document architecture, (b) defined descriptive items and levels within the documents, (c) management of design information using a database, (d) exhaustive testing, and (e) an integrated RCSV procedure. As a result, we could conduct adequate RCSV while ensuring consistency.
Numerical simulation of mechanical properties tests of tungsten mud waste geopolymer
NASA Astrophysics Data System (ADS)
Paszek, Natalia; Krystek, Małgorzata
2018-03-01
Geopolymers are expected to become an environmentally friendly alternative to concrete. Low CO2 emission during the production process and the possibility of ecologically managing industrial wastes are mentioned as the main advantages of geopolymers. The main drawback hindering the application of geopolymers as a building material is the lack of a theoretical material model. This problem is currently being addressed by a group of scientists from the Silesian University of Technology. A series of laboratory tests is being carried out within the European research project REMINE. The paper presents numerical analyses of tungsten mud waste geopolymer samples, performed in the Atena software on the basis of the laboratory tests. Numerical models of bent and compressed samples of different shapes are presented. The results obtained in the Atena software were compared with those obtained in the Abaqus and Mafem3D software.
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers, which was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards), and introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
Software for Testing Electroactive Structural Components
NASA Technical Reports Server (NTRS)
Moses, Robert W.; Fox, Robert L.; Dimery, Archie D.; Bryant, Robert G.; Shams, Qamar
2003-01-01
A computer program generates a graphical user interface that, in combination with its other features, facilitates the acquisition and preprocessing of experimental data on the strain response, hysteresis, and power consumption of a multilayer composite-material structural component containing one or more built-in sensor(s) and/or actuator(s) based on piezoelectric materials. This program runs in conjunction with LabVIEW software in a computer-controlled instrumentation system. For a test, a specimen is instrumented with applied-voltage and current sensors and with strain gauges. Once the computational connection to the test setup has been made via the LabVIEW software, this program causes the test instrumentation to step through specified configurations. If the user is satisfied with the test results as displayed by the software, the user activates an icon on a front-panel display, causing the raw current, voltage, and strain data to be digitized and saved. The data are also put into a spreadsheet and can be plotted on a graph. Graphical displays are saved in an image file for future reference. The program also computes and displays the power and the phase angle between voltage and current.
Serdar, Muhittin A; Turan, Mustafa; Cihan, Murat
2008-06-01
Laboratory specialists currently need to access science-based information at any time and anywhere. Considerable time and effort are required to access this information through the existing accumulated data. Personal digital assistants (PDAs) with commercial software are expected to provide an effective solution to this problem. In this study, 11 commercial software products (UpToDate, ePocrates, Inforetrive, Pepid, eMedicine, FIRST Consult, and 5 laboratory e-books released by Skyscape and/or Isilo) were selected and the benefits of their use were evaluated by seven laboratory specialists. The assessment of the software was based on the number of tests included and on whether the software contained detailed information for each test, such as process, method, interpretation of results, reference ranges, critical values, interferences, equations, pathophysiology, supplementary technical details such as sample collection principles, and additional information such as linked references, evidence-based data, test cost, etc. In terms of technique, the following items were considered: the amount of memory required to run the software, the user-friendliness of the graphical user interface, and the frequency of new and/or updated releases. As we had anticipated, there is still no perfect program. Interpretation of laboratory results may require software with an integrated program. However, methodological data are mostly not included in the software evaluated. It seems that these shortcomings will be fixed in the near future, and PDAs and relevant medical applications will become indispensable for all physicians, including laboratory specialists, in the field of training/education and in patient care.
Flight Software for the LADEE Mission
NASA Technical Reports Server (NTRS)
Cannon, Howard N.
2015-01-01
The Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft was launched on September 6, 2013, and completed its mission on April 17, 2014 with a directed impact to the lunar surface. Its primary goals were to examine the lunar atmosphere, measure lunar dust, and demonstrate high-rate laser communications. The LADEE mission was a resounding success, achieving all mission objectives, much of which can be attributed to careful planning and preparation. This paper discusses some of the highlights from the mission, and then discusses the techniques used for developing the onboard Flight Software. A large emphasis for the Flight Software was to develop it within tight schedule and cost constraints. To accomplish this, the Flight Software team leveraged heritage software, used model-based development techniques, and utilized an automated test infrastructure. This resulted in the software being delivered on time and within budget. The resulting software was able to meet all system requirements, and had very few problems in flight.
Reference datasets for bioequivalence trials in a two-group parallel design.
Fuglsang, Anders; Schütz, Helmut; Labes, Detlew
2015-03-01
In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total of 11 datasets into the public domain, along with a proposed consensus obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
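The core computation such validation targets exercise is the 90% confidence interval for the geometric mean ratio. A sketch for the two-group parallel design, with and without the equal-variance assumption, might read as follows; the simulated AUC data are illustrative:

```python
import numpy as np
from scipy import stats

def parallel_90ci(test, ref, equal_var=False):
    """90% CI for the geometric mean ratio in a two-group parallel design."""
    lt, lr = np.log(test), np.log(ref)
    diff = lt.mean() - lr.mean()
    n1, n2 = len(lt), len(lr)
    if equal_var:
        sp2 = ((n1 - 1) * lt.var(ddof=1) + (n2 - 1) * lr.var(ddof=1)) / (n1 + n2 - 2)
        se, df = np.sqrt(sp2 * (1 / n1 + 1 / n2)), n1 + n2 - 2
    else:                                   # Welch-Satterthwaite
        v1, v2 = lt.var(ddof=1) / n1, lr.var(ddof=1) / n2
        se = np.sqrt(v1 + v2)
        df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    t = stats.t.ppf(0.95, df)               # two one-sided tests at alpha = 0.05
    return np.exp(diff - t * se), np.exp(diff + t * se)

rng = np.random.default_rng(42)             # illustrative lognormal AUC data
print(parallel_90ci(rng.lognormal(1.0, 0.3, 24), rng.lognormal(1.05, 0.3, 24)))
```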
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
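The familiar univariate special case can be reproduced in a few lines: power of the general linear hypothesis F-test from the noncentral F distribution. This is not POWERLIB itself, and the one-way ANOVA numbers below are illustrative:

```python
from scipy import stats

def f_test_power(ncp, df1, df2, alpha=0.05):
    """Power of the linear-hypothesis F-test given noncentrality ncp."""
    f_crit = stats.f.ppf(1.0 - alpha, df1, df2)
    return 1.0 - stats.ncf.cdf(f_crit, df1, df2, ncp)

k, n_per, f_effect = 3, 20, 0.25            # one-way ANOVA, Cohen's f
ncp = f_effect ** 2 * k * n_per             # lambda = f^2 * N
print(f"power = {f_test_power(ncp, k - 1, k * n_per - k):.3f}")
```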
Early experiences building a software quality prediction model
NASA Technical Reports Server (NTRS)
Agresti, W. W.; Evanco, W. M.; Smith, M. C.
1990-01-01
Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.
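The modeling approach can be sketched as an ordinary least-squares regression of error density on design metrics; the data below are synthetic stand-ins (the 21 Ada subsystems' measurements are not public), and the metric names are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 21                                       # matching the 21 subsystems
X = np.column_stack([
    rng.uniform(1, 10, n),                   # imports per compilation unit
    rng.uniform(0, 1, n),                    # fraction of visible declarations
])
error_density = 0.5 + 0.3 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.5, n)

model = sm.OLS(error_density, sm.add_constant(X)).fit()
print(model.rsquared)                        # cf. the 60-80% variation explained
```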
Holló, Gábor; Shu-Wei, Hsu; Naghizadeh, Farzaneh
2016-06-01
To compare the current (6.3) and a novel (6.12) software version of the RTVue-100 optical coherence tomograph (RTVue-OCT) for ganglion cell complex (GCC) and retinal nerve fiber layer thickness (RNFLT) image segmentation and detection of glaucoma in high myopia. RNFLT and GCC scans were acquired with software version 6.3 of the RTVue-OCT on 51 highly myopic eyes (spherical refractive error ≤-6.0 D) of 51 patients, and were analyzed with both software versions. Twenty-two eyes were nonglaucomatous, 13 were ocular hypertensive, and 16 eyes had glaucoma. No difference was seen for any RNFLT or average GCC parameter between the software versions (paired t test, P≥0.084). Global loss volume was significantly lower (more normal) with version 6.12 than with version 6.3 (Wilcoxon signed-rank test, P<0.001). The percentage agreement (κ) between the clinical (normal and ocular hypertensive vs. glaucoma) and the software-provided classifications (normal and borderline vs. outside normal limits) was 0.3219 and 0.4442 for average RNFLT, and 0.2926 and 0.4977 for average GCC, with versions 6.3 and 6.12, respectively (McNemar symmetry test, P≥0.289). No difference in average RNFLT and GCC classification (McNemar symmetry test, P≥0.727) or in the number of eyes with at least 1 segmentation error (P≥0.109) was found between the software versions. Although GCC segmentation was improved with software version 6.12 compared with the current version in highly myopic eyes, this did not result in a significant change of the average RNFLT and GCC values, and did not significantly improve the software-provided classification for glaucoma.
Developing high-quality educational software.
Johnson, Lynn A; Schleyer, Titus K L
2003-11-01
The development of effective educational software requires a systematic process executed by a skilled development team. This article describes the core skills required of the development team members for the six phases of successful educational software development. During analysis, the foundation of product development is laid including defining the audience and program goals, determining hardware and software constraints, identifying content resources, and developing management tools. The design phase creates the specifications that describe the user interface, the sequence of events, and the details of the content to be displayed. During development, the pieces of the educational program are assembled. Graphics and other media are created, video and audio scripts written and recorded, the program code created, and support documentation produced. Extensive testing by the development team (alpha testing) and with students (beta testing) is conducted. Carefully planned implementation is most likely to result in a flawless delivery of the educational software and maintenance ensures up-to-date content and software. Due to the importance of the sixth phase, evaluation, we have written a companion article on it that follows this one. The development of a CD-ROM product is described including the development team, a detailed description of the development phases, and the lessons learned from the project.
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction should not only state the existence of defects, but also provide a prioritized list of which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one approach that can provide defect module rankings for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization to improve the accuracy of the learning-to-rank approach to software defect prediction. We have used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and do worse on 1 data set. Thus, we conclude that applying chaotic Gaussian particle swarm optimization in a learning-to-rank approach can improve the accuracy of defect module ranking on data sets that have high-dimensional features.
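A hedged sketch of a chaotic-Gaussian-flavored PSO follows: a logistic map drives the inertia weight and a Gaussian perturbation refines the best position. The objective, bounds, and constants are illustrative and do not reproduce the paper's learning-to-rank formulation:

```python
import numpy as np

def objective(x):                            # stand-in function to minimize
    return float(np.sum(x ** 2))

dim, n_particles, iters = 5, 20, 200
rng = np.random.default_rng(1)
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
z = 0.7                                      # logistic-map state

for _ in range(iters):
    z = 4.0 * z * (1.0 - z)                  # chaotic sequence in (0, 1)
    w = 0.4 + 0.5 * z                        # chaotic inertia weight
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    j = pbest_val.argmin()
    cand = pbest[j] + rng.normal(0.0, 0.1, dim)   # Gaussian refinement of best
    if objective(cand) < pbest_val[j]:
        pbest[j], pbest_val[j] = cand, objective(cand)
    gbest = pbest[j].copy()

print(objective(gbest))
```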
Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects
ERIC Educational Resources Information Center
Buffardi, Kevin John
2014-01-01
Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…
Educational Software Acquisition for Microcomputers.
ERIC Educational Resources Information Center
Erikson, Warren; Turban, Efraim
1985-01-01
Examination of issues involved in acquiring appropriate microcomputer software for higher education focuses on the following points: developing your own software; finding commercially available software; using published evaluations; pre-purchase testing; customizing and adapting commercial software; post-purchase testing; and software use. A…
1999-01-01
published in December of 1998. In addition, Mr. Drake is the author of a theme article entitled "Measuring Software Quality: A Case Study"... Services may run on different platforms in differing combinations; partial application failure (e.g., a client running, service down) is... result in a combined utility function that is some aggregation of the underlying utility functions. The benefit a client receives from a service
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
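The report's software itself is not reproduced here; as a hedged illustration of the likelihood approach it describes, the sketch below fits a two-parameter Weibull to fatigue-style data with type I censoring, where failures contribute the log density and suspensions contribute the log survival function.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle(times, failed):
    """Fit a two-parameter Weibull by maximum likelihood.

    times  : failure or suspension times
    failed : boolean mask; False marks a type I censored (suspended) run
    Returns (shape k, scale lam).
    """
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)

    def nll(log_params):
        k, lam = np.exp(log_params)                   # enforce positivity
        z = (times / lam) ** k
        t_f = times[failed]
        # failures contribute log f(t); suspensions contribute log S(t) = -z
        ll = np.sum(np.log(k / lam) + (k - 1.0) * np.log(t_f / lam) - z[failed])
        ll -= np.sum(z[~failed])
        return -ll

    res = minimize(nll, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
    return tuple(np.exp(res.x))

# Example: simulated fatigue-like data with the test suspended at 500 hours
rng = np.random.default_rng(0)
t = rng.weibull(2.5, size=50) * 400.0                 # true shape 2.5, scale 400
suspended = t > 500.0
t[suspended] = 500.0
print(weibull_mle(t, ~suspended))                     # estimates near (2.5, 400)
```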
Maintaining the Health of Software Monitors
NASA Technical Reports Server (NTRS)
Person, Suzette; Rungta, Neha
2013-01-01
Software health management (SWHM) techniques complement the rigorous verification and validation processes that are applied to safety-critical systems prior to their deployment. These techniques are used to monitor deployed software in its execution environment, serving as the last line of defense against the effects of a critical fault. SWHM monitors use information from the specification and implementation of the monitored software to detect violations, predict possible failures, and help the system recover from faults. Changes to the monitored software, such as adding new functionality or fixing defects, therefore, have the potential to impact the correctness of both the monitored software and the SWHM monitor. In this work, we describe how the results of a software change impact analysis technique, Directed Incremental Symbolic Execution (DiSE), can be applied to monitored software to identify the potential impact of the changes on the SWHM monitor software. The results of DiSE can then be used by other analysis techniques, e.g., testing, debugging, to help preserve and improve the integrity of the SWHM monitor as the monitored software evolves.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
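As one concrete instance of the kind of complexity analysis described (the JSC tools themselves are not detailed in this record), the sketch below approximates McCabe cyclomatic complexity, one plus the number of decision points, from a Python syntax tree.

```python
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: one plus the number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
    # each extra operand of a boolean expression adds one more branch
    decisions += sum(len(n.values) - 1 for n in ast.walk(tree)
                     if isinstance(n, ast.BoolOp))
    return decisions + 1

src = "def f(x):\n    if x > 0 and x < 10:\n        return 1\n    return 0"
print(cyclomatic_complexity(src))   # 3: the if, plus the 'and', plus one
```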
Guidance and Control Software Project Data - Volume 3: Verification Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the verification documents from the GCS project. Volume 3 contains four appendices: A. Software Verification Cases and Procedures for the Guidance and Control Software Project; B. Software Verification Results for the Pluto Implementation of the Guidance and Control Software; C. Review Records for the Pluto Implementation of the Guidance and Control Software; and D. Test Results Logs for the Pluto Implementation of the Guidance and Control Software.
Factors That Affect Software Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.
1991-01-01
Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability; a piece of software that is not likely to do so is said to have low testability. It is preferable to design software with higher testability from the outset, i.e., to create software with as high a degree of testability as possible, to avoid the problems of undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, two broad classes of information loss are identified and defined, and ways of predicting the potential for information loss to occur are suggested, in order to decrease the likelihood that faults will remain undetected during testing.
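A toy illustration of information loss, with assumed names: the integer division collapses many distinct intermediate sums into one output bucket, so an off-by-one fault in the sum is revealed only when the sum lands on a bucket boundary, which is low testability in miniature.

```python
import random

def correct(values):
    return sum(values) // 100            # many distinct sums map to one bucket

def faulty(values):
    return (sum(values) + 1) // 100      # off-by-one fault in the intermediate sum

# The integer division loses information: the fault changes the output only
# when the sum falls on a bucket boundary, so random testing rarely sees it.
random.seed(0)
trials = 100_000
failures = sum(
    correct(v) != faulty(v)
    for v in ([random.randrange(1000) for _ in range(5)] for _ in range(trials))
)
print(f"fault revealed in {failures / trials:.2%} of random tests")  # about 1%
```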
Lean and Efficient Software: Whole-Program Optimization of Executables
2013-01-03
staffing for the project; implementing the necessary infrastructure (testing, performance evaluation, needed support software, bug and issue...) in the SOW. The result of the planning discussions is shown in the milestone table (section 6). In addition, we selected appropriate engineering
Design and implementation of Skype USB user gateway software
NASA Astrophysics Data System (ADS)
Qi, Yang
2017-08-01
With the widespread application of VoIP, clients with private protocols have become more and more popular, Skype being one representative. How to connect Skype to the PSTN using only the Skype client has become a topic of growing interest. This paper describes the design and implementation of software based on a kind of USB user gateway. With this software, a Skype user can communicate freely with a PSTN phone. A finite state machine (FSM) is designed as the core of the software, and Skype control is separated from the USB gateway control. In this way, the communication becomes more flexible and efficient. In actual user testing, the software obtained good results.
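The paper's actual state machine is not given in the abstract; the sketch below, with assumed states and events, shows the FSM-as-core idea: events from the Skype side and the gateway side both drive a single transition table.

```python
# (state, event) -> next state; events arrive from either the Skype side or
# the USB gateway side, but the core FSM is the single point of control
TRANSITIONS = {
    ("IDLE", "gateway_offhook"): "DIALING",
    ("DIALING", "digits_complete"): "CALLING",
    ("CALLING", "skype_answered"): "CONNECTED",
    ("IDLE", "skype_incoming"): "RINGING",
    ("RINGING", "gateway_offhook"): "CONNECTED",
    ("CONNECTED", "gateway_onhook"): "IDLE",
    ("CONNECTED", "skype_hangup"): "IDLE",
}

class CallFSM:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, event):
        # unknown events leave the state unchanged
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = CallFSM()
for ev in ["skype_incoming", "gateway_offhook", "skype_hangup"]:
    print(ev, "->", fsm.handle(ev))
```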
A methodology for testing fault-tolerant software
NASA Technical Reports Server (NTRS)
Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.
1985-01-01
A methodology for testing fault-tolerant software is presented. Testing fault-tolerant software is problematic because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault-tolerant hardware can be applied to testing fault-tolerant software. For example, one strategy used in testing fault-tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software: namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.
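A minimal sketch of the masking effect described: a majority voter hides a seeded fault in one channel, so the fault is only revealed by testing the channels before the redundancy is in place; the version functions are assumptions.

```python
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x * x
def version_c(x): return x * x + (1 if x == 7 else 0)   # seeded fault

def voted(x):
    """Majority voter: returns the most common of the three channel outputs."""
    outputs = [version_a(x), version_b(x), version_c(x)]
    return Counter(outputs).most_common(1)[0][0]

# Testing through the voter masks the fault entirely...
assert all(voted(x) == x * x for x in range(100))
# ...but testing each channel before the redundancy is added reveals it.
assert any(version_c(x) != x * x for x in range(100))
```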
Computer Administering of the Psychological Investigations: Set-Relational Representation
NASA Astrophysics Data System (ADS)
Yordzhev, Krasimir
Computer administering of a psychological investigation is the computer representation of the entire procedure of psychological assessment: test construction, test implementation, results evaluation, storage and maintenance of the developed database, and its statistical processing, analysis, and interpretation. A mathematical description of psychological assessment with the aid of personality tests is discussed in this article. Set theory and relational algebra are used in this description. A relational model of the data needed to design a computer system for automating certain psychological assessments is given. Some finite sets, and relations on them, which are necessary for creating a personality psychological test, are described. The described model could be used to develop real software for computer administering of any psychological test, with full automation of the whole process: test construction, test implementation, result evaluation, storage of the developed database, statistical processing, analysis, and interpretation. A software project for computer administering of personality psychological tests is suggested.
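A minimal sketch, with assumed names, of the kind of sets and relations the article describes: finite sets of items and scales, an item-scale relation, and scoring expressed as a join over that relation.

```python
# Finite sets: test items and personality scales
ITEMS = {"q1", "q2", "q3", "q4"}
SCALES = {"extraversion", "neuroticism"}

# Relation R, a subset of ITEMS x SCALES: which item loads on which scale
R = {("q1", "extraversion"), ("q2", "extraversion"),
     ("q3", "neuroticism"), ("q4", "neuroticism")}

def score(responses):
    """Score each scale by joining responses with the item-scale relation."""
    return {
        s: sum(responses[q] for q, s2 in R if s2 == s and q in responses)
        for s in SCALES
    }

print(score({"q1": 3, "q2": 4, "q3": 1, "q4": 2}))
# e.g. {'extraversion': 7, 'neuroticism': 3}
```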
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
A rule-based software test data generator
NASA Technical Reports Server (NTRS)
Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II
1991-01-01
Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
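The prototype targeted Ada and its rule base is not given; the Python sketch below only illustrates the contrast the paper draws, uniformly random inputs versus rules encoding tester heuristics such as boundary values, on a toy subject with a boundary fault.

```python
import random

def random_cases(lo, hi, n):
    """Baseline: uniformly random test inputs."""
    return [random.randint(lo, hi) for _ in range(n)]

def rule_based_cases(lo, hi):
    """Rules encode tester heuristics: boundaries, off-by-one, zero, midpoint."""
    rules = [lo, lo + 1, -1, 0, 1, (lo + hi) // 2, hi - 1, hi]
    return [v for v in rules if lo <= v <= hi]

def under_test(x):                        # toy subject with a boundary fault
    return "ok" if x < 1000 else "reject"

spec = lambda x: "ok" if x <= 1000 else "reject"   # spec: 1000 is still ok

random.seed(0)
rand_found = sum(under_test(x) != spec(x) for x in random_cases(-1000, 1000, 8))
rule_found = sum(under_test(x) != spec(x) for x in rule_based_cases(-1000, 1000))
print(f"random found {rand_found} failure(s); rule-based found {rule_found}")
```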
Software and mathematical support of Kazakhstani star tracker
NASA Astrophysics Data System (ADS)
Akhmedov, D.; Yelubayev, S.; Ten, V.; Bopeyev, T.; Alipbayev, K.; Sukhenko, A.
2016-10-01
Specialists in Kazakhstan are currently developing a star tracker that is planned for use on Kazakhstani satellites of various purposes. In the first stage, an experimental model of the star tracker was developed with the following characteristics: field of view 20°, update frequency 2 Hz, exclusion angle 40°, accuracy of attitude determination of the optical axis/around the optical axis 15/50 arcsec. Software and mathematical support are the most technology-intensive parts of a star tracker. The results of developing the software and mathematical support for the experimental model of the Kazakhstani star tracker are presented in this article. In particular, the main mathematical models and algorithms are described that served as a basis for the program units for preliminary processing of starry-sky images, star identification, and star tracker attitude determination. The results of testing the software and mathematical support with the help of a program simulation complex are presented, using various configurations of defects, including image sensor noise, point spread function modeling, and optical system distortion up to 2%. Analysis of the testing results has shown that the attitude determination accuracy of the star tracker is within the permissible range.
NASA Technical Reports Server (NTRS)
Clark, David A.
1998-01-01
In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture, and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.
Orbit determination for ISRO satellite missions
NASA Astrophysics Data System (ADS)
Rao, Ch. Sreehari; Sinha, S. K.
Indian Space Research Organisation (ISRO) has been successful in using its in-house developed orbit determination and prediction software for the Bhaskara, Rohini and APPLE satellite missions. Considering the requirements of satellite missions, software packages are developed and tested, and their accuracies are assessed. The orbit determination packages developed are SOIP, for the low earth orbits of the Bhaskara and Rohini missions; ORIGIN and ODPM, for orbits related to all phases of geo-stationary missions; and SEGNIP, for drift and geo-stationary orbits. The software is tested and qualified using tracking data of the SIGNE-3, D5-B, OTS and SYMPHONIE satellites with the help of software available with CNES, ESA and DFVLR. The results match well with those available from these agencies. These packages have successfully supported orbit determination throughout the mission life of all ISRO satellite missions.
A Generic Software Architecture For Prognostics
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.; Sankararaman, Shankar; Goebel, Kai; Watkins, Jason
2017-01-01
Prognostics is a systems engineering discipline focused on predicting end-of-life of components and systems. As a relatively new and emerging technology, there are few fielded implementations of prognostics, due in part to practitioners perceiving a large hurdle in developing the models, algorithms, architecture, and integration pieces. As a result, no open software frameworks for applying prognostics currently exist. This paper introduces the Generic Software Architecture for Prognostics (GSAP), an open-source, cross-platform, object-oriented software framework and support library for creating prognostics applications. GSAP was designed to make prognostics more accessible and enable faster adoption and implementation by industry, by reducing the effort and investment required to develop, test, and deploy prognostics. This paper describes the requirements, design, and testing of GSAP. Additionally, a detailed case study involving battery prognostics demonstrates its use.
Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project
NASA Astrophysics Data System (ADS)
Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo
2017-04-01
The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al, 2014), which was released as a Free Software Open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al, 2016, WRR), provides the baseline code and a number of referenced results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity carried out on the package. To keep track of every single change, the package is published in its own GitHub repository, geotopmodel.github.io/geotop/, under the GPL v3.0 license. A continuous integration mechanism by means of Travis-CI has been enabled on the repository's master and main development branches. The use of the CMake configuration tool and the suite of tests (easily manageable by means of ctest tools) greatly reduces the burden of installation and enhances portability across different compilers and operating system platforms. The package is also complemented by several software tools which provide web-based visualization of results based on R packages, in particular "shiny" (Chang et al, 2016) and the "geotopbricks" and "geotopOptim2" (Cordano et al, 2016) packages, which allow rapid and efficient scientific validation of new examples and tests. The software re-engineering activities are still under development. However, our first results are promising enough to eventually reach a robust and stable software project that manages in a flexible way a complex state-of-the-art hydrological model like GEOtop and integrates it into wider workflows.
NASA Technical Reports Server (NTRS)
Markos, H.
1978-01-01
The status of the computer programs dealing with space shuttle orbiter avionics is reported. Specific topics covered include: delivery status; SSW software; SM software; DL software; GNC software; level 3/4 testing; level 5 testing; performance analysis; SDL readiness for entry; first article configuration inspection; and verification assessment.
Production Maintenance Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason Gabler, David Skinner
2005-11-01
PMI is an XML framework for formulating tests of software and software environments which operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible, the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a Perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.). At each step in this hierarchy, a failure leads to generation of a text message or RSS item that can be tagged as to who should be notified of the failure. PMI has been directed at two functionalities: 1) regular and automated testing of multi-user environments and 2) version-wise testing of new software releases prior to their deployment in a production mode.
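PMI itself is Perl and XML; the sketch below re-expresses the hierarchical idea in Python under assumed checks: basic functionalities first, escalating to larger activities, with a notification message on the first failure.

```python
import os, shutil, tempfile

def fs_writable():
    with tempfile.NamedTemporaryFile(dir=".") as f:    # can we create a file?
        f.write(b"ok")
    return True

def compiler_found():
    return shutil.which("cc") is not None

def parallel_capable():                                # stands in for a large activity
    return (os.cpu_count() or 1) >= 2

# Ordered hierarchy: each level assumes the ones before it passed
SUITE = [("filesystem writable", fs_writable),
         ("compiler found", compiler_found),
         ("parallel execution possible", parallel_capable)]

for name, check in SUITE:
    try:
        ok = check()
    except Exception:
        ok = False
    print(("PASS" if ok else "FAIL"), name)
    if not ok:
        print(f"notify: '{name}' failed; skipping dependent tests")
        break
```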
Translating expert system rules into Ada code with validation and verification
NASA Technical Reports Server (NTRS)
Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam
1991-01-01
The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code, and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system was demonstrated that automatically translated rules from an Air Force expert system and detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code, by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module, are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint-based description of the required performance for the system.
2007-07-31
David L. Iverson of NASA Ames Research Center, Moffett Field, California (in foreground) led development of computer software to monitor the conditions of the gyroscopes that keep the International Space Station (ISS) properly oriented in space as the ISS orbits Earth. Charles Lee is also pictured. During its development, researchers used the software to analyze archived gyroscope records. In these tests, users noticed problems with the gyroscopes long before the current systems flagged glitches. Testers trained using several months of normal space station gyroscope data collected by the International Space Station Mission Control Center at NASA Johnson Space Center, Houston. Promising test results convinced officials to start using the software in 2007.
Software modifications to the Demonstration Advanced Avionics Systems (DAAS)
NASA Technical Reports Server (NTRS)
Nedell, B. F.; Hardy, G. H.
1984-01-01
Critical information required for the design of integrated avionics suitable for general aviation is applied toward software modifications for the Demonstration Advanced Avionics System (DAAS). The program emphasizes the use of data busing, distributed microprocessors, shared electronic displays and data entry devices, and improved functional capability. A demonstration advanced avionics system (DAAS) is designed, built, and flight tested in a Cessna 402, a twin-engine general aviation aircraft. Software modifications are made to DAAS at Ames concurrent with the flight test program. The changes are the result of the experience obtained with the system at Ames and the comments of the pilots who evaluated the system.
Performance testing of LiDAR exploitation software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-04-01
Mobile LiDAR systems have been used widely in recent years for many applications in the field of geoscience. One of the most important limitations of this technology is the large computational requirement involved in data processing. Several software solutions for data processing are available on the market, but users often lack methodologies to verify their performance accurately. In this work a methodology for LiDAR software performance testing is presented and six different suites are studied: QT Modeler, AutoCAD Civil 3D, Mars 7, Fledermaus, Carlson and TopoDOT (all of them in x64). Results show that QT Modeler, TopoDOT and AutoCAD Civil 3D allow the loading of large datasets, while Fledermaus, Mars 7 and Carlson do not achieve this level of performance. AutoCAD Civil 3D needs a long loading time in comparison with the most powerful suites, such as QT Modeler and TopoDOT. The Carlson suite shows the poorest results among all the software under study: point clouds larger than 5 million points cannot be loaded, and loading time is very long in comparison with the other suites, even for the smaller datasets. AutoCAD Civil 3D, Carlson and TopoDOT use more threads than the other suites, such as QT Modeler, Mars 7 and Fledermaus.
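The paper's full methodology is in the text; as a hedged sketch of the basic measurement, the snippet below times the loading of ASCII point clouds of increasing size (the file names are hypothetical).

```python
import time
from pathlib import Path

def load_xyz(path):
    """Parse an ASCII XYZ point cloud (one 'x y z' triple per line)."""
    pts = []
    with open(path) as f:
        for line in f:
            x, y, z = map(float, line.split()[:3])
            pts.append((x, y, z))
    return pts

# hypothetical datasets of increasing size
for name in ["cloud_1M.xyz", "cloud_5M.xyz", "cloud_25M.xyz"]:
    if not Path(name).exists():
        continue
    t0 = time.perf_counter()
    pts = load_xyz(name)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(pts):,} points loaded in {dt:.1f} s")
```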
Progress in the Development of a Prototype Reuse Enablement System
NASA Astrophysics Data System (ADS)
Marshall, J. J.; Downs, R. R.; Gilliam, L. J.; Wolfe, R. E.
2008-12-01
An important part of promoting software reuse is to ensure that reusable software assets are readily available to the software developers who want to use them. Through dialogs with the community, the NASA Earth Science Data Systems Software Reuse Working Group has learned that the lack of a centralized, domain-specific software repository or catalog system addressing the needs of the Earth science community is a major barrier to software reuse within the community. The Working Group has proposed the creation of such a reuse enablement system, which would provide capabilities for contributing and obtaining reusable software, to remove this barrier. The Working Group has recommended the development of a Reuse Enablement System to NASA and has performed a trade study to review systems with similar capabilities and to identify potential platforms for the proposed system. This was followed by an architecture study to determine an expeditious and cost-effective solution for this system. A number of software packages and systems were examined, both by creating prototypes and by examining existing systems that use the same software packages and systems. Based on the results of the architecture study, the Working Group developed a prototype of the proposed system using the recommended software package, through an iterative process of identifying needed capabilities and improving the system to provide those capabilities. Policies for the operation and maintenance of the system are being established, and the identification of system policies has also contributed to the development process. Additionally, a test plan is being developed for formal testing of the prototype, to ensure that it meets all of the requirements previously developed by the Working Group. This poster summarizes the results of our work to date, focusing on the most recent activities.
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test the software in an effort to ensure it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real-time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real-time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two other classifications of real-time systems: hard and soft. A soft real-time system is one in which, if the particular timing constraints are not rigidly met, there will be no critical consequences. On the other hand, a hard real-time system is one in which, if the timing constraints are not met, the results could be catastrophic. An example of a soft real-time system is a DVD decoder. If a particular piece of data from the input is not decoded and displayed on the screen at exactly the correct moment, nothing critical will come of it; the user may not even notice it. However, a hard real-time system is needed to control the timing of fuel injections or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real-time system employed by most NASA projects is Wind River's VxWorks operating system. This is a proprietary operating system that can be configured to work with many of NASA's needs, and it provides very accurate and reliable hard real-time performance. The downside is that, since it is a proprietary operating system, it is also costly to implement. The prospect of replacing this somewhat costly implementation is the focus of one of the SA group's current research projects. The explosion of open source software in the last ten years has led to the development of a multitude of software solutions which were once only produced by major corporations. The benefits of these open projects include faster release and bug patching cycles as well as inexpensive, if not free, software solutions. The main packages for hard real-time solutions under Linux are the Real Time Application Interface (RTAI) and two varieties of Real Time Linux (RTL), RTLFree and RTLPro. During my time here at NASA I have been testing various hard real-time solutions operating as layers on the Linux operating system. All testing is being run on an Intel SBC 2590, which is a common embedded hardware platform. The test plan was provided to me by the Software Assurance group at the start of my internship, and my job has been to test the systems by developing and executing the test cases on the hardware. These tests are constructed so that the Software Assurance group can get hard test data for a comparison between the open source and proprietary implementations of hard real-time solutions.
Hardware Fault Simulator for Microprocessors
NASA Technical Reports Server (NTRS)
Hess, L. M.; Timoc, C. C.
1983-01-01
A breadboarded circuit is faster and more thorough than a software simulator. An elementary fault simulator for an AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at the inputs and output. Experimental results showed that the hardware fault simulator for a microprocessor gave results faster than the software simulator by two orders of magnitude, with one test being applied every 4 microseconds.
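A software rendering of the fault model described, for comparison only: simulate an AND gate with each line stuck at 0 or 1 and enumerate the input vectors that detect each fault; the wiring here is an assumption.

```python
from itertools import product

def and_gate(a, b):
    return a & b

def with_fault(line, value):
    """Return an AND gate with one line stuck at `value`.

    line: 'a' or 'b' (inputs) or 'y' (output).
    """
    def gate(a, b):
        if line == "a": a = value
        if line == "b": b = value
        y = a & b
        return value if line == "y" else y
    return gate

# For every stuck-at fault, list the input vectors that reveal it
for line in "aby":
    for v in (0, 1):
        faulty = with_fault(line, v)
        detecting = [ab for ab in product((0, 1), repeat=2)
                     if faulty(*ab) != and_gate(*ab)]
        print(f"stuck-at-{v} on {line}: detected by {detecting}")
```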
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
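A sketch of repetitive-run estimation under assumed numbers: execute a version repeatedly on random inputs and estimate each seeded fault's error rate, illustrating the roughly log-linear spread of rates the experiment reports.

```python
import random

# assumed per-fault failure probabilities with roughly log-linear spacing
FAULT_RATES = {"fault_1": 0.05, "fault_2": 0.005, "fault_3": 0.0005}

def run_version(rng):
    """One execution: returns the set of faults triggered on a random input."""
    return {f for f, p in FAULT_RATES.items() if rng.random() < p}

rng = random.Random(42)
runs = 200_000
counts = {f: 0 for f in FAULT_RATES}
for _ in range(runs):
    for f in run_version(rng):
        counts[f] += 1

for f, c in counts.items():
    print(f"{f}: estimated rate {c / runs:.5f} (true {FAULT_RATES[f]})")
```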
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; and limitations and potential future work. The goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. The BESTEST-EX goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify the impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard; however, the reference software have been subjected to validation testing, including comparisons with empirical data.
Federal COBOL Compiler Testing Service Compiler Validation Request Information.
1977-05-09
background of the Federal COBOL Compiler Testing Service, which was set up by a memorandum of agreement between the National Bureau of Standards and the... Federal Standard, and the requirement of COBOL compiler validation in the procurement process. It also contains a list of all software products... produced by the Software Development Division in support of the FCCTS, as well as the Validation Summary Reports produced as a result of discharging the
Hardware in-the-Loop Demonstration of Real-Time Orbit Determination in High Earth Orbits
NASA Technical Reports Server (NTRS)
Moreau, Michael; Naasz, Bo; Leitner, Jesse; Carpenter, J. Russell; Gaylor, Dave
2005-01-01
This paper presents results from a study conducted at Goddard Space Flight Center (GSFC) to assess the real-time orbit determination accuracy of GPS-based navigation in a number of different high Earth orbital regimes. Measurements collected from a GPS receiver (connected to a GPS radio frequency (RF) signal simulator) were processed in a navigation filter in real-time, and resulting errors in the estimated states were assessed. For the most challenging orbit simulated, a 12 hour Molniya orbit with an apogee of approximately 39,000 km, mean total position and velocity errors were approximately 7 meters and 3 mm/s respectively. The study also makes direct comparisons between the results from the above hardware in-the-loop tests and results obtained by processing GPS measurements generated from software simulations. Care was taken to use the same models and assumptions in the generation of both the real-time and software simulated measurements, in order that the real-time data could be used to help validate the assumptions and models used in the software simulations. The study makes use of the unique capabilities of the Formation Flying Test Bed at GSFC, which provides a capability to interface with different GPS receivers and to produce real-time, filtered orbit solutions even when less than four satellites are visible. The result is a powerful tool for assessing onboard navigation performance in a wide range of orbital regimes, and a test-bed for developing software and procedures for use in real spacecraft applications.
Evaluation of features to support safety and quality in general practice clinical software
2011-01-01
Background Electronic prescribing is now the norm in many countries. We wished to find out if clinical software systems used by general practitioners in Australia include features (functional capabilities and other characteristics) that facilitate improved patient safety and care, with a focus on quality use of medicines. Methods Seven clinical software systems used in general practice were evaluated. Fifty software features that were previously rated as likely to have a high impact on safety and/or quality of care in general practice were tested and are reported here. Results The range of results for the implementation of 50 features across the 7 clinical software systems was as follows: 17-31 features (34-62%) were fully implemented, 9-13 (18-26%) partially implemented, and 9-20 (18-40%) not implemented. Key findings included: Access to evidence based drug and therapeutic information was limited. Decision support for prescribing was available but varied markedly between systems. During prescribing there was potential for medicine mis-selection in some systems, and linking a medicine with its indication was optional. The definition of 'current medicines' versus 'past medicines' was not always clear. There were limited resources for patients, and some medicines lists for patients were suboptimal. Results were provided to the software vendors, who were keen to improve their systems. Conclusions The clinical systems tested lack some of the features expected to support patient safety and quality of care. Standards and certification for clinical software would ensure that safety features are present and that there is a minimum level of clinical functionality that clinicians could expect to find in any system.
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
A toolbox and record for scientific models
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.
HDTS 2017.1 Testing and Verification Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, T.
2017-12-01
This report is a continuation of the series of Hunter Dose Tracking System (HDTS) Quality Assurance documents (Foley and Powell, 2010; Dixon, 2012; Whiteside, 2017b). In this report we have created a suite of automated test cases and a system to analyze the results of those tests, and we have documented the methodology to ensure the field system performs within specifications. The software test cases cover all of the functions, and interactions of functions, that are practical to test. With the developed framework, if software defects are discovered, it will be easy to create one or more test cases to reproduce the defect and ensure that code changes correct the defect.
[Quality assurance of a virtual simulation software: application to IMAgo and SIMAgo (ISOgray)].
Isambert, A; Beaudré, A; Ferreira, I; Lefkopoulos, D
2007-06-01
The virtual simulation process is often used to prepare three-dimensional conformal radiation therapy treatments. As the quality of the treatment depends heavily on this step, it is mandatory to perform extensive checks on this software before clinical use. The tests presented in this work have been carried out on the treatment planning system ISOgray (DOSIsoft), including the delineation module IMAgo and the virtual simulation module SIMAgo. Based on our experience, the most relevant checks from international protocols have been selected. These tests mainly focused on measuring and delineation tools and virtual simulation functionalities, and have been performed with three phantoms: the Quasar Multi-Purpose Body Phantom, the Quasar MLC Beam Geometry Phantom (Modus Medical Devices Inc.) and a phantom developed at Hospital Tenon. No major issues were identified while performing the tests. These checks have emphasized the need for the user to view the results displayed by virtual simulation software with a critical eye. The visualisation contrast, the slice thickness, and the calculation and display mode of 3D structures used by the software are all sources of uncertainty. A virtual simulation software quality assurance procedure has been written and applied to a set of CT images. Similar tests have to be performed periodically, and at a minimum at each major version change.
Overview of software development at the parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C. K.
1985-01-01
The development history of the data acquisition and data analysis software is discussed. The software development occurred between 1978 and 1984 in support of solar energy module testing at the Jet Propulsion Laboratory's Parabolic Dish Test Site, located within Edwards Test Station. The development went through incremental stages, starting with a simple single-user BASIC set of programs and progressing to the relatively complex multi-user FORTRAN system that was used until the termination of the project. Additional software in support of testing is discussed, including software in support of a meteorological subsystem and the Test Bed Concentrator Control Console interface. Conclusions and recommendations for further development are discussed.
MoniQA: a general approach to monitor quality assurance
NASA Astrophysics Data System (ADS)
Jacobs, J.; Deprez, T.; Marchal, G.; Bosmans, H.
2006-03-01
MoniQA ("Monitor Quality Assurance") is a new, non-commercial, independent quality assurance software application developed in our medical physics team. It is a complete Java TM - based modular environment for the evaluation of radiological viewing devices and it thus fits in the global quality assurance network of our (film less) radiology department. The purpose of the software tool is to guide the medical physicist through an acceptance protocol and the radiologist through a constancy check protocol by presentation of the necessary test patterns and by automated data collection. Data are then sent to a central management system for further analysis. At the moment more than 55 patterns have been implemented, which can be grouped in schemes to implement protocols (i.e. AAPMtg18, DIN and EUREF). Some test patterns are dynamically created and 'drawn' on the viewing device with random parameters as is the case in a recently proposed new pattern for constancy testing. The software is installed on 35 diagnostic stations (70 monitors) in a film less radiology department. Learning time was very limited. A constancy check -with the new pattern that assesses luminance decrease, resolution problems and geometric distortion- takes only 2 minutes and 28 seconds per monitor. The modular approach of the software allows the evaluation of new or emerging test patterns. We will report on the software and its usability: practicality of the constancy check tests in our hospital and on the results from acceptance tests of viewing stations for digital mammography.
NASA Astrophysics Data System (ADS)
Frailis, M.; Maris, M.; Zacchei, A.; Morisset, N.; Rohlfs, R.; Meharga, M.; Binko, P.; Türler, M.; Galeotta, S.; Gasparo, F.; Franceschi, E.; Butler, R. C.; D'Arcangelo, O.; Fogliani, S.; Gregorio, A.; Lowe, S. R.; Maggio, G.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Pasian, F.; Perrotta, F.; Sandri, M.; Terenzi, L.; Tomasi, M.; Zonca, A.
2009-12-01
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which has to strictly adhere to the project schedule to be ready for the launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software have followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For housekeeping telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and perform a comparison with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrate that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
Designing Test Suites for Software Interactions Testing
2004-01-01
the annual cost of insufficient software testing methods and tools in the United States is between 22.2 and 59.5 billion US dollars [13, 14].
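The paper's own constructions are not recoverable from this record; as a generic illustration of interaction testing, the sketch below greedily builds a pairwise (2-way) covering suite: repeatedly add the candidate test that covers the most uncovered parameter-value pairs.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy 2-way covering suite for a {name: [values]} parameter model."""
    names = list(params)
    uncovered = {((n1, v1), (n2, v2))
                 for n1, n2 in combinations(names, 2)
                 for v1 in params[n1] for v2 in params[n2]}
    suite = []
    while uncovered:
        best, best_gain = None, -1
        for combo in product(*params.values()):        # all full assignments
            test = dict(zip(names, combo))
            gain = sum(1 for (p1, p2) in uncovered
                       if test[p1[0]] == p1[1] and test[p2[0]] == p2[1])
            if gain > best_gain:
                best, best_gain = test, gain
        suite.append(best)
        uncovered = {(p1, p2) for (p1, p2) in uncovered
                     if not (best[p1[0]] == p1[1] and best[p2[0]] == p2[1])}
    return suite

suite = pairwise_suite({"os": ["linux", "win"],
                        "db": ["pg", "mysql", "sqlite"],
                        "browser": ["ff", "chrome"]})
print(len(suite), "tests instead of", 2 * 3 * 2)       # typically 6 or 7, not 12
```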
Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-04-13
Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish the feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization.
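FitNesse drives tests from wiki-style tables against the EHR database; as a self-contained analogue (not the authors' FitNesse suite), the sketch below expresses the same table-as-executable-requirement idea with pytest, for an assumed version of the stroke swallow-screening rule.

```python
import pytest

def advisory_fires(setting, suspected_stroke, route, swallow_screen_done):
    """Assumed CDS rule: alert ED nurses before oral meds in suspected stroke."""
    return (setting == "ED" and suspected_stroke
            and route == "oral" and not swallow_screen_done)

# one row per expected behavior, reviewed with clinicians before the build
CASES = [
    # setting, stroke, route,  screened, expected
    ("ED",     True,   "oral", False,    True),    # core firing case
    ("ED",     True,   "oral", True,     False),   # already screened
    ("ED",     True,   "IV",   False,    False),   # non-oral route
    ("ED",     False,  "oral", False,    False),   # no stroke suspicion
    ("ICU",    True,   "oral", False,    False),   # outside the ED
]

@pytest.mark.parametrize("setting,stroke,route,screened,expected", CASES)
def test_swallow_advisory(setting, stroke, route, screened, expected):
    assert advisory_fires(setting, stroke, route, screened) == expected
```

Run once as acceptance tests when the rule is built, the same table then serves as a regression suite for every later configuration change.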
SU-E-T-103: Development and Implementation of Web Based Quality Control Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Studinski, R; Taylor, R; Angers, C
Purpose: Historically many radiation medicine programs have maintained their Quality Control (QC) test results in paper records or Microsoft Excel worksheets. Both these approaches present significant logistical challenges and are not predisposed to data review and approval. It has been our group's aim to develop and implement web-based software designed not just to record and store QC data in a centralized database, but to provide scheduling and data review tools to help manage a radiation therapy clinic's equipment quality control program. Methods: The software was written in the Python programming language using the Django web framework. In order to promote collaboration and validation from other centres, the code was made open source and is freely available to the public via an online source code repository. The code was written to provide a common user interface for data entry, formalize the review and approval process, and offer automated data trending and process control analysis of test results. Results: As of February 2014, our installation of QATrack+ has 180 tests defined in its database and has collected ∼22 000 test results, all of which have been reviewed and approved by a physicist via QATrack+'s review tools. These results include records for quality control of Elekta accelerators, CT simulators, our brachytherapy programme, TomoTherapy and CyberKnife units. Currently at least 5 other centres are known to be running QATrack+ clinically, forming the start of an international user community. Conclusion: QATrack+ has proven to be an effective tool for collecting radiation therapy QC data, allowing for rapid review and trending of data for a wide variety of treatment units. As free and open source software, all source code, documentation and a bug tracker are available to the public at https://bitbucket.org/tohccmedphys/qatrackplus/.
Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad
2011-12-01
The global challenge for solid organ transplantation programs is to distribute organs to the highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues referred to as eplets as essential components of HLA-epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory tests results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to this data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software, which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pairs selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.
A Practical Approach to Modified Condition/Decision Coverage
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Veerhusen, Dan S.
2001-01-01
Testing of software intended for safety-critical applications in commercial transport aircraft must achieve modified condition/decision coverage (MC/DC) of the software structure. This requirement causes anxiety for many within the aviation software community. Results of a survey of the aviation software industry indicate that many developers believe that meeting the MC/DC requirement is difficult and the cost is exorbitant. Some of the difficulties stem, no doubt, from the scant information available on the subject. This paper provides a practical 5-step approach for assessing MC/DC for aviation software products, and an analysis of some types of errors expected to be caught when MC/DC is achieved.
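To make the MC/DC requirement concrete: each condition in a decision must be shown to independently affect the decision's outcome, which a minimal set of N+1 test vectors can achieve for N conditions. The following sketch (an illustration, not taken from the paper) verifies one such set for the decision (A and B) or C:

    # MC/DC illustration for the decision D = (A and B) or C.
    # Each condition must independently toggle the outcome while the
    # others are held fixed; 4 vectors suffice for 3 conditions (N+1).
    def decision(a, b, c):
        return (a and b) or c

    tests = {
        "t1": (True,  True,  False),   # D = True
        "t2": (False, True,  False),   # toggling A vs t1 flips D -> A covered
        "t3": (True,  False, False),   # toggling B vs t1 flips D -> B covered
        "t4": (False, True,  True),    # toggling C vs t2 flips D -> C covered
    }
    assert decision(*tests["t1"]) != decision(*tests["t2"])  # A independent
    assert decision(*tests["t1"]) != decision(*tests["t3"])  # B independent
    assert decision(*tests["t2"]) != decision(*tests["t4"])  # C independent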
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and high memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with large numbers of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. Locking onto local solutions, a common problem in genetic algorithm based soft computing methods, is likewise eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system were reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
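As background for the selection mechanism named above, a plain roulette wheel selection routine can be sketched as follows. This is the textbook operator, not the authors' modified middle-region variant:

    # Textbook roulette wheel selection: pick one individual with probability
    # proportional to its fitness. Generic illustration only.
    import random

    def roulette_wheel_select(population, fitnesses):
        total = sum(fitnesses)
        pick = random.uniform(0, total)
        running = 0.0
        for individual, fitness in zip(population, fitnesses):
            running += fitness
            if running >= pick:
                return individual
        return population[-1]   # guard against floating point round-off

    # Example: fitter candidates are selected more often on average.
    chosen = roulette_wheel_select(["a", "b", "c"], [1.0, 3.0, 6.0])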
Automation of electromagnetic compatibility (EMC) test facilities
NASA Technical Reports Server (NTRS)
Harrison, C. A.
1986-01-01
Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedures, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.
Validating New Software for Semiautomated Liver Volumetry--Better than Manual Measurement?
Noschinski, L E; Maiwald, B; Voigt, P; Wiltberger, G; Kahn, T; Stumpp, P
2015-09-01
This prospective study compared a manual program for liver volumetry with semiautomated software. The hypothesis was that the semiautomated software would be faster, more accurate and less dependent on the evaluator's experience. Ten patients undergoing hemihepatectomy were included in this IRB approved study after written informed consent. All patients underwent a preoperative abdominal 3-phase CT scan, which was used for whole liver volumetry and volume prediction for the liver part to be resected. Two different types of software were used: 1) manual method: borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of liver volume with manual assistance for definition of Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Volumetry of the resected liver scan showed excellent correlation to water displacement volumetry (manual: ρ = 0.997; semiautomated software: ρ = 0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33% vs. 57%, p = 0.002). The semiautomated software was almost four times faster for volumetry of the whole liver (manual: 6:59 ± 3:04 min; semiautomated: 1:47 ± 1:11 min). Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience. Both tested types of software allow exact volumetry of resected liver parts. Preoperative prediction can be performed more accurately with the semiautomated software. The semiautomated software is nearly four times faster than the tested manual program and less dependent on the user's experience. © Georg Thieme Verlag KG Stuttgart · New York.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting for not only formal verification and program testing, but also the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or better). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability, and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; D) quantify and justify the reliability estimate for systems developed using various methods.
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G M
2005-05-03
For a number of years I had the pleasure of teaching Testing Seminars all over the world and meeting and learning from others in our field. Over a twelve year period, I always asked the following questions to Software Developers, Test Engineers, and Managers who took my two or three day seminar on Software Testing: 'When was the first time you heard the word test'? 'Where were you when you first heard the word test'? 'Who said the word test'? 'How did the word test make you feel'? Most of the thousands of responses were similar to 'It was my third grade teacher at school, and I felt nervous and afraid'. Now there were a few exceptions like 'It was my third grade teacher, and I was happy and excited to show how smart I was'. But by and large, my informal survey found that 'testing' is a word to which most people attach negative meanings, based on its historical context. So why is this important to those of us in the software development business? Because I have found that a preponderance of software developers do not get real excited about hearing that the software they just wrote is going to be 'tested' by the Test Group. Typical reactions I have heard over the years run from: 'I'm sure there is nothing wrong with the software, so go ahead and test it, better you find defects than our customers'. to these extremes: 'There is no need to test my software because there is nothing wrong with it'. 'You are not qualified to test my software because you don't know as much as I do about it'. 'If any Test Engineers come into our office again to test our software we will throw them through the third floor window'. So why is there such a strong negative reaction to testing? It is primitive. It goes back to grade school for many of us. It is a negative word that conjures up negative emotions. In other words, 'test' is a four letter word. How many of us associate 'Joy' with 'Test'? Not many. It is hard for most of us to reprogram associations learned at an early age. So what can we do about it (short of hypnotic therapy for software developers)? Well one concept I have used (and still use) is to not call testing 'testing'. Call it something else. Ever wonder why most of the Independent Software Testing groups are called Software Quality Assurance groups? Now you know. Software Quality Assurance is not such a negatively charged phrase, even though Software Quality Assurance is much more than simply testing. It was a real blessing when the concept of Validation and Verification came about for software. Now I define Validation to mean assuring that the product produced does the right thing (usually what the customer wants it to do), and verification means that the product was built the right way (in accordance with some good design principles and practices). So I have deliberately called the System Test Group the Verification and Validation Group, or V&V Group, as a way of avoiding the negative image problem. I remember once having a conversation with a developer colleague who said, in the heat of battle, that it was fine to V&V his code, just don't test it! Once again V&V includes many things besides testing, but it just doesn't sound like an onerous thing to do to software. In my current job, working at a highly regarded national laboratory with world renowned physicists, I have again encountered the negativity about testing software. Except here they don't take kindly to Software Quality Assurance or Software Verification and Validation either.
After all, software is just a trivial tool to automate algorithms that implement physics models. Testing, SQA, and V&V take time and get in the way of completing ground breaking science experiments. So I have again had to change the name of software testing to something less negative in the physics world. I found (the hard way) that if I requested more time to do software experimentation, the physicist's resistance melted. And so the conversation continues, 'We have time to run more software experiments. Just don't waste any time testing the software'! In case the concept of not calling testing 'testing' appeals to you, and there may be an opportunity for you to take the sting out of the name at your place of employment, I have compiled a table of things that testing could be called besides 'testing'. Of course we can embellish this by adding some good sounding prefixes and suffixes also. To come up with alternate names for testing, pick a word from columns A, B, and C in the table below. For instance Unified Acceptance Trials (A2,B7,C3) or Tailored Observational Demonstration (A6,B5,C5) or Agile Criteria Scoring (A3,B8,C8) or Rapid Requirement Proof (A1,B9,C7) or Satisfaction Assurance (B10,C1). You can probably think of some additional combinations appropriate for your industry.
ERIC Educational Resources Information Center
Clarke, Peter J.; Davis, Debra; King, Tariq M.; Pava, Jairo; Jones, Edward L.
2014-01-01
As software becomes more ubiquitous and complex, the cost of software bugs continues to grow at a staggering rate. To remedy this situation, there needs to be major improvement in the knowledge and application of software validation techniques. Although there are several software validation techniques, software testing continues to be one of the…
RESULTS OF QA/QC TESTING OF EPA BENCHMARK DOSE SOFTWARE VERSION 1.2
EPA is developing benchmark dose software (BMDS) to support cancer and non-cancer dose-response assessments. Following the recent public review of BMDS version 1.1b, EPA developed a Hill model for evaluating continuous data, and improved the user interface and Multistage, Polyno...
Experience with Ada on the F-18 High Alpha Research Vehicle Flight Test Program
NASA Technical Reports Server (NTRS)
Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael
1992-01-01
Considerable experience was acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft was highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used the Ada flight software for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability, as well as documentation requirements.
Experience with Ada on the F-18 High Alpha Research Vehicle flight test program
NASA Technical Reports Server (NTRS)
Regenie, Victoria A.; Earls, Michael; Le, Jeanette; Thomson, Michael
1994-01-01
Considerable experience has been acquired with Ada at the NASA Dryden Flight Research Facility during the on-going High Alpha Technology Program. In this program, an F-18 aircraft has been highly modified by the addition of thrust-vectoring vanes to the airframe. In addition, substantial alteration was made in the original quadruplex flight control system. The result is the High Alpha Research Vehicle. An additional research flight control computer was incorporated in each of the four channels. Software for the research flight control computer was written in Ada. To date, six releases of this software have been flown. This paper provides a detailed description of the modifications to the research flight control system. Efficient ground-testing of the software was accomplished by using simulations that used the Ada flight software for portions of their software. These simulations are also described. Modifying and transferring the Ada flight software to the software simulation configuration has allowed evaluation of this language. This paper also discusses such significant issues in using Ada as portability, modifiability, and testability, as well as documentation requirements.
NASA Technical Reports Server (NTRS)
Haas, Lin; Massey, Christopher; Baraban, Dmitri
2003-01-01
This paper presents the Global Positioning System (GPS) navigation results from the Communications and Navigation Demonstration on Shuttle (CANDOS) experiment flown on STS-107. This experiment was the initial flight of a Low Power Transceiver (LPT) that featured high capacity space-space and space-ground communications and GPS-based navigation capabilities. The LPT also hosted the GPS Enhanced Orbit Determination Experiment (GEODE) orbit determination software. All CANDOS test data were recovered during the mission using LPT communications links via the Tracking and Data Relay Satellite System (TDRSS). An overview of the LPT's navigation software and the GPS experiment timeline is presented, along with comparisons of test results to the NASA Johnson Space Center (JSC) real-time ground navigation vectors and Best Estimate of Trajectory (BET).
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software testing engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases manually, which is time-consuming, inefficient, and prone to gaps. Using the high-reliability model-based testing (MBT) tools developed by our company, a one-time modeling effort can automatically generate test case documents, which is efficient and accurate. A UML model describes the process accurately, but expressing the requirements depends on the paths that can be reached. Existing path generation algorithms are either too simple, unable to combine branch paths and loops into composite paths, or too cumbersome, generating overly complicated path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our experience with ten aerospace projects, we developed a tailored path generation algorithm for UML graphic descriptions of aerospace test software.
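The path generation problem described above can be pictured with a bounded depth-first enumeration over a control flow graph, where each edge may be traversed at most k times so that loops are unrolled a limited number of times. This sketch is a generic illustration, not the authors' tailored algorithm:

    # Enumerate start-to-end paths in a directed graph, allowing each edge
    # to be used at most max_edge_visits times (bounded loop unrolling).
    # Generic illustration only; the paper's tailored algorithm differs.
    from collections import defaultdict

    def enumerate_paths(edges, start, end, max_edge_visits=2):
        graph = defaultdict(list)
        for src, dst in edges:
            graph[src].append(dst)
        paths, used = [], defaultdict(int)

        def dfs(node, path):
            if node == end:
                paths.append(list(path))
                return
            for nxt in graph[node]:
                if used[(node, nxt)] < max_edge_visits:
                    used[(node, nxt)] += 1
                    dfs(nxt, path + [nxt])
                    used[(node, nxt)] -= 1

        dfs(start, [start])
        return paths

    # A graph with a branch and a loop back-edge (C -> B):
    print(enumerate_paths([("A","B"), ("B","C"), ("C","B"), ("B","D")], "A", "D"))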
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.
1988-01-01
The Wall Adjustment Strategy (WAS) software provides successful on-line control of the 2-D flexible walled test section of the Langley 0.3-m Transonic Cryogenic Tunnel. This software package allows the level of operator intervention to be regulated as necessary for research and production type 2-D testing using an Adaptive Wall Test Section (AWTS). The software is designed to accept modification for future requirements, such as 3-D testing, with a minimum of complexity. The WAS software described is an attempt to provide a user friendly package which could be used to control any flexible walled AWTS. Control system constraints influence the details of data transfer, not the data type, so this entire software package could be used in different control systems if suitable interface software is available. A complete overview of the software highlights the data flow paths, the modular architecture of the software and the various operating and analysis modes available. A detailed description of the software modules includes listings of the code. A user's manual is provided to explain task generation, the operating environment, user options and what to expect at execution.
Design and Analysis of Tooth Impact Test Rig for Spur Gear
NASA Astrophysics Data System (ADS)
Ghazali, Wafiuddin Bin Md; Aziz, Ismail Ali Bin Abdul; Daing Idris, Daing Mohamad Nafiz Bin; Ismail, Nurazima Binti; Sofian, Azizul Helmi Bin
2016-02-01
This paper concerns the design and analysis of a prototype tooth impact test rig for spur gears. The test rig was fabricated and an analysis was conducted to study its limitations and capabilities. The design of the rig was analysed to ensure that no problems would occur during testing and that reliable data could be obtained. From the results of the analysis, the maximum load that can be applied, the factor of safety of the machine, and the stresses on the test rig parts were determined; these are important considerations in the design of the test rig. The materials used for fabrication of the test rig were also discussed and analysed. MSC Patran/Nastran software was used to analyse the model, which was designed using SolidWorks 2014 software. Based on the results, limitations were found in the initial design, and the test rig design needs to be improved in order for the rig to operate properly.
Integrating personalized medical test contents with XML and XSL-FO.
Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas
2011-03-01
In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module to automatically generate test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. The software module by use of an open source print formatter consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly the module permitted an individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries, and can be efficiently deployed on a faculty-wide scale.
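The three-step pipeline above can be illustrated with a small sketch that builds an XML intermediate from queried MCQ rows; an XSL-FO stylesheet plus an open source print formatter would then turn this document into paginated sheets. The element names and build_test_sheet() helper are assumptions for illustration, not the module's actual schema:

    # Step 2 of the pipeline: turn queried MCQ rows into an XML intermediate.
    # Element names are illustrative; step 3 would feed this document and an
    # XSL-FO stylesheet to a print formatter to produce paginated output.
    import xml.etree.ElementTree as ET

    def build_test_sheet(student_id, mcq_rows):
        sheet = ET.Element("testsheet", attrib={"student": student_id})
        for number, (stem, options) in enumerate(mcq_rows, start=1):
            item = ET.SubElement(sheet, "item", attrib={"n": str(number)})
            ET.SubElement(item, "stem").text = stem
            for label, option_text in options:
                ET.SubElement(item, "option", attrib={"label": label}).text = option_text
        return ET.tostring(sheet, encoding="unicode")

    xml_doc = build_test_sheet("12345", [("Which organ...?", [("A", "Liver"), ("B", "Kidney")])])

Individual randomization of item order, as mentioned in the abstract, would amount to shuffling mcq_rows per student before building each sheet.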
NASA Technical Reports Server (NTRS)
Tamayo, Tak Chai
1987-01-01
Quality of software is not only vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as the types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixes required, are also presented.
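While the thesis's specific statistical model is not reproduced here, the flavor of deriving reliability measures from failure history can be sketched with a classical reliability growth model, the Goel-Okumoto NHPP, fitted to cumulative failure counts. The data points below are invented for illustration:

    # Fit a Goel-Okumoto reliability growth model m(t) = a * (1 - exp(-b*t))
    # to cumulative failure counts observed during testing. Classical model
    # shown for illustration only; the data points are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_failures(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    cumulative_failures = np.array([9, 16, 21, 25, 27, 29, 30, 31], dtype=float)

    (a, b), _ = curve_fit(mean_failures, weeks, cumulative_failures, p0=(35.0, 0.3))
    remaining = a - cumulative_failures[-1]   # expected failures still latent
    print(f"estimated total faults a = {a:.1f}, remaining ~ {remaining:.1f}")

A stopping criterion of the kind the abstract mentions could then be phrased as "terminate testing once the estimated remaining failures fall below a target threshold."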
Integrated testing and verification system for research flight software
NASA Technical Reports Server (NTRS)
Taylor, R. N.
1979-01-01
The MUST (Multipurpose User-oriented Software Technology) program is being developed to cut the cost of producing research flight software through a system of software support tools. An integrated verification and testing capability was designed as part of MUST. Documentation, verification and test options are provided with special attention on real-time, multiprocessing issues. The needs of the entire software production cycle were considered, with effective management and reduced lifecycle costs as foremost goals.
NASA Astrophysics Data System (ADS)
Ligori, S.; Corcione, L.; Capobianco, V.; Bonino, D.; Sirri, G.; Fornari, F.; Giacomini, F.; Patrizii, L.; Valenziano, L.; Travaglini, R.; Colodro, C.; Bortoletto, F.; Bonoli, C.; Chiarusi, T.; Margiotta, A.; Mauri, N.; Pasqualini, L.; Spurio, M.; Tenti, M.; Dal Corso, F.; Dusini, S.; Laudisio, F.; Sirignano, C.; Stanco, L.; Ventura, S.; Auricchio, N.; Balestra, A.; Franceschi, E.; Morgante, G.; Trifoglio, M.; Medinaceli, E.; Guizzo, G. P.; Debei, S.; Stephen, J. B.
2016-07-01
In this paper we describe the detailed design of the application software (ASW) of the instrument control unit (ICU) of NISP, the Near-Infrared Spectro-Photometer of the Euclid mission. This software is based on a real-time operating system (RTEMS) and will interface with all the subunits of NISP, as well as the command and data management unit (CDMU) of the spacecraft for telecommand and housekeeping management. We briefly review the main requirements driving the design and the architecture of the software that is approaching the Critical Design Review level. The interaction with the data processing unit (DPU), which is the intelligent subunit controlling the detector system, is described in detail, as well as the concept for the implementation of the failure detection, isolation and recovery (FDIR) algorithms. The first version of the software is under development on a Breadboard model produced by AIRBUS/CRISA. We describe the results of the tests and the main performances and budgets.
Application of LANDSAT system for improving methodology for inventory and classification of wetlands
NASA Technical Reports Server (NTRS)
Gilmer, D. S. (Principal Investigator)
1976-01-01
The author has identified the following significant results. A newly developed software system for generating statistics on surface water features was tested using LANDSAT data acquired prior to 1975. This software test provided a satisfactory evaluation of the system and also allowed expansion of the database on prairie water features. The software system recognizes water on the basis of a classification algorithm. This classification is accomplished by level thresholding a single near-infrared data channel. After each pixel is classified as water or nonwater, the software system then recognizes ponds or lakes as sets of contiguous pixels, or as single isolated pixels in the case of very small ponds. Pixels are considered to be contiguous if they are adjacent between successive scan lines. After delineating each water feature, the software system assigns the feature a position based upon a geographic grid system and calculates the feature's planimetric area, its perimeter, and a parameter known as the shape factor.
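The classification and contiguity logic described above maps naturally onto a threshold-plus-connected-components sketch like the following. The threshold value, the array, and the per-pixel area are invented, and scipy's default 4-connectivity stands in for the scan-line adjacency rule of the original system:

    # Classify water pixels by thresholding a near-infrared channel, then
    # group contiguous pixels into ponds/lakes and report each feature's area.
    # Threshold and data are invented; connectivity approximates the original rule.
    import numpy as np
    from scipy import ndimage

    near_ir = np.array([[120, 130,  40,  35],
                        [125,  38,  36, 140],
                        [ 42,  37, 150, 145]], dtype=float)

    water = near_ir < 50.0                     # low NIR reflectance -> water
    labels, n_features = ndimage.label(water)  # contiguous pixels share a label
    pixel_area = 0.45                          # hypothetical hectares per pixel
    for feature_id in range(1, n_features + 1):
        area = (labels == feature_id).sum() * pixel_area
        print(f"feature {feature_id}: {area:.2f} ha")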
Wilson, Anna J; Revkin, Susannah K; Cohen, David; Cohen, Laurent; Dehaene, Stanislas
2006-01-01
Background: In a companion article [1], we described the development and evaluation of software designed to remediate dyscalculia. This software is based on the hypothesis that dyscalculia is due to a "core deficit" in number sense or in its access via symbolic information. Here we review the evidence for this hypothesis, and present results from an initial open-trial test of the software in a sample of nine 7–9 year old children with mathematical difficulties. Methods: Children completed adaptive training on numerical comparison for half an hour a day, four days a week, over a period of five weeks. They were tested before and after intervention on their performance in core numerical tasks: counting, transcoding, base-10 comprehension, enumeration, addition, subtraction, and symbolic and non-symbolic numerical comparison. Results: Children showed specific increases in performance on core number sense tasks. Speed of subitizing and numerical comparison increased by several hundred msec. Subtraction accuracy increased by an average of 23%. Performance on addition and base-10 comprehension tasks did not improve over the period of the study. Conclusion: Initial open-trial testing showed promising results, and suggested that the software was successful in increasing number sense over the short period of the study. However these results need to be followed up with larger, controlled studies. The issues of transfer to higher-level tasks, and of the best developmental time window for intervention, also need to be addressed. PMID:16734906
15 CFR 995.27 - Format validation software testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
15 Commerce and Foreign Trade 3 (2010-01-01). Certification Requirements for NOAA Hydrographic Products and Services; Certification Requirements for... of NOAA ENC Products. § 995.27 Format validation software testing. Tests shall be performed verifying...
The Design of Software for Three-Phase Induction Motor Test System
NASA Astrophysics Data System (ADS)
Haixiang, Xu; Fengqi, Wu; Jiai, Xue
2017-11-01
The design and development of control system software is important for three-phase induction motor test equipment, and requires thorough familiarity with the test process and the control procedures of the test equipment. In this paper, the software is developed in the VB language according to the national standard (GB/T1032-2005) for three-phase induction motor test methods. The control system, the data analysis software, and the implementation of the motor test system are each described; the system offers high automation and high accuracy.
PySE: Software for extracting sources from radio images
NASA Astrophysics Data System (ADS)
Carbone, D.; Garsden, H.; Spreeuw, H.; Swinbank, J. D.; van der Horst, A. J.; Rowlinson, A.; Broderick, J. W.; Rol, E.; Law, C.; Molenaar, G.; Wijers, R. A. M. J.
2018-04-01
PySE is a Python software package for finding and measuring sources in radio telescope images. The software was designed to detect sources in the LOFAR telescope images, but can be used with images from other radio telescopes as well. We introduce the LOFAR Telescope, the context within which PySE was developed, the design of PySE, and describe how it is used. Detailed experiments on the validation and testing of PySE are then presented, along with results of performance testing. We discuss some of the current issues with the algorithms implemented in PySE and their interaction with LOFAR images, concluding with the current status of PySE and its future development.
A digital flight control system verification laboratory
NASA Technical Reports Server (NTRS)
De Feo, P.; Saib, S.
1982-01-01
A NASA/FAA program has been established for the verification and validation of digital flight control systems (DFCS), with the primary objective being the development and analysis of automated verification tools. In order to enhance the capabilities, effectiveness, and ease of use of the test environment, software verification tools can be applied. The tool design includes a static analyzer, an assertion generator, a symbolic executor, a dynamic analysis instrument, and an automated documentation generator. Static and dynamic tools are integrated with error detection capabilities, resulting in a facility which analyzes a representative testbed of DFCS software. Future investigations will focus in particular on increasing the number of software test tools and on assessing cost effectiveness.
NASA Astrophysics Data System (ADS)
Brouwer, Albert; Brown, David; Tomuta, Elena
2017-04-01
To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundy, D; Tryggestad, E; Beltran, C
Purpose: To develop daily and monthly quality assurance (QA) programs in support of a new spot-scanning proton treatment facility using a combination of commercial and custom equipment and software. Emphasis was placed on efficiency and evaluation of key quality parameters. Methods: The daily QA program was developed to test output, spot size and position, proton beam energy, and image guidance using the Sun Nuclear Corporation rf-DQA™3 device and Atlas QA software. The program utilizes standard Atlas linear accelerator tests repurposed for proton measurements and a custom jig for indexing the device to the treatment couch. The monthly QA program was designed to test mechanical performance, image quality, radiation quality, isocenter coincidence, and safety features. Many of these tests are similar to linear accelerator QA counterparts, but many require customized test design and equipment. Coincidence of imaging, laser marker, mechanical, and radiation isocenters, for instance, is verified using a custom film-based device devised and manufactured at our facility. Proton spot size and position as a function of energy are verified using a custom spot pattern incident on film and analysis software developed in-house. More details concerning the equipment and software developed for monthly QA are included in the supporting document. Thresholds for daily and monthly tests were established via perturbation analysis, early experience, and/or proton system specifications and associated acceptance test results. Results: The periodic QA program described here has been in effect for approximately 9 months and has proven efficient and sensitive to sub-clinical variations in treatment delivery characteristics. Conclusion: Tools and professional guidelines for periodic proton system QA are not as well developed as their photon and electron counterparts. The program described here efficiently evaluates key quality parameters and, while specific to the needs of our facility, could be readily adapted to other proton centers.
Analysis of key technologies for virtual instruments metrology
NASA Astrophysics Data System (ADS)
Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang
2008-12-01
Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs includes two aspects: measurement functions and software characteristics. The complexity of software imposes difficulties on metrological testing of VIs. Key approaches and technologies for the metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing and statistics with the support of the powerful computing capability of the PC. Another concern is evaluation of software features such as correctness, reliability, stability, security and real-time behavior of VIs. Technologies from the software engineering, software testing and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing and modeling approaches can be used to evaluate the reliability of modules, components, applications and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. In order to help metrology institutions perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing and system benchmarking, a framework for the automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing and security assessment demonstrates the feasibility of the proposed automatic framework.
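One of the uncertainty-evaluation approaches listed above, simulation backed by the PC's computing capability, can be sketched as a Monte Carlo propagation through a measurement function. The measurement model and input uncertainties below are invented for illustration:

    # Monte Carlo evaluation of measurement uncertainty: sample the inputs
    # from their assumed distributions, push them through the measurement
    # function, and summarize the output spread. All figures are invented.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 100_000
    voltage = rng.normal(5.00, 0.02, n)     # volts, standard uncertainty 0.02
    current = rng.normal(0.100, 0.001, n)   # amperes, standard uncertainty 0.001

    resistance = voltage / current          # measurement model R = V / I
    print(f"R = {resistance.mean():.2f} ohm, u(R) = {resistance.std(ddof=1):.3f} ohm")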
UWB Tracking Software Development
NASA Technical Reports Server (NTRS)
Gross, Julia; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dusl, John; Ni, Jianjun; Rafford, Melinda
2006-01-01
An Ultra-Wideband (UWB) two-cluster Angle of Arrival (AOA) tracking prototype system is currently being developed and tested at NASA Johnson Space Center for space exploration applications. This talk discusses the software development efforts for this UWB two-cluster AOA tracking system. The role the software plays in this system is to take waveform data from two UWB radio receivers as an input, feed this input into an AOA tracking algorithm, and generate the target position as an output. The architecture of the software (Input/Output Interface and Algorithm Core) will be introduced in this talk. The development of this software has three phases. In Phase I, the software is mostly Matlab driven and calls C++ socket functions to provide the communication links to the radios. This is beneficial in the early stage when it is necessary to frequently test changes in the algorithm. Phase II of the development is to have the software mostly C++ driven and call a Matlab function for the AOA tracking algorithm. This is beneficial in order to send the tracking results to other systems and also to improve the tracking update rate of the system. The third phase is part of future work and is to have the software completely C++ driven with a graphics user interface. This software design enables the fine resolution tracking of the UWB two-cluster AOA tracking system.
Objectives and first results of the NASA SETI sky survey field tests at Goldstone
NASA Technical Reports Server (NTRS)
Gulkis, S.; Klein, M. J.; Olsen, E. T.; Crow, R. B.; Gosline, R. M.; Downs, G. S.; Quirk, M. P.; Lokshin, A.; Solomon, J.
1986-01-01
Field tests of SETI (Search for Extraterrestrial Intelligence) prototype hardware and software began in March 1985 at Goldstone. With emphasis on the sky survey component of the NASA SETI search strategy, the article describes the survey characteristics, the detection strategy, and preliminary results of system tests.
Mining dynamic noteworthy functions in software execution sequences.
Zhang, Bing; Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong
2017-01-01
As the quality of crucial entities directly affects that of the software, their identification and protection are an important premise for effective software development, management, maintenance and testing, and thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, disregard how the software actually runs. In this paper, from the perspective of the software execution process, we propose an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, by decompiling the software and tracking stack changes, execution traces composed of series of function addresses are acquired. These traces are modeled as execution sequences and then simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS with a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance are designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, the functions are sorted by their noteworthiness. Comparisons were conducted against two traditional complex network-based node mining methods, PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely.
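The inner/inter-importance idea can be given rough shape with a frequency-based sketch: score each function by how often it appears within traces and across traces, then rank. The scoring below is a deliberate simplification, not the paper's actual indicators:

    # Rank functions from execution traces by a combined within-trace and
    # across-trace frequency score. Simplified stand-in for the paper's
    # inner-importance/inter-importance indicators.
    from collections import Counter

    def rank_functions(traces):
        inner = Counter()                    # total occurrences across all traces
        inter = Counter()                    # number of traces containing the fn
        for trace in traces:
            inner.update(trace)
            inter.update(set(trace))
        scores = {fn: inner[fn] * inter[fn] / len(traces) for fn in inner}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    traces = [["main", "parse", "eval", "eval"],
              ["main", "parse", "emit"],
              ["main", "eval", "emit"]]
    print(rank_functions(traces))    # "main" ranks highest here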
1977-05-01
C3I) programs; (4) simulator/trainer programs; and (5) automatic test equipment software. Each of these five types of software represents a problem... coded in the same source language, say JOVIAL, then source-language statements would be a better measure, since that would automatically compensate... whether done at no (visible) cost or by renegotiation of the contract. Fig. 2.3 illustrates these with solid lines. It is conjectured that the change
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Sadlier, Ronald J
We show how to extend the paradigm of software-defined communication to include quantum communication systems. We introduce the decomposition of a quantum communication terminal into layers separating the concerns of the hardware, software, and middleware. We provide detailed descriptions of how each component operates and we include results of an implementation of the super-dense coding protocol. We argue that the versatility of software-defined quantum communication test beds can be useful for exploring new regimes in communication and rapidly prototyping new systems.
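The super-dense coding protocol mentioned above is compact enough to verify with a state-vector sketch: a shared Bell pair lets one transmitted qubit carry two classical bits. The numpy sketch below is a generic protocol check, not the authors' software-defined terminal code:

    # State-vector check of superdense coding: encode two classical bits via a
    # local Pauli on one half of a Bell pair, then decode with CNOT and H.
    # Generic protocol demonstration, unrelated to the authors' test bed.
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1.0, -1.0])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])   # control = qubit 0

    def superdense(bits):
        psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # Bell pair (|00>+|11>)/sqrt(2)
        enc = I
        if bits[1]: enc = X @ enc                   # low bit  -> X
        if bits[0]: enc = Z @ enc                   # high bit -> Z
        psi = np.kron(enc, I) @ psi                 # sender acts on qubit 0 only
        psi = np.kron(H, I) @ (CNOT @ psi)          # receiver decodes
        return format(int(np.argmax(np.abs(psi) ** 2)), "02b")

    assert all(superdense((a, b)) == f"{a}{b}" for a in (0, 1) for b in (0, 1))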
Area navigation implementation for a microcomputer-based LORAN-C receiver
NASA Technical Reports Server (NTRS)
Oguri, F.
1983-01-01
Engineering performed to make LORAN-C a more useful and practical navigation system for general aviation is described. The development of new software, and the implementation of this software on a MOS6502 microcomputer to provide high quality, practical area navigation information directly to the pilot, are considered. Flight tests were performed specifically to examine the efficacy of this new software. Final results were exceptionally good and clearly demonstrate the merits of this new LORAN-C area navigation system.
SU-F-T-493: An Investigation Into the Feasibility of Using PipsPro Software with Film for Linac QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, R
2016-06-15
Purpose: To determine the feasibility of using radiochromic and radiographic film with PipsPro software for quality assurance of linear accelerators with no on-board imaging. Methods: The linear accelerator being used is a Varian Clinac 21EX. All IGRT is performed using the BrainLab ExacTrac system. Because of the lack of on-board imaging, certain monthly and annual TG-142 quality assurance tests are more difficult to perform and analyze to a high degree of accuracy. PipsPro was not designed to be used with hard film, and to our knowledge its use with film had not been investigated. The film used was GafChromic EBT3 film and Kodak EDR2 film, scanned with an Epson V700 scanner. The following routine tests were attempted: MLC picket fence, light vs. radiation field coincidence, starshots, and MLC transmission. Results: The only tests that gave accurate and reliable results were the couch, gantry, and collimator starshots. Typical MV and kV images are acquired with a much higher level of contrast between the irradiated and non-irradiated areas when compared to film. PipsPro relies on this level of contrast to automatically detect the fiducial points from its phantom devices, leaf edges for picket fence and transmission tests, and jaw edges for light vs. radiation field tests. Because of this, certain tests gave erroneous results and others could not be performed in the software at all, with either type of film. Changing the number of monitor units delivered to the film, the experimental setup, and the scan settings was not able to rectify the problem. Conclusion: For linear accelerators with no on-board imaging, it is not recommended to use hard film with PipsPro to perform TG-142 quality assurance tests. Other software or methods should instead be investigated.
Software for computerised analysis of cardiotocographic traces.
Romano, M; Bifulco, P; Ruffo, M; Improta, G; Clemente, F; Cesarelli, M
2016-02-01
Despite the widespread use of cardiotocography in foetal monitoring, the evaluation of foetal status suffers from considerable inter- and intra-observer variability. In order to overcome the main limitations of visual cardiotocographic assessment, computerised methods to analyse cardiotocographic recordings have recently been developed. In this study, a new software for automated analysis of foetal heart rate is presented. It provides an automatic procedure for measuring the most relevant parameters derivable from cardiotocographic traces. Simulated and real cardiotocographic traces were analysed to test software reliability. In artificial traces, we simulated a set number of events (accelerations, decelerations and contractions) to be recognised. In the case of real signals, results of the computerised analysis were compared with the visual assessment performed by 18 expert clinicians, and three performance indexes were computed to gain information about the performance of the proposed software. The software showed preliminary performance we judged satisfactory, in that the results completely matched the requirements, as proved by the tests on artificial signals in which all simulated events were detected by the software. Performance indexes computed in comparison with the obstetricians' evaluations are, on the contrary, not so satisfactory: sensitivity equal to 93%, positive predictive value equal to 82% and accuracy equal to 77%. Very probably this arises from the high variability of trace annotation carried out by clinicians. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
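The three performance indexes quoted above follow from standard event-counting definitions. A minimal sketch of how they are computed from true/false positives and negatives, with invented counts, is:

    # Sensitivity, positive predictive value, and accuracy from event counts.
    # The counts below are invented; the paper compared software detections
    # against 18 clinicians' annotations.
    def indexes(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)          # detected fraction of real events
        ppv = tp / (tp + fp)                  # fraction of detections that are real
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, ppv, accuracy

    se, ppv, acc = indexes(tp=93, fp=20, fn=7, tn=5)
    print(f"sensitivity={se:.0%}, PPV={ppv:.0%}, accuracy={acc:.0%}")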
Sustaining Software-Intensive Systems
2006-05-01
2.2 Multi-Service Operational Test and Evaluation ... 2.3 Stable Software Baseline ... or equivalent document • completed Multi-Service Operational Test and Evaluation (MOT&E) for the potential production software package (or OT&E if not multi-service) • stable software production baseline • complete and current software documentation • Authority to Operate (ATO) for an
X-29A flight control system performance during flight test
NASA Technical Reports Server (NTRS)
Chin, J.; Chacon, V.; Gera, J.
1987-01-01
An account is given of flight control system performance results for the X-29A forward-swept wing 'Advanced Technology Demonstrator' fighter aircraft, with attention to its software and hardware components' achievement of the requisite levels of system stability and desirable aircraft handling qualities. The Automatic Camber Control Logic is found to be well integrated with the stability loop of the aircraft. A number of flight test support software programs developed by NASA facilitated monitoring of the X-29A's stability in real time, and allowed the test team to clear the envelope with confidence.
Summary of Documentation for DYNA3D-ParaDyn's Software Quality Assurance Regression Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zywicz, Edward
The Software Quality Assurance (SQA) regression test suite for DYNA3D (Zywicz and Lin, 2015) and ParaDyn (DeGroot, et al., 2015) currently contains approximately 600 problems divided into 21 suites, and is a required component of ParaDyn's SQA plan (Ferencz and Oliver, 2013). The regression suite allows developers to ensure that software modifications do not unintentionally alter the code response. The entire regression suite is run prior to permanently incorporating any software modification or addition. When code modifications alter test problem results, the specific cause must be determined and fully understood before the software changes and revised test answers can be incorporated. The regression suite is executed on LLNL platforms using a Python script and an associated data file. The user specifies the DYNA3D or ParaDyn executable, the number of processors to use, the test problems to run, and other options to the script. The data file details how each problem and its answer extraction scripts are executed. For each problem in the regression suite there exists an input deck, an eight-processor partition file, an answer file, and various extraction scripts. These scripts assemble a temporary answer file in a specific format from the simulation results. The temporary and stored answer files are compared to a specific level of numerical precision, and when differences are detected the test problem is flagged as failed. Presently, numerical results are stored and compared to 16 digits. At this accuracy level, different processor types, compilers, numbers of partitions, etc. impact the results to various degrees. Thus, for consistency purposes the regression suite is run with ParaDyn using 8 processors on machines with a specific processor type (currently the Intel Xeon E5530 processor). For non-parallel regression problems, i.e., the two XFEM problems, DYNA3D is used instead. When environments or platforms change, executables using the current source code and the new resource are created and the regression suite is run. If differences in answers arise, the new answers are retained provided that the differences are inconsequential. This bootstrap approach allows the test suite answers to evolve in a controlled manner with a high level of confidence. Developers also run the entire regression suite with (serial) DYNA3D. While these results normally differ from the stored (parallel) answers, abnormal termination or wildly different values are strong indicators of potential issues.
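The answer-file comparison step lends itself to a small sketch: read the stored and freshly extracted numbers, compare them to a configurable number of digits, and flag the problem on any mismatch. This is a generic illustration, not the project's actual Python driver:

    # Compare a freshly extracted answer file against the stored answers,
    # digit-for-digit to a chosen precision; any difference flags a failure.
    # Generic sketch only, not the DYNA3D/ParaDyn regression driver itself.
    def answers_match(stored_path, fresh_path, digits=16):
        with open(stored_path) as f1, open(fresh_path) as f2:
            stored = [float(x) for x in f1.read().split()]
            fresh = [float(x) for x in f2.read().split()]
        if len(stored) != len(fresh):
            return False
        fmt = f"{{:.{digits}e}}"
        return all(fmt.format(a) == fmt.format(b) for a, b in zip(stored, fresh))

    # failed = [p for p in problems if not answers_match(p.stored, p.fresh)]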
Test Driven Development of a Parameterized Ice Sheet Component
NASA Astrophysics Data System (ADS)
Clune, T.
2011-12-01
Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
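The obstacle named above, needing closed-form expressions and realistic error estimates, can be illustrated with a TDD-style unit test that checks a numerical routine against an analytic solution within a tolerance tied to the scheme's truncation error. The routine and tolerance here are illustrative, not from the ice sheet component:

    # TDD-style unit test for a numerical kernel: compare a trapezoidal
    # integrator against a closed-form integral, with a tolerance chosen
    # from the method's O(h^2) error estimate. Routine/tolerance illustrative.
    import math

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    def test_trapezoid_against_closed_form():
        n = 1000
        approx = trapezoid(math.sin, 0.0, math.pi, n)
        exact = 2.0                      # integral of sin on [0, pi]
        tol = (math.pi / n) ** 2         # consistent with O(h^2) convergence
        assert abs(approx - exact) < tol

    test_trapezoid_against_closed_form()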
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Flight test results of the strapdown inertial reference unit (SIRU) navigation system are presented. The fault-tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance.
Artificial intelligence and expert systems in-flight software testing
NASA Technical Reports Server (NTRS)
Demasie, M. P.; Muratore, J. F.
1991-01-01
The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.
Moving base Gravity Gradiometer Survey System (GGSS) program
NASA Astrophysics Data System (ADS)
Pfohl, Louis; Rusnak, Walter; Jircitano, Albert; Grierson, Andrew
1988-04-01
The GGSS program began in early 1983 with the objective of delivering a landmobile and airborne system capable of fast, accurate, and economical gravity gradient surveys of large areas anywhere in the world. The objective included the development and use of post-mission data reduction software to process the survey data into solutions for the gravity disturbance vector components (north, east and vertical). This document describes the GGSS equipment hardware and software, integration and lab test procedures and results, and airborne and land survey procedures and results. Included are discussions on test strategies, post-mission data reduction algorithms, and the data reduction processing experience. Perspectives and conclusions are drawn from the results.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... NUCLEAR REGULATORY COMMISSION [NRC-2012-0195] Test Documentation for Digital Computer Software...) is issuing for public comment draft regulatory guide (DG), DG-1207, ``Test Documentation for Digital... practices for test documentation for software and computer systems as described in the Institute of...
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
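The recovery block structure compared above can be sketched minimally. Names are hypothetical; the key property is an acceptance check that fails independently of the redundant variants:

```python
import math

def recovery_block(variants, acceptance_test, *args):
    """Try redundant variants in order; return the first result that
    passes an acceptance check designed to fail independently of the
    variants themselves."""
    for variant in variants:
        try:
            result = variant(*args)
        except Exception:
            continue  # a crash counts as a failed variant
        if acceptance_test(result, *args):
            return result
    raise RuntimeError("all variants failed the acceptance test")

# Toy example: two independently written square-root routines with a
# cheap inverse check as the acceptance test.
variants = [lambda x: x ** 0.5, lambda x: math.exp(0.5 * math.log(x))]
accept = lambda r, x: abs(r * r - x) < 1e-9
print(recovery_block(variants, accept, 2.0))  # 1.4142...
```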
Implementation and Simulation Results using Autonomous Aerobraking Development Software
NASA Technical Reports Server (NTRS)
Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.
2011-01-01
An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS) and consists of an ephemeris model, an onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude, and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.
Design, Development, and Testing of a Network Frequency Selection Service (NFSS)
1994-02-14
...commercial simulation software (Sim++), word processor (FrameMaker), editor (Gnu Emacs), software version control (Revision Control System (RCS)), system...of FrameMaker ".mif" files. When viewed using FrameMaker or a PostScript reader, each page of results appears as two columns by four rows of graphics
On the Use of Software Metrics as a Predictor of Software Security Problems
2013-01-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the...vulnerabilities reported by testing and those found in the field. Summary of Most Important Results: We evaluated our model on three commercial telecommunications
Experimental and simulation flow rate analysis of the 3/2 directional pneumatic valve
NASA Astrophysics Data System (ADS)
Blasiak, Slawomir; Takosoglu, Jakub E.; Laski, Pawel A.; Pietrala, Dawid S.; Zwierzchowski, Jaroslaw; Bracha, Gabriel; Nowakowski, Lukasz; Blasiak, Malgorzata
The work includes a study on the comparative analysis of two test methods. The first, numerical method consists in determining the flow characteristics with the use of ANSYS CFX. The 3/2 poppet directional valve was modeled for this purpose in the 3D CAD software SolidWorks. Based on the solid model that was developed, simulation studies of the air flow through the valve were conducted in the computational fluid dynamics software ANSYS CFX. The second, experimental method entailed conducting tests on a specially constructed test stand. The comparison of the test results obtained with both methods made it possible to determine the cross-correlation. The high compatibility of the results confirms the usefulness of the numerical procedures. Thus, they might serve to determine the flow characteristics of directional valves as an alternative to a costly and time-consuming test stand.
Automatically generated acceptance test: A software reliability experiment
NASA Technical Reports Server (NTRS)
Protzel, Peter W.
1988-01-01
This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
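The idea of an acceptance test based purely on empirical data about internal program states can be sketched as follows. This is a simplification under assumed data structures, not the paper's actual method: ranges of internal variables are learned from executions believed correct, and a later execution is rejected when any state leaves the learned envelope.

```python
def learn_state_ranges(runs):
    """runs: list of dicts mapping internal-variable name -> value,
    collected from reference executions believed correct."""
    ranges = {}
    for run in runs:
        for name, value in run.items():
            lo, hi = ranges.get(name, (value, value))
            ranges[name] = (min(lo, value), max(hi, value))
    return ranges

def acceptance_test(run, ranges, slack=0.0):
    """Flag an execution whose internal states leave the empirical range."""
    for name, value in run.items():
        lo, hi = ranges.get(name, (value, value))
        if not (lo - slack <= value <= hi + slack):
            return False
    return True
```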
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey
Google Test MPI Listener is a plugin for the Google Test C++ unit testing library that organizes the test output of software that uses both the MPI parallel programming model and Google Test. Typically, such output is ordered arbitrarily and disorganized, making test output difficult to interpret. This plugin organizes output in MPI rank order, enabling easy interpretation of test results.
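The rank-ordering idea generalizes beyond C++. As a minimal sketch (assuming mpi4py is available; this is not the plugin's actual code), each rank's test summary can be gathered to rank 0 and printed in rank order rather than interleaved:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

def run_local_tests():
    # Stand-in for per-rank test-framework output.
    return f"[rank {comm.rank}] 3 tests, 0 failures"

# Gather every rank's output on rank 0 and print it in rank order,
# instead of letting ranks interleave writes arbitrarily.
outputs = comm.gather(run_local_tests(), root=0)
if comm.rank == 0:
    for line in outputs:  # gather() preserves rank order
        print(line)
```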
Validation and Verification of LADEE Models and Software
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2013-01-01
The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.
Grayscale Optical Correlator Workbench
NASA Technical Reports Server (NTRS)
Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin
2006-01-01
Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on the mathematical software tools from MATLAB binaries for performing matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
Oliveira, M; Lopez, G; Geambastiani, P; Ubeda, C
2018-05-01
A quality assurance (QA) program is a valuable tool for the continuous production of optimal quality images. The aim of this paper is to assess a newly developed automatic computer software for image quality (IQ) evaluation in fluoroscopy X-ray systems. Test object images were acquired using one fluoroscopy system, Siemens Axiom Artis model (Siemens AG, Medical Solutions Erlangen, Germany). The software was developed as an ImageJ plugin. Two image quality parameters were assessed: high-contrast spatial resolution (HCSR) and signal-to-noise ratio (SNR). The times required for the manual and automatic image quality assessment procedures were compared. The paired t-test was used to assess the data. p values of less than 0.05 were considered significant. The Fluoro-QC software generated faster IQ evaluation results (mean = 0.31 ± 0.08 min) than the manual procedure (mean = 4.68 ± 0.09 min). The mean difference between techniques was 4.36 min. Discrepancies were identified in the region of interest (ROI) areas drawn manually, with evidence of user dependence. The new software presented the results of two tests (HCSR = 3.06, SNR = 5.17) and also collected information from the DICOM header. Significant differences were not identified between manual and automatic measures of SNR (p value = 0.22) and HCSR (p value = 0.46). The Fluoro-QC software is a feasible, fast and free-to-use method for evaluating image quality parameters on fluoroscopy systems. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
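A minimal sketch of an ROI-based SNR measurement of the kind such a plugin automates (the fixed-ROI convention and the mean/standard-deviation SNR definition are common choices, not necessarily the paper's exact ones):

```python
import numpy as np

def roi_snr(image, row, col, size):
    """Signal-to-noise ratio of a square ROI: mean / standard deviation
    (one common convention; the paper's exact definition is not stated)."""
    roi = image[row:row + size, col:col + size].astype(float)
    return roi.mean() / roi.std()

# Fixed ROI coordinates remove the user dependence seen with hand-drawn ROIs.
image = np.random.default_rng(0).normal(100, 20, (512, 512))
print(round(roi_snr(image, 200, 200, 64), 2))
```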
Educational interactive multimedia software: The impact of interactivity on learning
NASA Astrophysics Data System (ADS)
Reamon, Derek Trent
This dissertation discusses the design, development, deployment and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates about the fundamentals of direct-current (DC) motor physics and selection. The two versions of Motor Workshop software cover the same basic materials on motors, but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text, and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish---they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software, which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer is actively participating in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the 'control group,' and students using the highly interactive software were the 'treatment group.' Data, including pre- and post-performance tests, questionnaire responses, learning style characterizations, activity tracking logs and videotapes were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about the subject of motors. The focus of learning varied between users of the two versions, however. The low-level version was more effective for teaching concepts and terminology, while the high-level version seemed to be more effective for teaching engineering applications.
Testing of Hand-Held Mine Detection Systems
2015-01-08
ITOP 04-2-5208 for guidance on software testing. Testing software is necessary to ensure that safety is designed into the software algorithm, and that...sensor verification areas or target lanes. F.2. TESTING OBJECTIVES. a. Testing objectives will impact the test design. Some examples of...overall safety, performance, and reliability of the system. It describes activities necessary to ensure safety is designed into the system under test
DEIVA: a web application for interactive visual analysis of differential gene expression profiles.
Harshbarger, Jayson; Kratz, Anton; Carninci, Piero
2017-01-07
Differential gene expression (DGE) analysis is a technique to identify statistically significant differences in RNA abundance for genes or arbitrary features between different biological states. The result of a DGE test is typically further analyzed using statistical software, spreadsheets or custom ad hoc algorithms. We identified a need for a web-based system to share DGE statistical test results, and locate and identify genes in DGE statistical test results, with a very low barrier of entry. We have developed DEIVA, a free and open source, browser-based single page application (SPA) with a strong emphasis on user friendliness, which enables locating and identifying single or multiple genes in an immediate, interactive, and intuitive manner. By design, DEIVA scales with very large numbers of users and datasets. Compared to existing software, DEIVA offers a unique combination of design decisions that enable inspection and analysis of DGE statistical test results with an emphasis on ease of use.
Staged-Fault Testing of Distance Protection Relay Settings
NASA Astrophysics Data System (ADS)
Havelka, J.; Malarić, R.; Frlan, K.
2012-01-01
In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. This is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.
SU-G-BRB-02: An Open-Source Software Analysis Library for Linear Accelerator Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, J; Yaldo, D
Purpose: Routine linac quality assurance (QA) tests have become complex enough to require automation of most test analyses. A new data analysis software library was built that allows physicists to automate routine linear accelerator quality assurance tests. The package is open source, code tested, and benchmarked. Methods: Images and data were generated on a TrueBeam linac for the following routine QA tests: VMAT, starshot, CBCT, machine logs, Winston Lutz, and picket fence. The analysis library was built using the general programming language Python. Each test was analyzed with the library algorithms and compared to manual measurements taken at the time of acquisition. Results: VMAT QA results agreed within 0.1% between the library and manual measurements. Machine logs (dynalogs & trajectory logs) were successfully parsed; mechanical axis positions were verified for accuracy and MLC fluence agreed well with EPID measurements. CBCT QA measurements were within 10 HU and 0.2mm where applicable. Winston Lutz isocenter size measurements were within 0.2mm of TrueBeam’s Machine Performance Check. Starshot analysis was within 0.2mm of the Winston Lutz results for the same conditions. Picket fence images with and without a known error showed that the library was capable of detecting MLC offsets within 0.02mm. Conclusion: A new routine QA software library has been benchmarked and is available for use by the community. The library is open-source and extensible for use in larger systems.
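A sketch of the kind of picket fence check such a library performs (illustrative only: the names and the simple position-difference model are assumptions, and the real analysis first extracts leaf positions from EPID images):

```python
import numpy as np

def picket_fence_offsets(measured_mm, nominal_mm, tolerance_mm=0.5):
    """Compare measured MLC picket positions against nominal positions
    and flag leaves whose offset exceeds the tolerance."""
    offsets = np.asarray(measured_mm) - np.asarray(nominal_mm)
    failing = np.nonzero(np.abs(offsets) > tolerance_mm)[0]
    return offsets, failing

offsets, failing = picket_fence_offsets([10.02, 20.00, 29.97], [10, 20, 30])
print(f"max offset {np.max(np.abs(offsets)):.2f} mm; failing leaves: {failing}")
```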
Academic Testing and Grading with Spreadsheet Software.
ERIC Educational Resources Information Center
Ho, James K.
1987-01-01
Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)
Assessment Environment for Complex Systems Software Guide
NASA Technical Reports Server (NTRS)
2013-01-01
This Software Guide (SG) describes the software developed to test the Assessment Environment for Complex Systems (AECS) by the West Virginia High Technology Consortium (WVHTC) Foundation's Mission Systems Group (MSG) for the National Aeronautics and Space Administration (NASA) Aeronautics Research Mission Directorate (ARMD). This software is referred to as the AECS Test Project throughout the remainder of this document. AECS provides a framework for developing, simulating, testing, and analyzing modern avionics systems within an Integrated Modular Avionics (IMA) architecture. The purpose of the AECS Test Project is twofold. First, it provides a means to test the AECS hardware and system developed by MSG. Second, it provides an example project upon which future AECS research may be based. This Software Guide fully describes building, installing, and executing the AECS Test Project as well as its architecture and design. The design of the AECS hardware is described in the AECS Hardware Guide. Instructions on how to configure, build and use the AECS are described in the User's Guide. Sample AECS software, developed by the WVHTC Foundation, is presented in the AECS Software Guide. The AECS Hardware Guide, AECS User's Guide, and AECS Software Guide are authored by MSG. The requirements set forth for AECS are presented in the Statement of Work for the Assessment Environment for Complex Systems authored by NASA Dryden Flight Research Center (DFRC). The intended audience for this document includes software engineers, hardware engineers, project managers, and quality assurance personnel from WVHTC Foundation (the suppliers of the software), NASA (the customer), and future researchers (users of the software). Readers are assumed to have general knowledge in the field of real-time, embedded computer software development.
Proactive Security Testing and Fuzzing
NASA Astrophysics Data System (ADS)
Takanen, Ari
Software is bound to have security critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past, security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs from security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.
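In its simplest, non-model-based form, fuzzing mutates valid inputs at random and watches the target for crashes. A minimal mutation-fuzzer sketch (names are illustrative; model-based fuzzers instead generate inputs from a protocol model):

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Flip a few random bytes in an otherwise valid input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] ^= random.randrange(1, 256)
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 10000):
    crashes = []
    for _ in range(iterations):
        case = mutate(seed_input)
        try:
            target(case)
        except Exception as exc:  # a crash is a potential security flaw
            crashes.append((case, exc))
    return crashes

# Example: fuzz a parser with a valid seed message.
# crashes = fuzz(my_parser, b'{"id": 1, "name": "ok"}')
```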
Optical fiber dispersion characterization study
NASA Technical Reports Server (NTRS)
Geeslin, A.; Arriad, A.; Riad, S. M.; Padgett, M. E.
1979-01-01
The theory, design, and results of optical fiber pulse dispersion measurements are considered. Both the hardware and software required to perform this type of measurement are described. Hardware includes a thermoelectrically cooled injection laser diode source, an 800 GHz gain-bandwidth-product avalanche photodiode, and an input mode scrambler. Software for an HP 9825 computer includes fast Fourier transform, inverse Fourier transform, and optimal compensation deconvolution. Test set construction details are also included. Test results include data collected on a 1 km fiber, a 4 km fiber, a fused splice, eight 600 meter length fibers concatenated to form 4.8 km, and up to nine optical connectors.
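The deconvolution step pairs the FFT and inverse FFT to estimate the fiber's impulse response from measured input and output pulses. A hedged numpy sketch, where a generic Wiener-style regularizer stands in for the paper's "optimal compensation" step:

```python
import numpy as np

def deconvolve(output_pulse, input_pulse, eps=1e-3):
    """Estimate an impulse response by frequency-domain deconvolution.
    The regularizer eps suppresses noise amplification where the input
    spectrum is weak (a generic stand-in for optimal compensation)."""
    O = np.fft.fft(output_pulse)
    I = np.fft.fft(input_pulse)
    H = O * np.conj(I) / (np.abs(I) ** 2 + eps * np.max(np.abs(I)) ** 2)
    return np.real(np.fft.ifft(H))
```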
NASA Astrophysics Data System (ADS)
Parra, Pablo; da Silva, Antonio; Polo, Óscar R.; Sánchez, Sebastián
2018-02-01
In this day and age, successful embedded critical software needs agile and continuous development and testing procedures. This paper presents the overall testing and code coverage metrics obtained during the unit testing procedure carried out to verify the correctness of the boot software that will run in the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on-board Solar Orbiter. The ICU boot software is a critical part of the project, so its verification should be addressed at an early development stage; any test case missed in this process may affect the quality of the overall on-board software. According to the European Cooperation for Space Standardization (ECSS) standards applied by ESA, testing this kind of critical software must achieve 100% statement and decision coverage of the source code. This leads to the complete testing of the fault tolerance and recovery mechanisms that have to resolve every possible memory corruption or communication error brought about by the space environment. The introduced procedure enables fault injection from the beginning of the development process and makes it possible to fulfill the exacting code coverage demands on the boot software.
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a bool algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, the discretization using convex polygons is a simpler approach. Moreover, this method doesn't lead to the very large numbers that the spline approximation did, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer based models in engineering.
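The solid/hollow polygon algebra can be illustrated with the shoelace formula: each polygon contributes its signed area and first moments, with hollow polygons (holes) entering negatively. A minimal sketch, a simplification of the paper's approach with illustrative names:

```python
def polygon_props(vertices, hollow=False):
    """Area and centroid of a simple polygon via the shoelace formula.
    Hollow polygons (holes) contribute with negative sign, so a domain
    can be assembled as a sum of solid and hollow polygons."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    cx /= 6.0 * a
    cy /= 6.0 * a
    sign = -1.0 if hollow else 1.0
    return sign * a, (cx, cy)

# Square domain with a square hole: net area 4 - 1 = 3.
outer, _ = polygon_props([(0, 0), (2, 0), (2, 2), (0, 2)])
inner, _ = polygon_props([(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)],
                         hollow=True)
print(outer + inner)  # 3.0
```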
Comparison of software tools for kinetic evaluation of chemical degradation data.
Ranke, Johannes; Wöltjen, Janina; Meinecke, Stefan
2018-01-01
For evaluating the fate of xenobiotics in the environment, a variety of degradation or environmental metabolism experiments are routinely conducted. The data generated in such experiments are evaluated by optimizing the parameters of kinetic models in a way that the model simulation fits the data. No comparison of the main software tools currently in use has been published to date. This article shows a comparison of numerical results as well as an overall, somewhat subjective comparison based on a scoring system using a set of criteria. The scoring was separately performed for two types of uses. Uses of type I are routine evaluations involving standard kinetic models and up to three metabolites in a single compartment. Evaluations involving non-standard model components, more than three metabolites or more than a single compartment belong to use type II. For use type I, usability is most important, while the flexibility of the model definition is most important for use type II. Test datasets were assembled that can be used to compare the numerical results for different software tools. These datasets can also be used to ensure that no unintended or erroneous behaviour is introduced in newer versions. In the comparison of numerical results, good agreement between the parameter estimates was observed for datasets with up to three metabolites. For the now unmaintained reference software DegKinManager/ModelMaker, and for OpenModel which is still under development, user options were identified that should be taken care of in order to obtain results that are as reliable as possible. Based on the scoring system mentioned above, the software tools gmkin, KinGUII and CAKE received the best scores for use type I. Out of the 15 software packages compared with respect to use type II, again gmkin and KinGUII were the first two, followed by the script based tool mkin, which is the technical basis for gmkin, and by OpenModel. Based on the evaluation using the system of criteria mentioned above and the comparison of numerical results for the suite of test datasets, the software tools gmkin, KinGUII and CAKE are recommended for use type I, and gmkin and KinGUII for use type II. For users that prefer to work with scripts instead of graphical user interfaces, mkin is recommended. For future software evaluations, it is recommended to include a measure for the total time that a typical user needs for a kinetic evaluation into the scoring scheme. It is the hope of the authors that the publication of test data, source code and overall rankings foster the evolution of useful and reliable software in the field.
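As a concrete example of the kind of fit these tools perform, the simplest standard kinetic model is single first-order (SFO) decline. A minimal sketch with scipy (the data are illustrative; the compared tools also handle metabolites and multi-compartment models):

```python
import numpy as np
from scipy.optimize import curve_fit

def sfo(t, c0, k):
    """Single first-order (SFO) model: c(t) = c0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

t = np.array([0, 1, 3, 7, 14, 28, 56], dtype=float)  # days
c = np.array([100, 92, 78, 60, 38, 16, 2.5])         # % of applied amount

(c0, k), _ = curve_fit(sfo, t, c, p0=(100.0, 0.1))
dt50 = np.log(2) / k  # half-life implied by the fitted rate constant
print(f"c0={c0:.1f}, k={k:.3f} 1/d, DT50={dt50:.1f} d")
```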
Reference software implementation for GIFTS ground data processing
NASA Astrophysics Data System (ADS)
Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.
2006-08-01
Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer1 (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.
Coble, M D; Buckleton, J; Butler, J M; Egeland, T; Fimmers, R; Gill, P; Gusmão, L; Guttman, B; Krawczak, M; Morling, N; Parson, W; Pinto, N; Schneider, P M; Sherry, S T; Willuweit, S; Prinz, M
2016-11-01
The use of biostatistical software programs to assist in data interpretation and calculate likelihood ratios is essential to forensic geneticists and part of the daily casework flow for both kinship and DNA identification laboratories. Previous recommendations issued by the DNA Commission of the International Society for Forensic Genetics (ISFG) covered the application of biostatistical evaluations for STR typing results in identification and kinship cases, and this is now being expanded to provide best practices regarding validation and verification of the software required for these calculations. With larger multiplexes, more complex mixtures, and increasing requests for extended family testing, laboratories are relying more than ever on specific software solutions, and sufficient validation, training and extensive documentation are of utmost importance. Here, we present recommendations for the minimum requirements to validate biostatistical software to be used in forensic genetics. We distinguish between developmental validation and the responsibilities of the software developer or provider, and the internal validation studies to be performed by the end user. Recommendations for the software provider address, for example, the documentation of the underlying models used by the software, validation data expectations, version control, implementation and training support, as well as continuity and user notifications. For the internal validations the recommendations include: creating a validation plan, requirements for the range of samples to be tested, Standard Operating Procedure development, and internal laboratory training and education. To ensure that all laboratories have access to a wide range of samples for validation and training purposes the ISFG DNA commission encourages collaborative studies and public repositories of STR typing results. Published by Elsevier Ireland Ltd.
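The most elementary calculation such software performs is a likelihood ratio for a single-source stain whose profile matches the suspect's. Under Hardy-Weinberg assumptions, the per-locus LR is the reciprocal of the genotype frequency, multiplied across independent loci. A textbook sketch, illustrative only and not any vendor's validated algorithm:

```python
from math import prod

def lr_single_source(loci):
    """Textbook likelihood ratio for a single-source stain matching the
    suspect: LR = 1 / genotype frequency, multiplied over independent
    loci. loci: list of (p, q) allele frequencies; p == q marks a
    homozygote."""
    def genotype_freq(p, q):
        return p * p if p == q else 2 * p * q  # Hardy-Weinberg
    return prod(1.0 / genotype_freq(p, q) for p, q in loci)

# Two illustrative loci: one heterozygote, one homozygote.
print(f"LR = {lr_single_source([(0.1, 0.2), (0.05, 0.05)]):.0f}")  # 10000
```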
Discrete Element Method (DEM) Simulations using PFC3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matt Evans
Contains input scripts, background information, reduced data, and results associated with the discrete element method (DEM) simulations of interface shear tests, plate anchor pullout tests, and torpedo anchor installation and pullout tests, using the software PFC3D (v4.0).
Florida alternative NTCIP testing software (ANTS) for actuated signal controllers.
DOT National Transportation Integrated Search
2009-01-01
The scope of this research project included the development of a software tool to test devices for NTCIP compliance. The Florida Alternative NTCIP Testing Software (ANTS) was developed by the research team due to limitations found w...
Verification Testing: Meet User Needs Figure of Merit
NASA Technical Reports Server (NTRS)
Kelly, Bryan W.; Welch, Bryan W.
2017-01-01
Verification is the process through which Modeling and Simulation (M&S) software goes to ensure that it has been rigorously tested and debugged for its intended use. Validation confirms that said software accurately models and represents the real world system. Credibility gives an assessment of the development and testing effort that the software has gone through, as well as how accurate and reliable test results are. Together, these three components form Verification, Validation, and Credibility (VV&C), the process by which all NASA modeling software is to be tested to ensure that it is ready for implementation. NASA created this process following the CAIB (Columbia Accident Investigation Board) report, which sought to understand the reasons the Columbia space shuttle failed during reentry. The report's conclusion was that the accident was fully avoidable; however, among other issues, the necessary data to make an informed decision were not there, and the result was complete loss of the shuttle and crew. In an effort to mitigate this problem, NASA put out their Standard for Models and Simulations, currently in version NASA-STD-7009A, in which they detailed their recommendations, requirements and rationale for the different components of VV&C. They did this with the intention that it would allow people receiving M&S software to clearly understand and have data from the past development effort. This in turn would allow people who had not worked with the M&S software before to move forward with greater confidence and efficiency in their work. This particular project looks to perform Verification on several MATLAB (Registered Trademark) (The MathWorks, Inc.) scripts that will later be implemented in a website interface. It seeks to take note of and define the limits of operation, the units and significance, and the expected datatype and format of the inputs and outputs of each of the scripts. This is intended to prevent the code from attempting to make incorrect or impossible calculations. Additionally, this project will look at the coding generally and note inconsistencies, redundancies, and other aspects that may become problematic or slow down the code's run time. Certain scripts lacking in documentation will also be commented and cataloged.
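Input guards of the kind this verification effort documents can be sketched briefly. The parameter names and limits below are hypothetical, chosen only to show the pattern of datatype, unit, and limit-of-operation checks:

```python
def validate_inputs(frequency_hz, distance_km, efficiency):
    """Guard checks of the kind the verification effort documents:
    datatypes, units, and physical limits of operation. Names and
    limits are illustrative, not taken from the actual scripts."""
    if not isinstance(frequency_hz, (int, float)):
        raise TypeError("frequency_hz must be numeric (hertz)")
    if frequency_hz <= 0:
        raise ValueError("frequency_hz must be positive")
    if distance_km <= 0:
        raise ValueError("distance_km must be positive")
    if not 0.0 <= efficiency <= 1.0:
        raise ValueError("efficiency is a fraction in [0, 1]")
```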
SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salomons, G; Kelly, D
Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software, and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.
Nouri, Mahtab; Hamidiaval, Shadi; Akbarzadeh Baghban, Alireza; Basafa, Mohammad; Fahim, Mohammad
2015-01-01
Cephalometric norms of McNamara analysis have been studied in various populations due to their optimal efficiency. Dolphin cephalometric software greatly enhances the conduction of this analysis for orthodontic measurements. However, Dolphin is very expensive and cannot be afforded by many clinicians in developing countries. A suitable alternative software program in Farsi/English will greatly help Farsi speaking clinicians. The present study aimed to develop an affordable Iranian cephalometric analysis software program and compare it with Dolphin, the standard software available on the market for cephalometric analysis. In this diagnostic, descriptive study, 150 lateral cephalograms of normal occlusion individuals were selected in Mashhad and Qazvin, two major cities of Iran mainly populated with Fars ethnicity, the main Iranian ethnic group. After tracing the cephalograms, the McNamara analysis standards were measured both with Dolphin and the new software. The cephalometric software was designed using Microsoft Visual C++ program in Windows XP. Measurements made with the new software were compared with those of Dolphin software on both series of cephalograms. The validity and reliability were tested using intra-class correlation coefficient. Calculations showed a very high correlation between the results of the Iranian cephalometric analysis software and Dolphin. This confirms the validity and optimal efficacy of the newly designed software (ICC 0.570-1.0). According to our results, the newly designed software has acceptable validity and reliability and can be used for orthodontic diagnosis, treatment planning and assessment of treatment outcome.
DAME: planetary-prototype drilling automation.
Glass, B; Cannon, H; Branson, M; Hanagud, S; Paulsen, G
2008-06-01
We describe results from the Drilling Automation for Mars Exploration (DAME) project, including those of the summer 2006 tests from an Arctic analog site. The drill hardware is a hardened, evolved version of the Advanced Deep Drill by Honeybee Robotics. DAME has developed diagnostic and executive software for hands-off surface operations of the evolved version of this drill. The DAME drill automation tested from 2004 through 2006 included adaptively controlled drilling operations and the downhole diagnosis of drilling faults. It also included dynamic recovery capabilities when unexpected failures or drilling conditions were discovered. DAME has developed and tested drill automation software and hardware under stressful operating conditions during its Arctic field testing campaigns at a Mars analog site.
FPGA Based Reconfigurable ATM Switch Test Bed
NASA Technical Reports Server (NTRS)
Chu, Pong P.; Jones, Robert E.
1998-01-01
Various issues associated with "FPGA Based Reconfigurable ATM Switch Test Bed" are presented in viewgraph form. Specific topics include: 1) network performance evaluation; 2) traditional approaches; 3) software simulation; 4) hardware emulation; 5) test bed highlights; 6) design environment; 7) test bed architecture; 8) abstract shared-memory switch; 9) detailed switch diagram; 10) traffic generator; 11) data collection circuit and user interface; 12) initial results; and 13) the following conclusions: advances in FPGAs make hardware emulation feasible for performance evaluation; hardware emulation can provide several orders of magnitude speed-up over software simulation; due to the complexity of the hardware synthesis process, development in emulation is much more difficult than simulation and requires knowledge in both networks and digital design.
NASA Astrophysics Data System (ADS)
Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.
2014-03-01
A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m-diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.
Flight test results of the strapdown hexad inertial reference unit (SIRU). Volume 2: Test report
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.
1977-01-01
Results of flight tests of the Strapdown Inertial Reference Unit (SIRU) navigation system are presented. The fault tolerant SIRU navigation system features a redundant inertial sensor unit and dual computers. System software provides for detection and isolation of inertial sensor failures and continued operation in the event of failures. Flight test results include assessments of the system's navigational performance and fault tolerance. Performance shortcomings are analyzed.
GPM Timeline Inhibits For IT Processing
NASA Technical Reports Server (NTRS)
Dion, Shirley K.
2014-01-01
The Safety Inhibit Timeline Tool was created as one approach to capturing and understanding inhibits and controls from integration and testing (IT) through launch. The Global Precipitation Measurement (GPM) mission, which launched from Japan in March 2014, was a joint mission under a partnership between the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM was one of the first NASA Goddard in-house programs that extensively used software controls. Using this tool during the GPM buildup allowed a thorough review of inhibit and safety critical software design for hazardous subsystems such as the high gain antenna boom, solar array, and instrument deployments, transmitter turn-on, propulsion system release, and instrument radar turn-on. The GPM safety team developed a methodology to document software safety as part of the standard hazard report. As a result of this process, a new safety inhibit timeline tool was created for management of inhibits and their controls during spacecraft buildup and testing during IT at GSFC and at the launch range in Japan. The Safety Inhibit Timeline Tool was a pathfinder approach for reviewing software that controls the electrical inhibits. The tool strengthens the Safety Analyst's understanding of the removal of inhibits during the IT process with safety critical software. With this tool, the Safety Analyst can confirm the proper safe configuration of a spacecraft during each IT test, track inhibit and software configuration changes, and assess software criticality. In addition to improving the understanding of inhibits and controls during IT, the tool allows the Safety Analyst to better communicate to engineers and management the changes in inhibit states with each phase of hardware and software testing and the impact on safety risks. Lessons learned from participating in the GPM campaign at NASA and JAXA will be discussed during this session.
NASA Technical Reports Server (NTRS)
Staveland, Lowell
1994-01-01
This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.
NASA Astrophysics Data System (ADS)
Budiman, Kholiq; Prahasto, Toni; Kusumawardhani, Amie
2018-02-01
This research applied the integrated design and development of a planning information system, designed using Enterprise Architecture Planning (EAP). Frequent discrepancies between budget planning and realization, which result in ineffective planning, are one of the reasons for this research. The design using EAP aims to keep development aligned with the strategic direction of the organization. In practice, EAP is carried out in several stages: planning initiation, identification and definition of business functions, followed by architectural design and an EA implementation plan. In addition to the design of the Enterprise Architecture, this research carried out the implementation, which was tested using black-box and white-box methods. Black-box testing was used to test the fundamental aspects of the software in two ways: User Acceptance Testing and software functionality testing. White-box testing was used to test the effectiveness of the code, by means of unit testing. Tests conducted on the integrated planning information system using both methods were successful. However, success in software testing cannot be fully ascertained unless the software demonstrably changes the situation that existed before its development. To assess the impact of the implementation, the authors therefore tested the consistency between planning data and budget realization both before and after adoption of the information system, by computing the difference between planned and realized times. The tabulated data show that the planning information system reduces the difference between planning time and realization time, which indicates that the system can motivate the planner unit to realize the budget that has been designed. It also shows that the value chain of the planning information system has positive implications for budget realization.
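A minimal white-box unit test of the sort described, using Python's unittest (the budget-variance helper and its behavior are hypothetical, shown only to illustrate unit-level white-box testing):

```python
import unittest

def variance_pct(planned, realized):
    """Percent deviation of realized spending from plan (a hypothetical
    helper of the kind exercised by the white-box unit tests)."""
    if planned <= 0:
        raise ValueError("planned amount must be positive")
    return 100.0 * (realized - planned) / planned

class VariancePctTest(unittest.TestCase):
    def test_on_plan(self):
        self.assertEqual(variance_pct(100.0, 100.0), 0.0)

    def test_under_spend(self):
        self.assertAlmostEqual(variance_pct(200.0, 150.0), -25.0)

    def test_invalid_plan(self):
        with self.assertRaises(ValueError):
            variance_pct(0.0, 50.0)

if __name__ == "__main__":
    unittest.main()
```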
Versatile Software Package For Near Real-Time Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Wieseman, Carol D.; Hoadley, Sherwood T.
1998-01-01
This paper provides an overview of a versatile software package developed for time- and frequency-domain analyses of experimental wind-tunnel data. This package, originally developed for analyzing data in the NASA Langley Transonic Dynamics Tunnel (TDT), is applicable for analyzing any time-domain data. A Matlab-based software package, TDT-analyzer, provides a compendium of commonly-required dynamic analysis functions in a user-friendly interactive and batch processing environment. TDT-analyzer has been used extensively to provide on-line near real-time and post-test examination and reduction of measured data acquired during wind tunnel tests of aeroelastically-scaled models of aircraft and rotorcraft as well as a flight test of the NASA High Alpha Research Vehicle (HARV) F-18. The package provides near real-time results in an informative and timely manner far exceeding prior methods of data reduction at the TDT.
An expert system executive for automated assembly of large space truss structures
NASA Technical Reports Server (NTRS)
Allen, Cheryl L.
1993-01-01
Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.
1988-06-01
Based Software Engineering Project Course...Software Engineering, Software Engineering Concepts: The Importance of Object-Based...quality assurance, and independent system testing. The Chief Programmer is responsible for all software development activities, including prototyping...during the Requirements Analysis phase, the Preliminary Design, the Detailed Design, Coding and Unit Testing, CSC Integration and Testing, and informal
Software OT&E Guidelines. Volume 1. Software Test Manager’s Handbook
1981-02-01
The Software OT&E Guidelines is a set of handbooks prepared by the Computer/Support Systems Division...is one of a set of handbooks prepared by the Computer/Support Systems Division of the Test and Evaluation Directorate, Air Force Test and Evaluation...E. Software Maintainability...F. Standard Questionnaires...1. Operator-Computer Interface Evaluation
Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems
NASA Technical Reports Server (NTRS)
Ferrell, Bob; Oostdyk, Rebecca
2010-01-01
The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.
Advanced information processing system: Fault injection study and results
NASA Technical Reports Server (NTRS)
Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.
1992-01-01
The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.
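The software-injected memory mutations used alongside pin-level hardware faults can be sketched minimally. This is an illustration of the general technique under assumed data structures, not the AIPS test harness: a bit is flipped in one of several redundant copies of a stored value, and a majority vote both masks and detects the fault.

```python
import random

def vote(values):
    """Majority vote across redundant channels; any disagreement marks
    a detected (and masked) fault."""
    winner = max(set(values), key=values.count)
    detected = any(v != winner for v in values)
    return winner, detected

def inject_bit_flip(value: int, bit: int) -> int:
    """Software-injected memory mutation: flip one bit of a stored word."""
    return value ^ (1 << bit)

channels = [42, 42, 42]  # triplicated state
i = random.randrange(3)
channels[i] = inject_bit_flip(channels[i], random.randrange(16))
result, detected = vote(channels)
print(f"voted value={result}, fault detected={detected}")
```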
International Space Station alpha remote manipulator system workstation controls test report
NASA Astrophysics Data System (ADS)
Ehrenstrom, William A.; Swaney, Colin; Forrester, Patrick
1994-05-01
Previous development testing for the space station remote manipulator system workstation controls determined the need for hardware controls for the emergency stop, brakes on/off, and some camera functions. This report documents the results of an evaluation to further determine control implementation requirements, requested by the Canadian Space Agency (CSA), to close outstanding review item discrepancies. This test was conducted at the Johnson Space Center's Space Station Mockup and Trainer Facility in Houston, Texas, with nine NASA astronauts and one CSA astronaut as operators. This test evaluated camera iris and focus, back-up drive, latching end effector release, and autosequence controls using several types of hardware and software implementations. Recommendations resulting from the testing included providing guarded hardware buttons to prevent accidental actuation, providing autosequence controls and back-up drive controls on a dedicated hardware control panel, and that 'latch on/latch off', or on-screen software, controls not be considered. Generally, the operators preferred hardware controls although other control implementations were acceptable. The results of this evaluation will be used along with further testing to define specific requirements for the workstation design.
NASA Technical Reports Server (NTRS)
Kenny, Patrick
2004-01-01
The Acoustics Branch is responsible for reducing noise levels for jet and fan components on aircraft engines. To do this, data must be measured and calibrated accurately to ensure the validity of test results. This noise reduction is accomplished by modifications to hardware such as jet nozzles, and by the use of other experimental hardware such as fluidic chevrons, elliptic cores, and fluidic shields. To ensure the validity of data calibration, a variety of software is used. This software adjusts the sound amplitude and frequency to be consistent with data taken on another day. Both the software and the hardware help make noise reduction possible. The software programs were designed to make corrections for atmosphere, shear, attenuation, electronic noise, and background noise. All data can be converted to a one-foot lossless condition, using the proper software corrections, making a reading independent of weather and distance. Also, data can be transformed from model scale to full scale for noise predictions of a real flight. Other programs included calculations of Over All Sound Pressure Level (OASPL) and Effective Perceived Noise Level (EPNL). OASPL is the integration of sound with respect to frequency, and EPNL is weighted for a human's response to different sound frequencies and integrated with respect to time. With the proper software corrections, data taken in the NATR are useful in determining ways to reduce noise. A comparison program was also written to display any difference between two or more data files. Using this program and graphs of the data, the actual and predicted data can be compared. This software was tested on data collected at the Aero Acoustic Propulsion Laboratory (AAPL) using a variety of window types and overlaps. Similarly, short scripts were written to test each individual program in the software suite for verification. Each graph displays both the original points and the adjusted points connected with lines. During this summer, data points were taken during a live experiment at the AAPL to measure Nozzle Acoustic Test Rig (NATR) background noise levels. Six condenser microphones were placed in strategic locations around the dome and the inlet tunnel to measure different noise sources. From the control room the jet was monitored with the help of video cameras and other sensors. The data points were recorded, reduced, and plotted, and will be used to plan future modifications to the NATR. The primary goal, to create data reduction test programs and provide verification, was completed. As a result of the internship, I learned C/C++, UNIX/LINUX, Excel, and acoustic data processing methods. I also recorded data at the AAPL, then processed and plotted it. These data would be useful to compare against existing data. In addition, I adjusted software to work on the Mac OS X platform, and I used the available training resources.
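For banded data, the OASPL definition above (integration of sound with respect to frequency) reduces to an energy sum of the band levels converted back to decibels. A short sketch of that calculation; the band levels are invented for illustration, not NATR measurements:

    import numpy as np

    def oaspl(band_levels_db: np.ndarray) -> float:
        """Overall sound pressure level: energy-sum the per-band SPLs,
        then convert the total back to decibels."""
        return 10.0 * np.log10(np.sum(10.0 ** (band_levels_db / 10.0)))

    # Example 1/3-octave band levels (dB) for one microphone reading.
    bands = np.array([72.0, 75.5, 78.2, 80.1, 79.4, 76.3, 71.8])
    print(f"OASPL = {oaspl(bands):.1f} dB")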
NASA Astrophysics Data System (ADS)
Wang, Qiang
2017-09-01
As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the process for security software and its testing are discussed in depth. The process includes requirements analysis, design, coding, debugging and testing, submission, and maintenance. For each phase, the paper proposes subprocesses to support software security. As an example, the paper applies the above process to a power information platform.
The Rapid Integration and Test Environment: A Process for Achieving Software Test Acceptance
2010-05-01
The Rapid Integration and Test Environment (RITE) initiative, implemented by the Program Executive Office…
IMCS reflight certification requirements and design specifications
NASA Technical Reports Server (NTRS)
1984-01-01
The requirements for reflight certification are established. Software requirements encompass the software programs that are resident in the PCC, DEP, PDSS, EC, or any related GSE. A design approach for the reflight software packages is recommended. These designs will be of sufficient detail to permit the implementation of reflight software. The PDSS/IMCS Reflight Certification system provides the tools and mechanisms for the user to perform the reflight certification test procedures, test data capture, test data display, and test data analysis. The system as defined will be structured to permit maximum automation of reflight certification procedures and test data analysis.
Mining dynamic noteworthy functions in software execution sequences
Huang, Guoyan; Wang, Yuqian; He, Haitao; Ren, Jiadong
2017-01-01
As the quality of crucial entities can directly affect that of the software, their identification and protection are an important premise for effective software development, management, maintenance and testing, which thus contribute to improving software quality and its attack-defending ability. Most analyses and evaluations of important entities, such as code-based static structure analysis, are divorced from the actual running of the software. In this paper, from the perspective of the software execution process, we proposed an approach to mine dynamic noteworthy functions (DNFM) in software execution sequences. First, through software decompiling and tracking of stack changes, execution traces composed of a series of function addresses were acquired. Then these traces were modeled as execution sequences and simplified to obtain simplified sequences (SFS), followed by the extraction of patterns from the SFS through a pattern extraction (PE) algorithm. After that, the evaluation indicators inner-importance and inter-importance were designed to measure the noteworthiness of functions in the DNFM algorithm. Finally, these functions were sorted by their noteworthiness. Comparisons were conducted between the experimental results and two traditional complex-network-based node mining methods, namely PageRank and DegreeRank. The results show that the DNFM method can mine noteworthy functions in software effectively and precisely. PMID:28278276
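As a rough sketch of the two indicators, inner-importance can be read as how heavily a function is used within traces and inter-importance as how many traces it appears in; the paper's actual PE and DNFM algorithms are more elaborate, so the following is a simplified, hypothetical stand-in:

    from collections import Counter
    from itertools import chain

    def noteworthiness(traces: list[list[str]]) -> list[tuple[str, float]]:
        """Rank functions by a combined within-trace frequency
        (inner-importance proxy) and across-trace coverage
        (inter-importance proxy)."""
        inner = Counter(chain.from_iterable(traces))                  # call counts
        inter = Counter(chain.from_iterable(set(t) for t in traces))  # trace coverage
        total_calls = sum(inner.values())
        n_traces = len(traces)
        scores = {f: (inner[f] / total_calls) * (inter[f] / n_traces) for f in inner}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    traces = [
        ["main", "parse", "eval", "eval", "emit"],
        ["main", "parse", "emit"],
        ["main", "init", "eval", "emit"],
    ]
    for func, score in noteworthiness(traces):
        print(f"{func:6s} {score:.3f}")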
Cassini's Test Methodology for Flight Software Verification and Operations
NASA Technical Reports Server (NTRS)
Wang, Eric; Brown, Jay
2007-01-01
The Cassini spacecraft was launched on 15 October 1997 on a Titan IV-B launch vehicle. The spacecraft comprises various subsystems, including the Attitude and Articulation Control Subsystem (AACS). Development of the AACS Flight Software (FSW) has been an ongoing effort, from design and development through operations. As planned, major modifications to certain FSW functions were designed, tested, verified and uploaded during the cruise phase of the mission. Each flight software upload involved extensive verification testing. A standardized FSW testing methodology was used to verify the integrity of the flight software. This paper summarizes the flight software testing methodology used for verifying FSW from pre-launch through the prime mission, with an emphasis on flight experience testing during the first 2.5 years of the prime mission (July 2004 through January 2007).
Airborne antenna pattern calculations
NASA Technical Reports Server (NTRS)
Knerr, T. J.; Mielke, R. R.
1981-01-01
Progress on the development of modeling software, testing of the software against calculated data from program VPAP and measured patterns, and calculation of roll plane patterns for general aviation aircraft is reported. Major objectives are the continued development of computer software for aircraft modeling and use of this software and program OSUVOL to calculate principal plane and volumetric radiation patterns. The determination of proper placement of antennas on aircraft to meet the requirements of the Microwave Landing System is discussed. An overview of the work performed, and an example of a roll plane model for the Piper PA-31T Cheyenne aircraft with the resulting calculated roll plane radiation pattern, are included.
Mars Science Laboratory Flight Software Boot Robustness Testing Project Report
NASA Technical Reports Server (NTRS)
Roth, Brian
2011-01-01
On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have thus far proved difficult to test may reveal boot scenarios worth higher fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.
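A framework of this kind is, at its core, a loop over starting configurations, each driving one automated boot of the simulation and checking for a clean result. A hypothetical sketch; the boot_sim driver and the state axes are illustrative, not the MSL tooling:

    import itertools
    import subprocess

    # Illustrative hardware/software state axes; the real state space is
    # much larger and mission-specific.
    BATTERY_STATES = ["full", "partial", "low"]
    DEVICE_STATES = ["nominal", "powered_off", "fault_latched"]
    SW_IMAGES = ["primary", "backup"]

    def run_boot_test(battery: str, device: str, image: str) -> bool:
        """Launch one simulated boot with the given starting configuration
        and report pass/fail (boot_sim is a hypothetical driver)."""
        result = subprocess.run(
            ["boot_sim", f"--battery={battery}", f"--device={device}",
             f"--image={image}"],
            capture_output=True, text=True, timeout=600,
        )
        return result.returncode == 0

    failures = [cfg for cfg in itertools.product(BATTERY_STATES, DEVICE_STATES, SW_IMAGES)
                if not run_boot_test(*cfg)]
    print(f"{len(failures)} failing configurations: {failures}")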
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
Multi-version software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1989-01-01
A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing was continued as an efficient mechanism for removal of uncorrelated faults and common-cause faults of variable span. Work on software reliability estimation methods based on non-random sampling also continued, as did study of the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were finished, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
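Of the schemes mentioned, Acceptance Voting filters each version's output through a local acceptance test before the vote, so obviously bad outputs never reach the voter. A minimal sketch of the idea, not the paper's simulation code:

    from collections import Counter

    def acceptance_voting(version_outputs, acceptance_test):
        """Keep only outputs that pass the acceptance test, then take a
        strict majority among the survivors; None means no decision."""
        accepted = [out for out in version_outputs if acceptance_test(out)]
        if not accepted:
            return None
        winner, count = Counter(accepted).most_common(1)[0]
        return winner if count > len(accepted) // 2 else None

    # Three versions compute sqrt(9); the acceptance test checks a cheap
    # inverse property before voting, filtering the faulty 2.0.
    outputs = [3.0, 3.0, 2.0]
    print(acceptance_voting(outputs, lambda y: abs(y * y - 9.0) < 1e-6))  # 3.0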
NASA Technical Reports Server (NTRS)
McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy
2015-01-01
The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements to enable portability across embedded computing platforms and to implement different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.
Benchmark tests for a Formula SAE Student car prototyping
NASA Astrophysics Data System (ADS)
Mariasiu, Florin
2011-12-01
Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. When designing and developing vehicles, dedicated CFD (Computational Fluid Dynamics) software packages are used in the computer simulation process to determine a vehicle's aerodynamic characteristics. However, the results obtained by this faster and cheaper method are validated by experiments in wind tunnel tests, which are expensive and in which complex testing equipment is used at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development process of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four FlexiForce piezoresistive force sensors.
ERIC Educational Resources Information Center
Bermundo, Cesar B.; Bermundo, Alex B.; Ballester, Rex C.
2012-01-01
iBank is a project that uses software to create an item bank that stores quality questions, generates tests, and prints exams. The items come from analyzed teacher-constructed test questions, which provide the basis for discussing test results, by determining why a test item is or is not discriminating between the better and poorer students, and by…
15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).
Code of Federal Regulations, 2013 CFR
2013-01-01
... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...
15 CFR 740.9 - Temporary imports, exports, and reexports (TMP).
Code of Federal Regulations, 2012 CFR
2012-01-01
... the end of the beta test period as defined by the software producer or, if the software producer does... States; and exports and reexports of beta test software. (a) Temporary exports and reexports—(1) Scope. You may export and reexport commodities and software for temporary use abroad (including use in...
Mars Science Laboratory Boot Robustness Testing
NASA Technical Reports Server (NTRS)
Banazadeh, Payam; Lam, Danny
2011-01-01
Mars Science Laboratory (MSL) is one of the most complex spacecraft in the history of mankind. Due to the nature of its complexity, a large number of flight software (FSW) requirements have been written for implementation. In practice, these requirements necessitate very complex and very precise flight software with no room for error. One of the flight software's responsibilities is to boot up and check the state of all devices on the spacecraft after the wake-up process. This boot-up and initialization is crucial to mission success, since any misbehavior of different devices needs to be handled through the flight software. I have created a test toolkit that allows the FSW team to exhaustively test the flight software under a variety of unexpected scenarios and validate that the flight software can handle any situation after booting up. The test includes initializing different devices on the spacecraft to different configurations and validating, at the end of the flight software boot-up, that the flight software has initialized those devices to what they are supposed to be in that particular scenario.
Supporting the Use of CERT (registered trademark) Secure Coding Standards in DoD Acquisitions
2012-07-01
Tim Morrow (Software Engineering Institute); Robert Seacord (Software Engineering Institute). CMU/SEI-2012-TN-016.
Firing Room Remote Application Software Development
NASA Technical Reports Server (NTRS)
Liu, Kan
2015-01-01
The Engineering and Technology Directorate (NE) at National Aeronautics and Space Administration (NASA) Kennedy Space Center (KSC) is designing a new command and control system for the checkout and launch of the Space Launch System (SLS) and future rockets. The purposes of the semester-long internship as a remote application software developer include the design, development, integration, and verification of the software and hardware in the firing rooms, in particular with the Mobile Launcher (ML) Launch Accessories (LACC) subsystem. In addition, a software test verification procedure document was created to verify and check out LACC software for Launch Equipment Test Facility (LETF) testing.
Electronic nose for space program applications
NASA Technical Reports Server (NTRS)
Young, Rebecca C.; Buttner, William J.; Linnell, Bruce R.; Ramesham, Rajeshuni
2003-01-01
The ability to monitor air contaminants in the shuttle and the International Space Station is important to ensure the health and safety of astronauts, and equipment integrity. Three specific space applications have been identified that would benefit from a chemical monitor: (a) organic contaminants in space cabin air; (b) hypergolic propellant contaminants in the shuttle airlock; and (c) pre-combustion signature vapors from electrical fires. NASA at Kennedy Space Center (KSC) is assessing several commercial and developmental electronic noses (E-noses) for these applications. A short series of tests identified those E-noses that exhibited sufficient sensitivity to the vapors of interest. Only two E-noses exhibited sufficient sensitivity to hypergolic fuels at the required levels, while several commercial E-noses showed sufficient sensitivity to common organic vapors. These E-noses were subjected to further tests to assess their ability to identify vapors. E-nose models developed and tested using vendor-supplied software packages correctly identified vapors with an accuracy of 70-90%. In-house software improvements increased the identification rates to between 90 and 100%. Further software enhancements are under development. Details on the experimental setup, test protocols, and results on E-nose performance are presented in this paper, along with special emphasis on specific software enhancements. © 2003 Elsevier Science B.V. All rights reserved.
1992-04-01
…contractor's existing data collection, analysis and corrective action system shall be utilized, with modification only as necessary to meet the… either from test or from analysis of field data. The procedures of MIL-STD-756B assume that the reliability of a… to generate sufficient data to report a statistically valid reliability figure for a class of software. Casual data gathering accumulates data more…
Mohammed, Emad A.; Naugler, Christopher
2017-01-01
Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
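As a sketch of how such forecasting looks with freely available tooling, the Holt-Winters additive model (one of the tool's three models) is implemented in statsmodels; the monthly volumes below are invented for illustration:

    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Hypothetical monthly test volumes for one laboratory assay.
    volumes = pd.Series(
        [310, 295, 342, 360, 371, 388, 402, 395, 411, 430, 447, 460] * 3,
        index=pd.date_range("2014-01-01", periods=36, freq="MS"),
    )

    # Holt-Winters additive trend and seasonality, 12-month season.
    fit = ExponentialSmoothing(
        volumes, trend="add", seasonal="add", seasonal_periods=12
    ).fit()
    print(fit.forecast(12).round(0))  # next year's expected monthly volumes

Comparing such a forecast against realized volumes is then the yardstick for utilization management initiatives that the abstract describes.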
Multicore Considerations for Legacy Flight Software Migration
NASA Technical Reports Server (NTRS)
Vines, Kenneth; Day, Len
2013-01-01
In this paper we will discuss potential benefits and pitfalls when considering a migration from an existing single core code base to a multicore processor implementation. The results of this study present options that should be considered before migrating fault managers, device handlers and tasks with time-constrained requirements to a multicore flight software environment. Possible future multicore test bed demonstrations are also discussed.
ERIC Educational Resources Information Center
Denbel, Dejene Girma
2015-01-01
Students' learning experiences were investigated in geometry lessons when using a Dynamic Geometry Software (DGS) tool for geometry learning with 25 Ethiopian secondary school students. The research data were drawn from the worksheets used, classroom observations, results of pre- and post-tests, a questionnaire, and interview responses. I used GeoGebra as a DGS…
Reliability Validation and Improvement Framework
2012-11-01
…systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon… embedded software; cyber-physical systems (CPSs), to indicate that the embedded software interacts with, manages, and controls a physical system [Lee… the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results…
Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de
2012-10-01
The standardization of images used in medicine was performed in 1993 using the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are usually neither free nor open-source, and this fact hinders their adjustment to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software application sold with Philips Brilliance computed tomography scanners in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreement, we used simple agreement and kappa statistics. The agreement observed between the software applications was generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
HPC Software Stack Testing Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garvey, Cormac
The HPC Software stack testing framework (hpcswtest) is used in the INL Scientific Computing Department to test the basic sanity and integrity of the HPC Software stack (Compilers, MPI, Numerical libraries and Applications) and to quickly discover hard failures, and as a by-product it will indirectly check the HPC infrastructure (network, PBS and licensing servers).
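A basic sanity check of this kind reduces to verifying that each element of the stack is present and responds; the tool list below is illustrative, not INL's actual configuration, and hpcswtest itself covers much more:

    import shutil
    import subprocess

    TOOLS = ["gcc", "gfortran", "mpicc", "python3"]  # illustrative stack elements

    def sanity_check(tool: str) -> str:
        """MISSING if the tool is not on PATH, BROKEN if it cannot even
        report a version, OK otherwise."""
        if shutil.which(tool) is None:
            return "MISSING"
        proc = subprocess.run([tool, "--version"], capture_output=True, text=True)
        return "OK" if proc.returncode == 0 else "BROKEN"

    for tool in TOOLS:
        print(f"{tool:10s} {sanity_check(tool)}")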
Fully Employing Software Inspections Data
NASA Technical Reports Server (NTRS)
Shull, Forrest; Feldmann, Raimund L.; Seaman, Carolyn; Regardie, Myrna; Godfrey, Sally
2009-01-01
Software inspections provide a proven approach to quality assurance for software products of all kinds, including requirements, design, code, test plans, among others. Common to all inspections is the aim of finding and fixing defects as early as possible, and thereby providing cost savings by minimizing the amount of rework necessary later in the lifecycle. Measurement data, such as the number and type of found defects and the effort spent by the inspection team, provide not only direct feedback about the software product to the project team but are also valuable for process improvement activities. In this paper, we discuss NASA's use of software inspections and the rich set of data that has resulted. In particular, we present results from analysis of inspection data that illustrate the benefits of fully utilizing that data for process improvement at several levels. Examining such data across multiple inspections or projects allows team members to monitor and trigger cross project improvements. Such improvements may focus on the software development processes of the whole organization as well as improvements to the applied inspection process itself.
Rey-Martinez, Jorge; Pérez-Fernández, Nicolás
2016-12-01
The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results, we consider the developed software (RombergLab) to be validated balance assessment software; its reliability depends on the technical specifications of the force platform used. The objective was to develop and validate posturography software and share its source code in open source terms. Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and a force platform, and the same conditions were measured using the newly developed open source software on a low-cost force platform. The intra-class correlation index of the sway area, obtained from the center-of-pressure variations in both devices for the six conditions, was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
NASA Astrophysics Data System (ADS)
Lee, Gong Hee; Bang, Young Seok; Woo, Sweng Woong; Kim, Do Hyeong; Kang, Min Ku
2014-06-01
As computer hardware technology develops, license applicants for nuclear power plants use commercial CFD software with the aim of reducing the excessive conservatism associated with simplified and conservative analysis tools. Even if some CFD software developers and users think that state-of-the-art CFD software can reasonably solve at least single-phase nuclear reactor problems, there are still limitations and uncertainties in the calculation results. From a regulatory perspective, the Korea Institute of Nuclear Safety (KINS) is presently conducting a performance assessment of commercial CFD software for nuclear reactor problems. In this study, in order to examine the validity of the results of 1/5-scaled APR+ (Advanced Power Reactor Plus) flow distribution tests and the applicability of CFD to the analysis of reactor internal flow, simulations were conducted with two commercial CFD packages (ANSYS CFX V.14 and FLUENT V.14) and compared with the measurements. In addition, what needs to be improved in CFD for accurate simulation of reactor core inlet flow is discussed.
NASA Technical Reports Server (NTRS)
Brown, David B.
1990-01-01
The results of research and development efforts are described for Task one, Phase two of a general project entitled The Development of a Program Analysis Environment for Ada. The scope of this task includes the design and development of a prototype system for testing Ada software modules at the unit level. The system is called Query Utility Environment for Software Testing of Ada (QUEST/Ada). The prototype for condition coverage provides a platform that implements expert system interaction with program testing. The expert system can modify data in the instrumented source code in order to achieve coverage goals. Given this initial prototype, it is possible to evaluate the rule base in order to develop improved rules for test case generation. The goals of Phase two are the following: (1) to continue to develop and improve the current user interface to support the other goals of this research effort (i.e., those related to improved testing efficiency and increased code reliability); (2) to develop and empirically evaluate a succession of alternative rule bases for the test case generator such that the expert system achieves coverage in a more efficient manner; and (3) to extend the concepts of the current test environment to address the issues of Ada concurrency.
EMMA: a new paradigm in configurable software
Nogiec, J. M.; Trombly-Freytag, K.
2017-11-23
EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event-driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and for composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
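The composition-from-independent-components idea can be illustrated with a toy publish/subscribe event bus; this sketches the architectural style only and is not the EMMA API:

    from collections import defaultdict
    from typing import Any, Callable

    class EventBus:
        """Components register handlers for named events and publish
        without knowing who listens: loose coupling by construction."""
        def __init__(self) -> None:
            self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, event: str, handler: Callable[[Any], None]) -> None:
            self._handlers[event].append(handler)

        def publish(self, event: str, payload: Any = None) -> None:
            for handler in self._handlers[event]:
                handler(payload)

    # Two independent "components" composed only through events.
    bus = EventBus()
    bus.subscribe("measurement.done", lambda r: print(f"logger:  {r}"))
    bus.subscribe("measurement.done", lambda r: print(f"plotter: {r}"))
    bus.publish("measurement.done", {"magnet": "Q1", "field_T": 1.23})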
Development of a Software Safety Process and a Case Study of Its Use
NASA Technical Reports Server (NTRS)
Knight, J. C.
1996-01-01
Research in the year covered by this reporting period has been primarily directed toward: continued development of mock-ups of computer screens for operator of a digital reactor control system; development of a reactor simulation to permit testing of various elements of the control system; formal specification of user interfaces; fault-tree analysis including software; evaluation of formal verification techniques; and continued development of a software documentation system. Technical results relating to this grant and the remainder of the principal investigator's research program are contained in various reports and papers.
Significant lexical relationships
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedersen, T.; Kayaalp, M.; Bruce, R.
Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
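Fisher's exact test is the classic exact conditional test on a 2x2 contingency table and is available in freely available software such as scipy; whether it is precisely the paper's test is not stated here, so treat this as an illustration of the approach, with invented counts for a candidate word pair:

    from scipy.stats import fisher_exact

    # Rows: first word present / absent; columns: second word present / absent.
    table = [[8, 2],      # "strong" present:  with "tea", without
             [40, 950]]   # "strong" absent:   with "tea", without

    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")

Because the test is exact, it remains valid for the rare-event counts that break the asymptotic assumptions of chi-squared or likelihood-ratio tests.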
Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results
NASA Technical Reports Server (NTRS)
Glass, B. J. (Editor)
1992-01-01
The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.
NASA Astrophysics Data System (ADS)
Nguyen, Khoa Dang; Ha, Cheolkeun
2018-04-01
Hardware-in-the-loop simulation (HILS) is well known as an effective approach to the design of unmanned aerial vehicle (UAV) systems, enabling engineers to test the control algorithm on a hardware board with a UAV model in software. The performance of HILS is determined by the performance of the control algorithm, the developed model, and the signal transfer between the hardware and software. The result of HILS is degraded if any signal cannot be transferred to the correct destination. Therefore, this paper aims to develop middleware software to secure communications in a HILS system for testing the operation of a quad-rotor UAV. In our HILS, the Gazebo software is used to generate a nonlinear six-degrees-of-freedom (6DOF) model, sensor models, and 3D visualization for the quad-rotor UAV. Meanwhile, the flight control algorithm is designed and implemented on the Pixhawk hardware. New middleware software, referred to as the control application software (CAS), is proposed to ensure the connection and data transfer between Gazebo and Pixhawk using a multithread structure in Qt Creator. The CAS provides a graphical user interface (GUI), allowing the user to monitor the status of packet transfer and to issue flight control commands and real-time tuning parameters for the quad-rotor UAV. Numerical implementations have been performed to prove the effectiveness of the middleware software CAS suggested in this paper.
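The core of such middleware is a per-direction relay loop that moves packets between the two links. A minimal threading sketch of that pattern; the queues stand in for the real UDP and serial transports, and none of this is the actual CAS code:

    import queue
    import threading

    def relay(rx, tx, stop: threading.Event) -> None:
        """Move packets from rx to tx until asked to stop."""
        while not stop.is_set():
            try:
                packet = rx(timeout=0.1)
            except queue.Empty:
                continue
            tx(packet)

    gazebo_out: queue.Queue = queue.Queue()   # sensor data from the model
    pixhawk_in: queue.Queue = queue.Queue()   # delivered to the autopilot
    stop = threading.Event()
    threading.Thread(target=relay, args=(gazebo_out.get, pixhawk_in.put, stop),
                     daemon=True).start()

    gazebo_out.put(b"imu: ax=0.01 ay=0.00 az=-9.81")
    print(pixhawk_in.get())   # the packet crossed the middleware
    stop.set()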
Measuring outcomes in children's rehabilitation: a decision protocol.
Law, M; King, G; Russell, D; MacKinnon, E; Hurley, P; Murphy, C
1999-06-01
To develop and test the feasibility and clinical utility of a computerized self-directed software program designed to enable service providers in children's rehabilitation to make decisions about the most appropriate outcome measures to use in client and program evaluation. A before-and-after design was used to test the feasibility and initial impact of the decision-making outcome software in improving knowledge and use of clinical outcome measures. A children's rehabilitation center in a city of 50,000. All service providers in the children's rehabilitation center. Disciplines represented included early childhood education, occupational therapy, physical therapy, speech and language pathology, audiology, social work, and psychology. Using a conceptual framework based on the International Classification of Impairment, Disability, and Handicap (ICIDH), an outcome measurement decision-making protocol was developed. The decision-making protocol was computerized in an educational software program with an attached database of critically appraised measures. Participants learned about outcome measures through the program and selected outcome measures that met their specifications. The computer software was tested for feasibility in the children's rehabilitation center for 6 months. Knowledge and use of clinical outcome measures were determined before and after the feasibility testing using a survey of all service providers currently at the centre and audits of 30 randomly selected rehabilitation records (at pretest, posttest, and follow-up). Service providers indicated that the outcomes software was easy to follow and believed that the use of the ICIDH framework helped them in making decisions about selecting outcome measures. Results of the survey indicated that there were significant changes in the service providers' level of comfort with selecting measures and knowing what measures were available. Use of outcome measures as identified through the audit did not change. The "All About Outcomes" software is clinically useful. Further research should evaluate whether using the software affects the use of outcome measures in clinical practice.
A program downloader and other utility software for the DATAC bus monitor unit
NASA Technical Reports Server (NTRS)
Novacki, Stanley M., III
1987-01-01
A set of programs designed to facilitate software testing on the DATAC Bus Monitor is described. By providing a means to simplify program loading, firmware generation, and subsequent testing of programs, the overhead involved in software evaluation is reduced and that time is used more productively in performance analysis and improvement of current software.
NASA Astrophysics Data System (ADS)
Oliveira, N. P.; Maciel, L.; Catarino, A. P.; Rocha, A. M.
2017-10-01
This work proposes the creation of surface models using parametric computer modelling software to obtain three-dimensional structures in weft knitted fabrics produced on single needle system machines. Digital prototyping, another feature of digital modelling software, was also explored in three-dimensional drawings generated using the Rhinoceros software. With this approach, different 3D structures were developed and produced. Physical characterization tests were then performed on the resulting 3D weft knitted structures to assess their ability to promote comfort. From the obtained results, it is apparent that the developed structures have potential for application in different market segments, such as clothing and interior textiles.
Software resilience and the effectiveness of software mitigation in microcontrollers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather; Baker, Zachary; Fairbanks, Tom
Commercially available microprocessors could be useful to the space community for noncritical computations. There are many possible components that are smaller, lower-power, and less expensive than traditional radiation-hardened microprocessors. Many commercial microprocessors have issues with single-event effects (SEEs), such as single-event upsets (SEUs) and single-event transients (SETs), that can cause the microprocessor to calculate an incorrect result or crash. In this paper we present the Trikaya technique for masking SEUs and SETs through software mitigation techniques. Furthermore, test results show that this technique can be very effective at masking errors, making it possible to fly these microprocessors for a variety of missions.
Software metrics: The key to quality software on the NCC project
NASA Technical Reports Server (NTRS)
Burns, Patricia J.
1993-01-01
Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
Quality Assurance Results for a Commercial Radiosurgery System: A Communication.
Ruschin, Mark; Lightstone, Alexander; Beachey, David; Wronski, Matt; Babic, Steven; Yeboah, Collins; Lee, Young; Soliman, Hany; Sahgal, Arjun
2015-10-01
The purpose of this communication is to inform the radiosurgery community of quality assurance (QA) results requiring attention in a commercial FDA-approved linac-based cone stereotactic radiosurgery (SRS) system. Standard published QA guidelines from the American Association of Physicists in Medicine (AAPM) were followed during the SRS system's commissioning process, including end-to-end testing, cone concentricity testing, image transfer verification, and documentation. Several software and hardware deficiencies that were deemed risky were uncovered during the process, and QA processes were put in place to mitigate these risks during clinical practice. In particular, the present work focuses on daily cone concentricity testing and commissioning-related findings associated with the software. Cone concentricity/alignment is measured daily using both optical light field inspection and quantitative radiation field tests with the electronic portal imager. In 10 out of 36 clinical treatments, adjustments to the cone position had to be made to align the cone with the collimator axis to less than 0.5 mm, and on two occasions the pre-adjustment measured offset was 1.0 mm. Software-related errors discovered during commissioning included incorrect transfer of the isocentre in DICOM coordinates, improper handling of non-axial image sets, and complex handling of beam data, especially for multi-target treatments. QA processes were established to mitigate the occurrence of the software errors. With proper QA processes, the reported SRS system complies with tolerances set out in established guidelines. Discussions with the vendor are ongoing to address some of the hardware issues related to cone alignment. © The Author(s) 2014.
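The daily quantitative check reduces to comparing the measured radiation field centroid against the collimator axis and flagging offsets beyond the action level; the coordinates below are invented, and the clinical image analysis is more involved:

    import math

    TOLERANCE_MM = 0.5  # action level used in the daily QA described above

    def cone_offset_mm(field_center, collimator_axis) -> float:
        """Distance in the imager plane between the measured field
        centroid and the collimator rotation axis, both in mm."""
        dx = field_center[0] - collimator_axis[0]
        dy = field_center[1] - collimator_axis[1]
        return math.hypot(dx, dy)

    offset = cone_offset_mm((0.31, -0.22), (0.0, 0.0))
    print(f"offset = {offset:.2f} mm -> "
          f"{'adjust cone' if offset > TOLERANCE_MM else 'pass'}")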
Comparison of particle tracking algorithms in commercial CFD packages: sedimentation and diffusion.
Robinson, Risa J; Snyder, Pam; Oldham, Michael J
2007-05-01
Computational fluid dynamics (CFD) modeling software has enabled microdosimetry patterns of inhaled toxins and toxicants to be predicted and visualized, and is being used in inhalation toxicology and risk assessment. These predicted microdosimetry patterns in airway structures are derived from predicted airflow patterns within these airways and the particle tracking algorithms used in CFD software packages. Although these commercial CFD codes have been tested for accuracy under various conditions, they have not been well tested for respiratory flows in general, nor has the accuracy of their particle tracking algorithms been well studied. In this study, three software packages, Fluent Discrete Phase Model (DPM), Fluent Fine Particle Model (FPM), and ANSYS CFX, were evaluated. Sedimentation and diffusion were each isolated in a straight-tube geometry and tested for accuracy. A range of flow rates corresponding to adult low activity (minute ventilation = 10 L/min) and to heavy exertion (minute ventilation = 60 L/min) was tested by varying the range of dimensionless diffusion and sedimentation parameters found using the Weibel symmetric 23-generation lung morphology. Numerical results for fully developed parabolic and uniform (slip) profiles were compared, respectively, to the Pich (1972) and Yu (1977) analytical sedimentation solutions. The Schum and Yeh (1980) equations for sedimentation were also compared. Numerical results for diffusional deposition were compared to the analytical solutions of Ingham (1975) for parabolic and uniform profiles. Significant differences were found among the various CFD software packages and between numerical and analytical solutions. Therefore, it is prudent to validate CFD predictions against analytical solutions in an idealized geometry before tackling the complex geometries of the respiratory tract.
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, John C.
1987-01-01
Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
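The voter described above reduces, in its simplest form, to a majority decision over the collected outputs. A minimal sketch; note that the exact-match comparison used here is itself a known hazard for floating-point outputs in comparison testing:

    from collections import Counter

    def vote(outputs: list):
        """Return (output, agreed): agreed is False when no strict
        majority exists, signalling that the system should raise an alarm."""
        winner, count = Counter(outputs).most_common(1)[0]
        return winner, count > len(outputs) // 2

    print(vote([42, 42, 41]))  # (42, True)  - majority despite one faulty version
    print(vote([42, 41, 40]))  # (42, False) - no agreement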
Software used with the flux mapper at the solar parabolic dish test site
NASA Technical Reports Server (NTRS)
Miyazono, C.
1984-01-01
Software for data archiving and data display was developed for use on a Digital Equipment Corporation (DEC) PDP-11/34A minicomputer with the JPL-designed flux mapper. The flux mapper is a two-dimensional, high radiant energy scanning device designed to measure the radiant flux energies expected at the focal point of solar parabolic dish concentrators. Interfacing to the DEC equipment was accomplished by standard RS-232C serial lines. The design of the software was dictated by design constraints of the flux-mapper controller. Early attempts at data acquisition from the flux-mapper controller were not without difficulty. Time and personnel limitations resulted in an alternative method of data recording at the test site, with subsequent analysis accomplished at a data evaluation location at some later time. Software for plotting was also written to better visualize the flux patterns. Recommendations for future alternative development are discussed. A listing of the programs used in the analysis is included in an appendix.
Hutchison, N.E.; Harbaugh, A.W.; Holloway, R.A.; Merk, C.F.
1987-01-01
The Water Resources Division (WRD) of the U.S. Geological Survey is evaluating 32-bit microcomputers to determine how they can complement, and perhaps later replace, the existing network of minicomputers. The WRD is also designing a National Water Information System (NWIS) that will combine and integrate the existing National Water Data Storage and Retrieval System (WATSTORE), the National Water Data Exchange (NAWDEX), and components of several other existing systems. The procedures and testing done in a market evaluation of 32-bit microcomputers are documented. The results of the testing are documented in the NWIS Project Office. The market evaluation was done to identify commercially available hardware and software that could be used for implementing early NWIS prototypes to determine the applicability of 32-bit microcomputers for database and general computing applications. Three microcomputers will be used for these prototype studies. The results of the prototype studies will be used to compile requirements for a Request for Procurement (RFP) for hardware and software to meet the WRD's needs in the early 1990's. The identification of qualified vendors to provide the prototype hardware and software included reviewing industry literature and making telephone calls and personal visits to prospective vendors. Those vendors that appeared to meet general requirements were required to run benchmark tests. (Author's abstract)
Are Earth System model software engineering practices fit for purpose? A case study.
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Johns, T. C.
2009-04-01
We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques, driven strongly by scientific research goals, have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.
Risk-Based Object Oriented Testing
NASA Technical Reports Server (NTRS)
Rosenberg, Linda H.; Stapko, Ruth; Gallo, Albert
2000-01-01
Software testing is a well-defined phase of the software development life cycle. Functional ("black box") testing and structural ("white box") testing are two methods of test case design commonly used by software developers. A lesser known testing method is risk-based testing, which takes into account the probability of failure of a portion of code as determined by its complexity. For object oriented programs, a methodology is proposed for identification of risk-prone classes. Risk-based testing is a highly effective testing technique that can be used to find and fix the most important problems as quickly as possible.
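Risk-based prioritization of this kind can be sketched as ranking classes by a score that combines failure probability (approximated by complexity) with impact (approximated here by usage); the class names and numbers are invented:

    # Hypothetical per-class metrics: complexity from a static analyzer,
    # usage fraction from execution profiles.
    classes = {
        "TelemetryParser": {"complexity": 38, "usage": 0.90},
        "ConfigLoader":    {"complexity": 12, "usage": 0.30},
        "AttitudeFilter":  {"complexity": 55, "usage": 0.75},
        "LogFormatter":    {"complexity": 7,  "usage": 0.95},
    }

    ranked = sorted(classes.items(),
                    key=lambda kv: kv[1]["complexity"] * kv[1]["usage"],
                    reverse=True)
    for name, m in ranked:
        print(f"{name:16s} risk = {m['complexity'] * m['usage']:.1f}")

Testing effort is then allocated from the top of this list downward, so the most failure-prone, most-used classes are exercised first.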
1979-08-21
Appendix 8 - Outline and Draft Material for Proposed Triservice Interim Guideline on Application of Software Acceptance Criteria. INTRODUCTION: The purpose of this guide… contract item (CPCI) (code), 5. CPCI test plan, 6. CPCI test procedures, 7. CPCI test report, 8. Handbooks and manuals. Although additional material does…
242A Distributed Control System Year 2000 Acceptance Test Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
TEATS, M.C.
1999-08-31
This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. It documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure documents the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using the current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic, such that it may be performed on any D/3™ (GSE Process Solutions, Inc.) distributed control system that runs on the VMS™ (Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.
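The two behaviors such an acceptance test must exercise are rollover across the year 2000 boundary and the century leap-year rule (2000 is divisible by 400 and therefore is a leap year, unlike 1900). A generic sketch of those checks, not the HNF-2695 procedure itself:

    import calendar
    from datetime import date, timedelta

    def test_y2k_rollover():
        assert date(1999, 12, 31) + timedelta(days=1) == date(2000, 1, 1)

    def test_leap_year_2000():
        assert calendar.isleap(2000)        # divisible by 400 -> leap year
        assert not calendar.isleap(1900)    # divisible by 100 only -> not
        assert date(2000, 2, 28) + timedelta(days=1) == date(2000, 2, 29)

    test_y2k_rollover()
    test_leap_year_2000()
    print("year 2000 boundary checks passed")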
NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-01-01
This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.
SEXTANT X-Ray Pulsar Navigation Demonstration: Flight System and Test Results
NASA Technical Reports Server (NTRS)
Winternitz, Luke; Mitchell, Jason W.; Hassouneh, Munther A.; Valdez, Jennifer E.; Price, Samuel R.; Semper, Sean R.; Yu, Wayne H.; Ray, Paul S.; Wood, Kent S.; Arzoumanian, Zaven;
2016-01-01
The Station Explorer for X-ray Timing and Navigation Technology (SEXTANT) is a technology demonstration enhancement to the Neutron-star Interior Composition Explorer (NICER) mission. NICER is a NASA Explorer Mission of Opportunity that will be hosted on the International Space Station (ISS). SEXTANT will, for the first time, demonstrate real-time, on-board X-ray Pulsar Navigation (XNAV), a significant milestone in the quest to establish a GPS-like navigation capability available throughout our Solar System and beyond. This paper gives an overview of the SEXTANT system architecture and describes progress prior to environmental testing of the NICER flight instrument. It provides descriptions and development status of the SEXTANT flight software and ground system, as well as a detailed description and results from the flight software functional and performance testing within the high-fidelity Goddard Space Flight Center (GSFC) X-ray Navigation Laboratory Testbed (GXLT) software and hardware simulation environment. Hardware-in-the-loop simulation results are presented, using the engineering model of the NICER timing electronics and the GXLT pulsar simulator: the GXLT precisely controls NASA GSFC's unique Modulated X-ray Source to produce X-rays that make the NICER detector electronics appear as if they were aboard the ISS viewing a sequence of millisecond pulsars.
Bedard, B; Kennedy, B S; Weimer, A C
2014-12-01
In 2011, from August to November, the Monroe County Department of Public Health (MCDPH) investigated 47 salmonellosis cases. Geographical information software (GIS) was used to map the address locations of these cases. The resulting GIS analysis and culture information indicated that there were two distinct clusters of Salmonella that were geographically different. Pulsed-field gel electrophoresis (PFGE) testing was run at the New York State Department of Health Wadsworth Laboratory and identified S. Enteritidis (23 cases) and S. Typhimurium (10 cases). The epidemiological investigation identified Turkish pine nuts as the link between the ill S. Enteritidis cases. Pine nut samples sent for laboratory testing were a PFGE match to human isolates with S. Enteritidis. A national recall of Turkish pine nuts ensued. A multistate outbreak was identified as a result of the initial investigation by MCDPH, in which 43 people from five states were infected with the outbreak strain. GIS software and shopper card data provided important tools in the epidemiological investigation.
OSPAR standard method and software for statistical analysis of beach litter data.
Schulz, Marcus; van Loon, Willem; Fleet, David M; Baggelaar, Paul; van der Meulen, Eit
2017-09-15
The aim of this study is to develop standard statistical methods and software for the analysis of beach litter data. The optimal ensemble of statistical methods comprises the Mann-Kendall trend test, the Theil-Sen slope estimation, the Wilcoxon step trend test and basic descriptive statistics. The application of Litter Analyst, a tailor-made software for analysing the results of beach litter surveys, to OSPAR beach litter data from seven beaches bordering on the south-eastern North Sea, revealed 23 significant trends in the abundances of beach litter types for the period 2009-2014. Litter Analyst revealed a large variation in the abundance of litter types between beaches. To reduce the effects of spatial variation, trend analysis of beach litter data can most effectively be performed at the beach or national level. Spatial aggregation of beach litter data within a region is possible, but resulted in a considerable reduction in the number of significant trends.
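For readers who want to reproduce the core statistics, the sketch below applies the two trend estimators named above to a synthetic yearly litter series. SciPy's kendalltau run against a time index is, in essence, the Mann-Kendall trend test, and theilslopes gives the Theil-Sen slope; the counts are invented and Litter Analyst itself is not used.

```python
# Minimal sketch of the trend tests named in the abstract, applied to an
# invented yearly litter-count series (items per survey).

import numpy as np
from scipy import stats

years = np.arange(2009, 2015)
counts = np.array([310, 290, 260, 275, 240, 225])

tau, p_value = stats.kendalltau(years, counts)      # Mann-Kendall-style test
slope, intercept, lo, hi = stats.theilslopes(counts, years)

print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
print(f"Theil-Sen slope = {slope:.1f} items/yr (95% CI {lo:.1f} to {hi:.1f})")
```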
NASA Technical Reports Server (NTRS)
Over, Ann P.; Barrett, Michael J.; Reinhart, Richard C.; Free, James M.; Cikanek, Harry A., III
2011-01-01
The Communication Navigation and Networking Reconfigurable Testbed (CoNNeCT) is a NASA-sponsored mission, which will investigate the usage of Software Defined Radios (SDRs) as a multi-function communication system for space missions. A software-defined radio system is a communication system in which typical components of the system (e.g., modulators) are incorporated into software. The software-defined capability allows flexibility and experimentation in different modulation, coding and other parameters to understand their effects on performance. This flexibility builds inherent redundancy and flexibility into the system for improved operational efficiency, real-time changes to space missions and enhanced reliability/redundancy. The CoNNeCT Project is a collaboration between industrial radio providers and NASA. The industrial radio providers are providing the SDRs and NASA is designing, building and testing the entire flight system. The flight system will be integrated on the Express Logistics Carrier (ELC) on the International Space Station (ISS) after launch on the H-IIB Transfer Vehicle in 2012. This paper provides an overview of the technology research objectives, payload description, design challenges and pre-flight testing results.
HDTS 2017.0 Testing and verification document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, Tad S.
2017-08-01
This report is a continuation of the series of Hunter Dose Tracking System (HDTS) Quality Assurance documents including (Foley and Powell, 2010; Dixon, 2012). In this report we have created a suite of automated test cases and a system to analyze the results of those tests, as well as documented the methodology to ensure the field system performs within specifications. The software test cases cover all of the functions and interactions of functions that are practical to test. With the developed framework, if software defects are discovered, it will be easy to create one or more test cases to reproduce the defect and ensure that code changes correct the defect. These tests confirm HDTS version 2017.0 performs according to its specifications and documentation and that its performance meets the needs of its users at the Savannah River Site.
NASA Technical Reports Server (NTRS)
Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei
2015-01-01
Hardness assurance test results of Intel state-of-the-art 14nm Broadwell U-series processor System-on-a-Chip (SoC) for total dose are presented, along with first-look exploratory results from trials at a medical proton facility. Test method builds upon previous efforts by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).
NASA Astrophysics Data System (ADS)
Lanciotti, E.; Merino, G.; Bria, A.; Blomer, J.
2011-12-01
In a distributed computing model such as WLCG, experiment-specific application software has to be distributed efficiently to any site of the Grid. Application software is currently installed in a shared area of the site, visible to all Worker Nodes (WNs) through some protocol (NFS, AFS or other). The software is installed at the site by jobs which run on a privileged node of the computing farm where the shared area is mounted in write mode. This model presents several drawbacks which cause a non-negligible rate of job failure. An alternative model for software distribution, based on the CERN Virtual Machine File System (CernVM-FS), has been tried at PIC, the Spanish Tier-1 site of WLCG. The test bed used and the results are presented in this paper.
Software for Optimizing Quality Assurance of Other Software
NASA Technical Reports Server (NTRS)
Feather, Martin; Cornford, Steven; Menzies, Tim
2004-01-01
Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities are the following: code inspections, unit tests, design reviews, performance analyses, construction of traceability matrices, etc. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem; namely, to determine the allocation of limited resources (time, money, and personnel) to minimize risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize quality-assurance processes used in developing software.
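A toy version of that optimization problem is sketched below: choose assurance activities under a fixed budget to maximize estimated risk reduction. The greedy value-per-cost heuristic, the activity list, and all numbers are illustrative assumptions, not the optimizer developed in this work.

```python
# Toy resource-allocation sketch: pick assurance activities under a budget
# to maximize risk reduction. Activities, costs, and values are invented.

activities = [
    # (name, cost in person-days, estimated risk reduction)
    ("code inspection",      10, 8.0),
    ("unit tests",           15, 9.0),
    ("design review",         6, 5.0),
    ("performance analysis", 12, 4.0),
    ("traceability matrix",   8, 3.0),
]

budget = 30
chosen, spent, reduced = [], 0, 0.0
# Greedy heuristic: best risk reduction per unit cost first.
for name, cost, value in sorted(activities, key=lambda a: a[2] / a[1], reverse=True):
    if spent + cost <= budget:
        chosen.append(name)
        spent += cost
        reduced += value

print(f"within {budget} days: {chosen} (risk reduction {reduced:.1f})")
```

A real solver would treat this as a 0/1 knapsack and optimize exactly; the greedy pass just shows the shape of the trade-off.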
GEDAE-LaB: A Free Software to Calculate the Energy System Contributions during Exercise
Bertuzzi, Rômulo; Melegati, Jorge; Bueno, Salomão; Ghiarone, Thaysa; Pasqua, Leonardo A.; Gáspari, Arthur Fernandes; Lima-Silva, Adriano E.; Goldman, Alfredo
2016-01-01
Purpose The aim of the current study is to describe the functionality of GEDAE-LaB, free software developed to calculate energy system contributions and energy expenditure during exercise. Methods Eleven participants performed the following tests: 1) a maximal incremental cycling test to measure the ventilatory threshold and maximal oxygen uptake (V̇O2max); 2) a constant-workload cycling test in the moderate domain (90% ventilatory threshold); 3) a constant-workload cycling test in the severe domain (110% V̇O2max). Oxygen uptake and plasma lactate were measured during the tests. The contributions of the aerobic (AMET), anaerobic lactic (LAMET), and anaerobic alactic (ALMET) systems were calculated from the oxygen uptake during exercise, the oxygen energy equivalents provided by lactate accumulation, and the fast component of excess post-exercise oxygen consumption, respectively. To assess the inter-investigator variation, four different investigators performed the analyses independently using GEDAE-LaB. A direct comparison with commercial software was also provided. Results All subjects completed 10 min of exercise in the moderate domain, while time to exhaustion in the severe domain was 144 ± 65 s. The AMET, LAMET, and ALMET contributions in the moderate domain were about 93, 2, and 5%, respectively; in the severe domain they were about 66, 21, and 13%, respectively. No statistical differences were found between the energy system contributions and energy expenditure obtained by GEDAE-LaB and by the commercial software for either domain (P > 0.05). The intraclass correlation coefficient (ICC) revealed that these estimates were highly reliable among the four investigators for both domains (all ICC ≥ 0.94). Conclusion These findings suggest that GEDAE-LaB is free software easily comprehended by users minimally familiarized with the procedures adopted for calculating an energetic profile from oxygen uptake and lactate accumulation during exercise. By providing availability of the software and its source code we hope to facilitate future related research. PMID:26727499
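The calculation the abstract describes can be outlined as follows. The sketch uses constants common in this literature (about 20.9 kJ per litre of O2, and 3 mL O2 per kg of body mass per mmol/L of net lactate) and supplies the EPOC fast-component volume directly rather than fitting it, so it is a hedged approximation of what GEDAE-LaB computes, not its actual code.

```python
# Hedged sketch of the three energy-system estimates described above.
# Constants are values commonly used in this literature, not taken from
# the paper; all input numbers are invented for illustration.

import numpy as np

KJ_PER_L_O2 = 20.9          # assumed energy equivalent of oxygen
ML_O2_PER_KG_PER_MM = 3.0   # assumed O2 equivalent of net lactate

def aerobic_kj(vo2_lmin, vo2_rest_lmin, dt_min):
    """Trapezoidal integral of net oxygen uptake over the exercise bout."""
    net = np.asarray(vo2_lmin) - vo2_rest_lmin
    litres = np.sum((net[1:] + net[:-1]) / 2.0) * dt_min
    return litres * KJ_PER_L_O2

def lactic_kj(lactate_peak_mm, lactate_rest_mm, mass_kg):
    """O2 equivalent of net blood lactate accumulation."""
    litres = ML_O2_PER_KG_PER_MM * (lactate_peak_mm - lactate_rest_mm) * mass_kg / 1000.0
    return litres * KJ_PER_L_O2

def alactic_kj(epoc_fast_litres):
    """Fast EPOC component, supplied here instead of being fitted."""
    return epoc_fast_litres * KJ_PER_L_O2

vo2 = [2.8, 3.4, 3.6, 3.7]                     # L/min samples during exercise
parts = {"AMET": aerobic_kj(vo2, 0.4, 0.5),
         "LAMET": lactic_kj(9.5, 1.2, 72.0),
         "ALMET": alactic_kj(2.1)}
total = sum(parts.values())
for name, kj in parts.items():
    print(f"{name}: {kj:6.1f} kJ ({100 * kj / total:4.1f}%)")
```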
Using CAD/CAM to improve productivity - The IPAD approach
NASA Technical Reports Server (NTRS)
Fulton, R. E.
1981-01-01
Progress in designing and implementing CAD/CAM systems as a result of the NASA Integrated Programs for Aerospace-Vehicle Design is discussed. Essential software packages have been identified as executive, data management, general user, and geometry and graphics software. Data communication, as a means to integrate data over a network of computers of different vendors, provides data management with the capability of meeting design and manufacturing requirements of the vendors. Geometry software is dependent on developmental success with solid geometry software, which is necessary for continual measurements of, for example, a block of metal while it is being machined. Applications in the aerospace industry, such as for design, analysis, tooling, testing, quality control, etc., are outlined.
PDSS/IMC qualification test software acceptance procedures
NASA Technical Reports Server (NTRS)
1984-01-01
Tests to be performed for qualifying the payload development support system image motion compensator (IMC) are identified. The performance of these tests will verify the IMC interfaces and thereby verify the qualification test software.
DSN system performance test software
NASA Technical Reports Server (NTRS)
Martin, M.
1978-01-01
The system performance test software is currently being modified to include additional capabilities and enhancements. Additional software programs are currently being developed for the Command Store and Forward System and the Automatic Total Recall System. The test executive is the main program. It controls the input and output of the individual test programs by routing data blocks and operator directives to those programs. It also processes data block dump requests from the operator.
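A minimal sketch of such an executive is given below: it registers test programs and routes operator directives and data blocks to their inboxes. The program names and message shapes are invented for illustration, not the DSN design.

```python
# Illustrative executive that routes operator directives and data blocks
# to individual test programs via per-program queues.

import queue

class TestProgram:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()

    def process(self):
        # Drain and handle everything the executive has routed to us.
        while not self.inbox.empty():
            kind, payload = self.inbox.get()
            print(f"[{self.name}] {kind}: {payload}")

class TestExecutive:
    def __init__(self):
        self.programs = {}

    def register(self, program):
        self.programs[program.name] = program

    def route(self, destination, kind, payload):
        self.programs[destination].inbox.put((kind, payload))

executive = TestExecutive()
executive.register(TestProgram("command_store_forward"))
executive.register(TestProgram("automatic_total_recall"))
executive.route("command_store_forward", "directive", "START")
executive.route("command_store_forward", "data_block", b"\x01\x02")
for p in executive.programs.values():
    p.process()
```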
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2012-01-01
Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
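The TDD rhythm described above - a short test written first, then just enough code to pass it - might look like this for a small scientific utility. The function and its expected behavior are invented examples, not code from the talk or from pFUnit.

```python
# TDD-style example: the tests below were (notionally) written first and
# fix the expected behavior; column_mean was then written to pass them.

import math

def test_column_mean_ignores_missing():
    assert math.isclose(column_mean([1.0, float("nan"), 3.0]), 2.0)

def test_column_mean_empty_returns_nan():
    assert math.isnan(column_mean([]))

def column_mean(values):
    """Mean of values, skipping NaNs; NaN for an empty column."""
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid) if valid else float("nan")

# A test runner such as pytest would discover the tests automatically;
# calling them directly also works.
test_column_mean_ignores_missing()
test_column_mean_empty_returns_nan()
```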
Applications of Logic Coverage Criteria and Logic Mutation to Software Testing
ERIC Educational Resources Information Center
Kaminski, Garrett K.
2011-01-01
Logic is an important component of software. Thus, software logic testing has enjoyed significant research over a period of decades, with renewed interest in the last several years. One approach to detecting logic faults is to create and execute tests that satisfy logic coverage criteria. Another approach to detecting faults is to perform mutation…
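As an illustration of a logic coverage criterion (not taken from the dissertation), the test set below exercises the predicate (a and b) or c so that each clause independently determines the outcome somewhere in the set - the idea behind active-clause coverage.

```python
# Illustrative logic-coverage test set for the predicate (a and b) or c.
# Each pair of rows toggles one clause while holding the others fixed so
# that the toggled clause determines the outcome.

def decision(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

tests = [
    (True,  True,  False, True),   # a active: compare with the next row
    (False, True,  False, False),
    (True,  False, False, False),  # b active: compare with the first row
    (False, False, True,  True),   # c active: compare with the next row
    (False, False, False, False),
]

for a, b, c, expected in tests:
    assert decision(a, b, c) == expected
print("all logic-coverage rows pass")
```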
C-C1-04: Building a Health Services Information Technology Research Environment
Gehrum, David W; Jones, JB; Romania, Gregory J; Young, David L; Lerch, Virginia R; Bruce, Christa A; Donkochik, Diane; Stewart, Walter F
2010-01-01
Background: The electronic health record (EHR) has opened a new era for health services research (HSR) where information technology (IT) is used to re-engineer care processes. While the EHR provides one means of advancing novel solutions, a promising strategy is to develop tools (e.g., online questionnaires, visual display tools, decision support) distinct from, but which interact with, the EHR. Development of such software tools outside the EHR offers an advantage in flexibility, sophistication, and ultimately in portability to other settings. However, institutional IT departments have an imperative to protect patient data and to standardize IT processes to ensure system-level security and support traditional business needs. Such imperatives usually present formidable process barriers to testing novel software solutions. We describe how, in collaboration with our IT department, we are creating an environment and a process that allows for routine and rapid testing of novel software solutions. Methods: We convened a working group consisting of IT and research personnel with expertise in information security, database design/management, web design, EHR programming, and health services research. The working group was tasked with developing a research IT environment to accomplish two objectives: maintain network/data security and regulatory compliance; allow researchers working with external vendors to rapidly prototype and, in a clinical setting, test web-based tools. Results: Two parallel solutions, one focused on hardware, the second on oversight and management, were developed. First, we concluded that three separate, staged development environments were required to allow external vendor access for testing software and for transitioning software to be used in a clinic. In parallel, the extant oversight process for approving/managing access to internal/external personnel had to be altered to reflect the scope and scale of discrete research projects, as opposed to an enterprise-level approach to IT management. Conclusions: Innovation in health services software development requires a flexible, scalable IT environment adapted to the unique objectives of a HSR software development model. In our experience, implementing the hardware solution is less challenging than the cultural change required to implement such a model and the modifications to administrative and oversight processes to sustain an environment for rapid product development and testing.
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2013-12-01
This paper presents some possibilities for implementing test-driven development as a programming method. It offers a different point of view on creating advanced programming techniques: building tests before writing the program source, together with all necessary software tools and modules. This nontraditional approach, which eases the programmer's work by building the tests first, is a preferable way of developing software. It allows comparatively simple programming (applied with different object-oriented programming languages, for example JAVA, XML, PYTHON, etc.), is a predictable way to develop software tools, and helps in creating better software that is also easier to maintain. Test-driven programming is able to replace more complicated traditional paradigms used by many programmers.
NASA Technical Reports Server (NTRS)
Gedeon, D.; Wood, J. G.
1996-01-01
A number of wire mesh and metal felt test samples, with a range of porosities, yield generic correlations for friction factor, Nusselt number, enhanced axial conduction ratio, and overall heat flux ratio. This information is directed primarily toward Stirling cycle regenerator modelers, but will be of use to anyone seeking to better model fluid flow through these porous materials. Behind these results lies an oscillating-flow test rig, which measures pumping dissipation and thermal energy transport in sample matrices, and several stages of data-reduction software, which correlate instantaneous values for the above dimensionless groups. Within the software, a theoretical model reduces instantaneous quantities from cycle-averaged measurables using standard parameter estimation techniques.
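The flavor of that parameter estimation can be suggested with a least-squares fit of a friction-factor correlation of the common f = a/Re + b form; the functional form and the sample points below are illustrative assumptions, not the published correlations.

```python
# Hedged sketch: fit f = a/Re + b to (Re, f) samples by least squares.
# The data values are invented for illustration.

import numpy as np

re = np.array([20.0, 50.0, 100.0, 200.0, 400.0])   # Reynolds numbers
f_meas = np.array([7.1, 3.1, 1.75, 1.05, 0.72])    # measured friction factors

# The model is linear in the parameters: f = a * (1/Re) + b
A = np.column_stack([1.0 / re, np.ones_like(re)])
(a, b), *_ = np.linalg.lstsq(A, f_meas, rcond=None)
print(f"f ≈ {a:.0f}/Re + {b:.2f}")
```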
Promoting Science Software Best Practices: A Scientist's Perspective (Invited)
NASA Astrophysics Data System (ADS)
Blanton, B. O.
2013-12-01
Software is at the core of most modern scientific activities, and as societal awareness of, and impacts from, extreme weather, disasters, and climate and global change continue to increase, the roles that scientific software play in analyses and decision-making are brought more to the forefront. Reproducibility of research results (particularly those that enter into the decision-making arena) and open access to the software is essential for scientific and scientists' credibility. This has been highlighted in a recent article by Joppa et al (Troubling Trends in Scientific Software Use, Science Magazine, May 2013) that describes reasons for particular software being chosen by scientists, including that the "developer is well-respected" and on "recommendation from a close colleague". This reliance on recommendation, Joppa et al conclude, is fraught with risks to both sciences and scientists. Scientists must frequently take software for granted, assuming that it performs as expected and advertised and that the software itself has been validated and results verified. This is largely due to the manner in which much software is written and developed; in an ad hoc manner, with an inconsistent funding stream, and with little application of core software engineering best practices. Insufficient documentation, limited test cases, and code unavailability are significant barriers to informed and intelligent science software usage. This situation is exacerbated when the scientist becomes the software developer out of necessity due to resource constraints. Adoption of, and adherence to, best practices in scientific software development will substantially increase intelligent software usage and promote a sustainable evolution of the science as encoded in the software. We describe a typical scientist's perspective on using and developing scientific software in the context of storm surge research and forecasting applications that have real-time objectives and regulatory constraints. This includes perspectives on what scientists/users of software can contribute back to the software development process, examples of successful scientist/developer interactions, and the competition between "getting it done" and "getting it done right".
NASA Technical Reports Server (NTRS)
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library component; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose. The component I was assigned, specifically, was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch would stop fluid flow.
Contal, Olivier; Vignaux, Laurence; Combescure, Christophe; Pepin, Jean-Louis; Jolliet, Philippe; Janssens, Jean-Paul
2012-02-01
Current bilevel positive-pressure ventilators for home noninvasive ventilation (NIV) provide physicians with software that records items important for patient monitoring, such as compliance, tidal volume (Vt), and leaks. However, to our knowledge, the validity of this information has not yet been independently assessed. Testing was done for seven home ventilators on a bench model adapted to simulate NIV and generate unintentional leaks (i.e., other than those of the mask exhalation valve). Five levels of leaks were simulated using a computer-driven solenoid valve (0-60 L/min) at different levels of inspiratory pressure (15 and 25 cm H(2)O) and at a fixed expiratory pressure (5 cm H(2)O), for a total of 10 conditions. Bench data were compared with results retrieved from ventilator software for leaks and Vt. For assessing leaks, three of the devices tested were highly reliable, with a small bias (0.3-0.9 L/min), narrow limits of agreement (LA), and high correlations (R(2), 0.993-0.997) when comparing ventilator software and bench results; conversely, for four ventilators, bias ranged from -6.0 L/min to -25.9 L/min, exceeding -10 L/min for two devices, with wide LA and lower correlations (R(2), 0.70-0.98). Bias for leaks increased markedly with the magnitude of the leaks in three devices. Vt was underestimated by all devices, and bias (range, 66-236 mL) increased with higher insufflation pressures. Only two devices had a bias < 100 mL with all testing conditions considered. Physicians monitoring patients who use home ventilation must be aware of differences in the estimation of leaks and Vt by ventilator software. Also, leaks are reported in different ways according to the device used.
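The bias and limits-of-agreement comparison reported here follows the Bland-Altman pattern, which can be sketched as below; the leak values are invented for illustration, not the study's data.

```python
# Bland-Altman-style comparison: bias is the mean device-minus-bench
# difference, and the limits of agreement are bias +/- 1.96 SD.

import numpy as np

bench_leak = np.array([0, 15, 30, 45, 60], dtype=float)   # L/min set on the rig
device_leak = np.array([0.5, 14.2, 28.1, 41.0, 52.3])     # L/min reported by software

diff = device_leak - bench_leak
bias = diff.mean()
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:+.1f} L/min, limits of agreement {lower:+.1f} to {upper:+.1f}")
```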
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
The impact of software quality characteristics on healthcare outcome: a literature review.
Aghazadeh, Sakineh; Pirnejad, Habibollah; Moradkhani, Alireza; Aliev, Alvosat
2014-01-01
The aim of this study was to discover the effect of software quality characteristics on healthcare quality and efficiency indicators. Through a systematic literature review, we selected and analyzed 37 original research papers to investigate the impact of the software indicators (coming from the standard ISO 9126 quality characteristics and sub-characteristics) on some of healthcare important outcome indicators and finally ranked these software indicators. The results showed that the software characteristics usability, reliability and efficiency were mostly favored in the studies, indicating their importance. On the other hand, user satisfaction, quality of patient care, clinical workflow efficiency, providers' communication and information exchange, patient satisfaction and care costs were among the healthcare outcome indicators frequently evaluated in relation to the mentioned software characteristics. Regression Logistic Method was the most common assessment methodology, and Confirmatory Factor Analysis and Structural Equation Modeling were performed to test the structural model's fit. The software characteristics were considered to impact the healthcare outcome indicators through other intermediate factors (variables).
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
An OpenStudio Measure is a script that can manipulate an OpenStudio model and associated data to apply energy conservation measures (ECMs), run supplemental simulations, or visualize simulation results. The OpenStudio software development kit (SDK) and accessibility of the Ruby scripting language makes measure authorship accessible to both software developers and energy modelers. This paper discusses the life cycle of an OpenStudio Measure from development, testing, and distribution, to application.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
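A toy simulation in this spirit is sketched below: a program with N latent faults is exercised under random usage, each detected fault is fixed, and the failure rate falls accordingly (a Jelinski-Moranda-style growth model). The parameters are illustrative, not those of the GCS experiment.

```python
# Toy reliability-growth simulation: exponential inter-failure times with
# a hazard proportional to the number of remaining faults.

import random

random.seed(1)
N, phi = 25, 0.02            # latent faults; per-fault hazard contribution

times, t = [], 0.0
for i in range(N):
    rate = phi * (N - i)     # failure rate with i faults already removed
    t += random.expovariate(rate)
    times.append(t)

# Inter-failure gaps should lengthen as faults are removed (growth).
gaps = [times[0]] + [b - a for a, b in zip(times, times[1:])]
print(f"first gap {gaps[0]:.1f}, last gap {gaps[-1]:.1f} (reliability growth)")
```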
NASA Technical Reports Server (NTRS)
Allard, R.; Mack, B.; Bayoumi, M. M.
1989-01-01
Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.
QSAR Study for Carcinogenic Potency of Aromatic Amines Based on GEP and MLPs
Song, Fucheng; Zhang, Anling; Liang, Hui; Cui, Lianhua; Li, Wenlian; Si, Hongzong; Duan, Yunbo; Zhai, Honglin
2016-01-01
A new analysis strategy was used to classify the carcinogenicity of aromatic amines. Physical-chemical parameters are closely related to the carcinogenicity of compounds, and quantitative structure-activity relationship (QSAR) modeling predicts the carcinogenicity of aromatic amines by revealing the relationship between carcinogenicity and these parameters. This study applied gene expression programming (via APS software) and multilayer perceptrons (via Weka software) to predict the carcinogenicity of aromatic amines. Both methods relied on molecular descriptors calculated by CODESSA software; eight molecular descriptors were selected to build the function equations. The accuracy of gene expression programming on the training and test sets was 0.92 and 0.82, and the accuracy of the multilayer perceptrons was 0.84 and 0.74, respectively. The precision of gene expression programming is clearly superior to that of the multilayer perceptrons on both the training and test sets. QSAR is an efficient method for identifying carcinogenic compounds. PMID:27854309
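The multilayer-perceptron side of such a workflow can be sketched as below, with scikit-learn standing in for Weka; the descriptor matrix and labels are synthetic, whereas the study used eight CODESSA descriptors of real compounds.

```python
# Sketch of a descriptor-based carcinogenicity classifier. The data are
# synthetic; scikit-learn's MLPClassifier stands in for Weka's MLP.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                   # 8 molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic activity classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"train acc {clf.score(X_tr, y_tr):.2f}, test acc {clf.score(X_te, y_te):.2f}")
```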
The Software Engineering Laboratory: An operational software experience factory
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi; Mcgarry, Frank; Pajerski, Rose; Page, Gerald; Waligora, Sharon
1992-01-01
For 15 years, the Software Engineering Laboratory (SEL) has been carrying out studies and experiments for the purpose of understanding, assessing, and improving software and software processes within a production software development environment at NASA/GSFC. The SEL comprises three major organizations: (1) NASA/GSFC, Flight Dynamics Division; (2) University of Maryland, Department of Computer Science; and (3) Computer Sciences Corporation, Flight Dynamics Technology Group. These organizations have jointly carried out several hundred software studies, producing hundreds of reports, papers, and documents, all of which describe some aspect of the software engineering technology that was analyzed in the flight dynamics environment at NASA. The studies range from small, controlled experiments (such as analyzing the effectiveness of code reading versus that of functional testing) to large, multiple project studies (such as assessing the impacts of Ada on a production environment). The organization's driving goal is to improve the software process continually, so that sustained improvement may be observed in the resulting products. This paper discusses the SEL as a functioning example of an operational software experience factory and summarizes the characteristics of and major lessons learned from 15 years of SEL operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, R.; Wang, X.J.
By writing both a custom Windows NT(TM) dynamic link library and generic companion server software, the intrinsic functions of MathSoft Mathcad(TM) have been extended with new capabilities which permit direct access to the control system databases of the Brookhaven National Laboratory Accelerator Test Facility. Under this scheme, a Mathcad worksheet executing on a personal computer becomes a client which can both import and export data to a control system server via a network stream socket connection. The result is an alternative, mathematically oriented view of controlling the accelerator interactively.
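The client/server pattern described - a worksheet importing and exporting values over a stream socket - can be illustrated minimally as follows. The host, port, and message format are invented, and Python stands in for the Mathcad side.

```python
# Minimal stream-socket client/server illustration of the pattern above.
# Everything here (port, "READ" protocol, returned value) is invented.

import socket
import threading
import time

def control_system_server(port=5050):
    """Toy stand-in for the control-system server side."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode().strip()
            # Pretend to look up the requested device in the database.
            conn.sendall(f"{request} = 1.234".encode())

threading.Thread(target=control_system_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.create_connection(("127.0.0.1", 5050)) as client:
    client.sendall(b"READ magnet_current\n")
    print(client.recv(1024).decode())  # worksheet-side import of the value
```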
Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software
NASA Technical Reports Server (NTRS)
Mason, Michelle L.; Rufer, Shann J.
2016-01-01
The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program was originally written in the PV-WAVE® programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB® syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE® version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities
NASA Technical Reports Server (NTRS)
Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.
2015-01-01
Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in the language is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework, which provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW.
Rules of thumb to increase the software quality through testing
NASA Astrophysics Data System (ADS)
Buttu, M.; Bartolini, M.; Migoni, C.; Orlati, A.; Poppi, S.; Righini, S.
2016-07-01
Software maintenance typically accounts for 40-80% of overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. Internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the Development of the Italian Single-dish COntrol System (DISCOS), a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly-built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow alongside a continuous delivery pipeline. We consider different topics and patterns; they involve the proper apportionment of tests (from end-to-end to low-level tests), the choice between hardware simulators and mocks, why and how to apply TDD and dependency injection to increase test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updates, hardware substitution, etc.) and, finally, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss the pros and cons of each solution and point out the motivations for our choices, either as general rules or narrowed to the context of the DISCOS project.
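One of those guidelines - injecting hardware dependencies so that tests can substitute mocks for real devices - might look like this; the antenna interface is invented for illustration and is not DISCOS code.

```python
# Dependency injection plus a mock: the hardware driver is passed in, so a
# test can replace it with a test double instead of a real antenna mount.

from unittest import mock

class Antenna:
    def __init__(self, mount):
        self.mount = mount          # injected: real driver or a test double

    def park(self):
        self.mount.slew(az=0.0, el=90.0)
        return self.mount.position()

def test_park_sends_zenith_slew():
    fake_mount = mock.Mock()
    fake_mount.position.return_value = (0.0, 90.0)
    antenna = Antenna(fake_mount)
    assert antenna.park() == (0.0, 90.0)
    fake_mount.slew.assert_called_once_with(az=0.0, el=90.0)

test_park_sends_zenith_slew()
print("mocked hardware test passed")
```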
ERIC Educational Resources Information Center
Scott, Elsje; Zadirov, Alexander; Feinberg, Sean; Jayakody, Ruwanga
2004-01-01
Software testing is a crucial component in the development of good quality systems in industry. For this reason it was considered important to investigate the extent to which the Information Systems (IS) syllabus at the University of Cape Town (UCT) was aligned with accepted software testing practices in South Africa. For students to be effective…
Battery Data MI Importer Template Quick Start Guide
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.
2017-01-01
In order to ensure the persistent availability and reliability of test data generated over the course of the project, the M-SHELLS Project has decided to store acquired test data, as well as associated pedigree information, in the Granta Materials Intelligence (MI) database. To facilitate that effort, an importer template and associated graphical user interface (GUI) software have been developed, with this guide providing the operating instructions for their use. The template and automation software GUI are contained in the BatteryDataImporter.xlsm Excel workbook, and are to be used to import M-SHELLS summary, or pedigree, data and the associated raw test data results into an importer template-based file, formatted in such a way as to be ready for immediate upload to the Test Data: Battery Performance table of the Granta MI database. The provided GUI enables the user to select the appropriate summary data file(s), with each file containing the required information to identify any associated raw test data file(s) to be processed. In addition to describing the setup and operation of the importer template and GUI software, this guide also provides instructions for uploading processed data to the database and for viewing the data following upload.
NASA Technical Reports Server (NTRS)
Shull, Forrest; Feldmann, Raimund; Haingaertner, Ralf; Regardie, Myrna; Seaman, Carolyn
2007-01-01
It is often the case in software projects that when schedule and budget resources are limited, the Verification and Validation (V&V) activities suffer. Fewer V&V activities can be afforded, and moreover, short-term challenges can result in V&V activities being scaled back or dropped altogether. As a result, too often the default solution is to save activities for improving software quality until too late in the life-cycle, relying on late-term code inspections followed by thorough testing activities to reduce defect counts to acceptable levels. As many project managers realize, however, this is a resource-intensive way of achieving the required quality for software. The Full Life-cycle Defect Management Assessment Initiative, funded by NASA's Office of Safety and Mission Assurance under the Software Assurance Research Program, aims to address these problems by: Improving the effectiveness of early life-cycle V&V activities to make their benefits more attractive to team leads. Specifically, we focus on software inspection, a proven method that can be applied to any software work product, long before executable code has been developed; Better communicating this effectiveness to software development teams, along with suggestions for parameters to improve in the future to increase effectiveness; Analyzing the impact of early life-cycle V&V on the effectiveness and cost required for late life-cycle V&V activities, such as testing, in order to make the tradeoffs more apparent. This white paper reports on an initial milestone in this work, the development of a preliminary model of inspection effectiveness across multiple NASA Centers. This model contributes toward reaching our project goals by: Allowing an examination of inspection parameters, across different types of projects and different work products, for an analysis of factors that impact defect detection effectiveness. Allowing a comparison of this NASA-specific model to existing recommendations in the literature regarding how to plan effective inspections. Forming a baseline model which can be extended to incorporate factors describing: the numbers and types of defects that are missed by inspections; how such defects flow downstream through software development phases; how effectively they can be caught by testing activities in the late stages of development. The model has been implemented in a prototype web-enabled decision-support tool which allows developers to enter their inspection data and receive feedback based on a comparison against the model. The tool also allows users to access reusable materials (such as checklists) from projects included in the baseline. Both the tool itself and the model underlying it will continue to be extended throughout the remainder of this initiative. As results of analyzing inspection effectiveness for defect containment are determined, they can be shared via the tool and also via updates to existing training courses on metrics and software inspections. Moreover, the tool will help satisfy key CMMI requirements for the NASA Centers, as it will enable NASA to take a global view across peer review results for various types of projects to identify systemic problems. This analysis can result in continuous improvements to the approach to verification.
Case Study of Using High Performance Commercial Processors in Space
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999 2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but had some ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
Case Study of Using High Performance Commercial Processors in a Space Environment
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the reevaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but fared better than the 7400 in the ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
Integration and software for thermal test of heat rate sensors. [space shuttle external tank
NASA Technical Reports Server (NTRS)
Wojciechowski, C. J.; Shrider, K. R.
1982-01-01
A minicomputer controlled radiant test facility is described which was developed and calibrated in an effort to verify analytical thermal models of instrumentation islands installed aboard the space shuttle external tank to measure thermal flight parameters during ascent. Software was provided for the facility as well as for development tests on the SRB actuator tail stock. Additional testing was conducted with the test facility to determine the temperature and heat flux rate and loads required to effect a change of color in the ET external paint. This requirement resulted from the review of photographs taken of the ET at separation from the orbiter which showed that 75% of the external tank paint coating had not changed color from its original white color. The paint on the remaining 25% of the tank was either brown or black, indicating that it had degraded due to heating or that the spray-on foam insulation had receded in these areas. The operational capability of the facility as well as the various tests which were conducted and their results are discussed.
Retinal Image Simulation of Subjective Refraction Techniques.
Perches, Sara; Collados, M Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient's response-guided refraction) is the most commonly used approach. In this context, this paper's main goal is to present simulation software that implements, in a virtual manner, various subjective-refraction techniques, including the Jackson Cross-Cylinder test (JCC), all relying on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed on multifocal-contact-lens wearers. The results reveal the software's usefulness in simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain deeper insight into, and to improve, existing refraction techniques, and it can be used for simulated training.
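The core of such a retinal-image simulation is a convolution of the scene with a point-spread function (PSF). The sketch below uses a uniform defocus disk as the PSF; a real simulator would derive the PSF from the sphero-cylindrical error and pupil diffraction, so the disk radius here is an illustrative stand-in.

```python
# Hedged sketch of retinal-image simulation: blur a toy "optotype" scene
# with a simple uniform defocus-disk PSF.

import numpy as np
from scipy.signal import fftconvolve

def defocus_psf(radius_px: int, size: int = 31) -> np.ndarray:
    y, x = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
    psf = (x**2 + y**2 <= radius_px**2).astype(float)
    return psf / psf.sum()   # normalize so total intensity is preserved

# Toy "letter" chart: a bright bar on a dark background.
scene = np.zeros((64, 64))
scene[20:44, 30:34] = 1.0

retinal = fftconvolve(scene, defocus_psf(radius_px=4), mode="same")
print(f"contrast before {scene.max() - scene.min():.2f}, "
      f"after {retinal.max() - retinal.min():.2f}")
```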
Grasping objects autonomously in simulated KC-135 zero-g
NASA Technical Reports Server (NTRS)
Norsworthy, Robert S.
1994-01-01
The KC-135 aircraft was chosen for simulated zero-gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection, and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.
1982-03-01
pilot systems. The magnitude of a mutant error is classified as: the program does not compute; the program computes but does not run test data; the program... While the test phase concludes the normal development cycle, one should realize that with software the development continues in the
Open core control software for surgical robots
Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-01-01
Object Patients and doctors in the operating room are now surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as removing a tumor while checking its location in neurosurgery). Meanwhile, several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge “intelligent surgical robot” would be possible through collaboration between surgical robots, various kinds of sensors, navigation systems, and so on. At the same time, most academic control software for surgical robots is “home-made” within individual research institutions and not open to the public. Open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control software has hardware dependencies determined by actuators, sensors, and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique that acts as a “force guide”, supporting operators in precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system, but such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a “force guide” on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance.
Conclusion In this paper, the design of the Open Core Control software for surgical robots and the implementation of a virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety is essential for the control software of these complex medical devices, and it is important to follow global specifications such as the FDA guidance “General Principles of Software Validation” and IEC 62304. Complying with these regulations requires a self-test environment; therefore, a test environment is now under development to test various kinds of interference in the operating room, such as noise from an electric knife, taking into account safety standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to the field. PMID:20033506
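The reflection-force behavior described above lends itself to a compact illustration. Below is a minimal Python sketch of a virtual fixture acting as a force guide, assuming a spherical accessible area and a linear spring law; the paper does not specify the fixture geometry, control law, or stiffness, so all names and values here are illustrative.

```python
import numpy as np

def reflection_force(tip, center, radius, stiffness=200.0):
    """Spring-like 'force guide': push the tool tip back toward a
    pre-defined spherical accessible area (a simplified virtual fixture)."""
    offset = tip - center
    dist = np.linalg.norm(offset)
    penetration = dist - radius            # > 0 once the tip leaves the area
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                 # inside the area: no guiding force
    direction = offset / dist              # outward unit normal
    return -stiffness * penetration * direction  # push back inward

# Example: tip 5 mm outside a 30 mm sphere centered at the origin
print(reflection_force(np.array([0.0, 0.0, 0.035]), np.zeros(3), 0.030))
```

In a real master-slave system this force would be rendered on the surgical console at the haptic update rate.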
Chandra X-ray Center Science Data Systems Regression Testing of CIAO
NASA Astrophysics Data System (ADS)
Lee, N. P.; Karovska, M.; Galle, E. C.; Bonaventura, N. R.
2011-07-01
The Chandra Interactive Analysis of Observations (CIAO) is a software system developed for the analysis of Chandra X-ray Observatory observations. An important component of a successful CIAO release is the repeated testing of the tools across various platforms to ensure consistent and scientifically valid results. We describe the procedures of the scientific regression testing of CIAO and the enhancements made to the testing system to increase the efficiency of run time and result validation.
Preliminary Radiation Testing of a State-of-the-Art Commercial 14nm CMOS Processor/System-on-a-Chip
NASA Technical Reports Server (NTRS)
Szabo, Carl M., Jr.; Duncan, Adam; LaBel, Kenneth A.; Kay, Matt; Bruner, Pat; Krzesniak, Mike; Dong, Lei
2015-01-01
Hardness assurance test results for Intel's state-of-the-art 14nm “Broadwell” U-series processor / System-on-a-Chip (SoC) under total ionizing dose (TID) are presented, along with exploratory results from trials at a medical proton facility. The test method builds upon previous efforts [1] by utilizing commercial laptop motherboards and software stress applications as opposed to more traditional automated test equipment (ATE).
Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed
Eikenberry, Blake D.
2006-12-01
Model-based software process improvement
NASA Technical Reports Server (NTRS)
Zettervall, Brenda T.
1994-01-01
The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five-stage cyclic approach to organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.
[Porting Radiotherapy Software of Varian to Cloud Platform].
Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin
2017-09-30
To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, all the Varian radiotherapy software modules were installed on virtual machines on the private cloud platform, and the corresponding function configuration was completed. Finally, the software on the cloud could be accessed by a virtual desktop client. The function test results of the cloud workstation show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud porting in this study is economical and practical. The project not only improves the utilization rate of radiotherapy software, but also makes it possible for cloud computing technology to expand its applications to the field of radiation oncology.
Development of a methodology for assessing the safety of embedded software systems
NASA Technical Reports Server (NTRS)
Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.
1993-01-01
A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety-critical software functions.
Hypersonic Navier Stokes Comparisons to Orbiter Flight Data
NASA Technical Reports Server (NTRS)
Campbell, Charles H.; Nompelis, Ioannis; Candler, Graham; Barnhart, Michael; Yoon, Seokkwan
2009-01-01
Hypersonic chemical nonequilibrium simulations of low Earth orbit entry flow fields are becoming increasingly commonplace as software and computational capabilities improve. However, development of robust and accurate software to model these environments will always encounter a significant barrier: developing a suite of high-quality calibration cases. The US3D hypersonic nonequilibrium Navier-Stokes analysis capability has been favorably compared to a number of wind tunnel test cases. Extending the calibration basis for this software to Orbiter flight conditions will provide an incremental increase in confidence. As part of the Orbiter Boundary Layer Transition (BLT) Flight Experiment and the Hypersonic Thermodynamic Infrared Measurements (HYTHIRM) project, NASA is performing entry flight testing on the Orbiter to provide valuable aerothermodynamic heating data. These activities have increased interest in Orbiter entry environments. With the advent of this new data, comparisons of the US3D software to the new flight test data are warranted. This paper will provide information regarding the framework of analyses that will be applied with the US3D analysis tool. In addition, comparisons will be made to entry flight test data provided by the Orbiter BLT Flight Experiment and HYTHIRM projects. If data from digital scans of the Orbiter windward surface become available, simulations will also be performed to characterize the difference in surface heating between the CAD reference OML and the digitized surface provided by the scans.
NASA Data Acquisitions System (NDAS) Software Architecture
NASA Technical Reports Server (NTRS)
Davis, Dawn; Duncan, Michael; Franzl, Richard; Holladay, Wendy; Marshall, Peggi; Morris, Jon; Turowski, Mark
2012-01-01
The NDAS Software Project is for the development of common low speed data acquisition system software to support NASA's rocket propulsion testing facilities at John C. Stennis Space Center (SSC), White Sands Test Facility (WSTF), Plum Brook Station (PBS), and Marshall Space Flight Center (MSFC).
Bazrafkan, Leila; Hemmati, Mehdi
2018-04-01
One of the important tasks of nurses in the intensive care unit is interpretation of the ECG. The use of training simulators is a new paradigm in the age of computers. This study was performed to evaluate the impact of cardiac arrhythmia simulator software on nurses' learning in the subspecialty Vali-Asr Hospital in 2016. The study used a quasi-experimental Solomon four-group design with the participation of 120 nurses in the subspecialty Vali-Asr Hospital in Tehran, Iran in 2016, who were selected purposefully and allocated to four groups. The Solomon four-group design controls for confounding factors such as prior information, maturation, and the roles of sex and age. A valid and reliable multiple-choice test was used to gather information; the validity of the test was approved by experts, and its reliability was established with a Cronbach's alpha coefficient of 0.89. At first, the knowledge and skills of the participants were assessed by a pre-test; following the educational intervention with the cardiac arrhythmia simulator software over 14 days in ICUs, the same factors were measured again by a post-test in the four groups. Data were analyzed using two-way ANOVA, with the significance level set at p<0.05. Based on the Solomon four-group design and our test results, the cardiac arrhythmia simulator software intervention was effective in the nurses' learning, since a significant difference was found between pre-test and post-test in the first group (p<0.05). Other comparisons by ANOVA showed no interaction between pre-test and intervention in any of the three knowledge areas of cardiac arrhythmias, their treatment, and their diagnosis (p>0.05). The software-based cardiac arrhythmia simulator was effective in nurses' learning thanks to its attractive components and interactive method. The intervention increased the nurses' knowledge in the cognitive domain of cardiac arrhythmias as well as their diagnosis and treatment. The package can also be used for training in other areas such as continuing medical education.
Sowunmi, Olaperi Yeside; Misra, Sanjay; Fernandez-Sanz, Luis; Crawford, Broderick; Soto, Ricardo
2016-01-01
The importance of quality assurance in the software development process cannot be overemphasized, because its adoption results in high reliability and easy maintenance of the software system and other software products. Software quality assurance includes different activities such as quality control, quality management, quality standards, quality planning, process standardization, and improvement, among others. The aim of this work is to further investigate the software quality assurance practices of practitioners in Nigeria. While our previous work covered quality planning, adherence to standardized processes, and the inherent challenges, this work has been extended to include quality control, software process improvement, and membership in international quality standards organizations. It also makes a comparison based on a similar study carried out in Turkey. The goal is to generate more robust findings that can properly support decision making by the software community. The qualitative research approach, specifically the use of questionnaire instruments, was applied to acquire data from software practitioners. In addition to the previous results, it was observed that quality assurance practices are quite neglected, which can be a cause of low patronage. Moreover, software practitioners are aware of neither international standards organizations nor the required process improvement techniques; as such, their claimed standards are not aligned with those of accredited bodies and are limited to their local experience and knowledge, which makes them questionable. The comparison with Turkey yielded similar findings, making the results typical of developing countries. The research instrument was tested for internal consistency using Cronbach's alpha and proved reliable. For the software industry in developing countries to grow strong and become a viable source of external revenue, software assurance practices have to be taken seriously, because their effect is evident in the final product. Moreover, quality frameworks and tools which require minimal time and cost are highly needed in these countries.
Nurturing reliable and robust open-source scientific software
NASA Astrophysics Data System (ADS)
Uieda, L.; Wessel, P.
2017-12-01
Scientific results are increasingly the product of software. The reproducibility and validity of published results cannot be ensured without access to the source code of the software used to produce them. Therefore, the code itself is a fundamental part of the methodology and must be published along with the results. With such a reliance on software, it is troubling that most scientists do not receive formal training in software development. Tools such as version control, continuous integration, and automated testing are routinely used in industry to ensure the correctness and robustness of software. However, many scientists do not even know of their existence (although efforts like Software Carpentry are having an impact on this issue; software-carpentry.org). Publishing the source code is only the first step in creating an open-source project. For a project to grow it must provide documentation, participation guidelines, and a welcoming environment for new contributors. Expanding the project community is often more challenging than the technical aspects of software development. Maintainers must invest time to enforce the rules of the project and to onboard new members, which can be difficult to justify in the context of the "publish or perish" mentality. This problem will continue as long as software contributions are not recognized as valid scholarship by hiring and tenure committees. Furthermore, there are still unsolved problems in providing attribution for software contributions. Many journals and metrics of academic productivity do not recognize citations to sources other than traditional publications. Thus, some authors choose to publish an article about the software and use it as a citation marker. One issue with this approach is that updating the reference to include new contributors involves writing and publishing a new article. A better approach would be to cite a permanent archive of individual versions of the source code in services such as Zenodo (zenodo.org). However, citations to these sources are not always recognized when computing citation metrics. In summary, the widespread development of reliable and robust open-source software relies on the creation of formal training programs in software development best practices and the recognition of software as a valid form of scholarship.
VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients.
Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F; Long, Daniel J; Bolch, Wesley E; Liu, Bob; Xu, X George
2015-07-21
This paper describes the development and testing of VirtualDose, a software package for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and stages of pregnancy. Models of GE Lightspeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework is designed with the 'software as a service (SaaS)' delivery concept, under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows a third-party picture archiving and communication system software package to seamlessly integrate with VirtualDose's functions. Software testing showed that VirtualDose was compatible with numerous operating systems including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT, two dosimetry tools based on stylized pediatric and adult patient models known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism offered by stylized and voxel phantoms have caused significant discrepancies in CT dose estimations.
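As a SaaS tool with a RESTful API, VirtualDose is driven by HTTP requests. The paper does not publish the API's routes or payload schema, so the endpoint, field names, and values in this sketch are entirely hypothetical:

```python
import requests

# Hypothetical base URL and payload; illustrative only, not VirtualDose's real API.
BASE_URL = "https://example.org/virtualdose/api"

payload = {
    "phantom": "adult_male",          # one of the 25 anatomically realistic phantoms
    "scanner": "GE_LightSpeed_Pro_16",
    "protocol": {"kVp": 120, "mAs": 200, "scan_start_cm": 0, "scan_end_cm": 30},
}

resp = requests.post(f"{BASE_URL}/organ-doses", json=payload, timeout=30)
resp.raise_for_status()
for organ, dose_mgy in resp.json().items():
    print(f"{organ}: {dose_mgy:.2f} mGy")     # per-organ dose report
```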
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
The Computational Infrastructure for Geodynamics as a Community of Practice
NASA Astrophysics Data System (ADS)
Hwang, L.; Kellogg, L. H.
2016-12-01
Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically sound software are impossible to sustain, duplicate effort, and make it difficult for scientists to adopt state-of-the-art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices, formally through webinar series, workshops, and tutorials, and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human-readable input formats.
Removing Background Noise with Phased Array Signal Processing
NASA Technical Reports Server (NTRS)
Podboy, Gary; Stephens, David
2015-01-01
Preliminary results are presented from a test conducted to determine how well microphone phased-array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav, combined with cross-spectral matrix subtraction. The test was conducted in the free jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased-array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
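Cross-spectral matrix subtraction, mentioned above, removes the background contribution measured in a noise-only run from the matrix measured with the source active. A minimal sketch of the idea, assuming snapshot-averaged FFTs and an arbitrary frequency bin; none of this reflects OptiNav's actual implementation:

```python
import numpy as np

def cross_spectral_matrix(x, nfft=1024, bin_index=100):
    """Average cross-spectral matrix (CSM) at one frequency bin over FFT
    snapshots. x: array of shape (n_mics, n_samples)."""
    n_mics, n_samples = x.shape
    n_blocks = n_samples // nfft
    csm = np.zeros((n_mics, n_mics), dtype=complex)
    for b in range(n_blocks):
        X = np.fft.rfft(x[:, b * nfft:(b + 1) * nfft], axis=1)
        Xk = X[:, bin_index]              # one frequency bin of interest
        csm += np.outer(Xk, Xk.conj())
    return csm / n_blocks

# Background subtraction: remove the noise-only CSM measured without the source.
# csm_signal = cross_spectral_matrix(total) - cross_spectral_matrix(background)
```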
Description and Flight Test Results of the NASA F-8 Digital Fly-by-Wire Control System
NASA Technical Reports Server (NTRS)
1975-01-01
A NASA program to develop digital fly-by-wire (DFBW) technology for aircraft applications is discussed. Phase I of the program demonstrated the feasibility of using a digital fly-by-wire system for aircraft control through developing and flight testing a single channel system, which used Apollo hardware, in an F-8C airplane. The objective of Phase II of the program is to establish a technology base for designing practical DFBW systems. It will involve developing and flight testing a triplex digital fly-by-wire system using state-of-the-art airborne computers, system hardware, software, and redundancy concepts. The papers included in this report describe the Phase I system and its development and present results from the flight program. Man-rated flight software and the effects of lightning on digital flight control systems are also discussed.
An experience of qualified preventive screening: shiraz smart screening software.
Islami Parkoohi, Parisa; Zare, Hashem; Abdollahifard, Gholamreza
2015-01-01
Computerized preventive screening software is a cost-effective intervention tool for addressing non-communicable chronic diseases. Shiraz Smart Screening Software (SSSS) was developed as an innovative tool for qualified screening. It allows simultaneous smart screening of several high-burden chronic diseases and supports reminder notification functionality. The extent to which SSSS affects screening quality is also described. Following software development, preventive screening and annual health examinations of 261 school staff (Medical School of Shiraz, Iran) were carried out in a software-assisted manner. To evaluate the quality of the software-assisted screening, we used a quasi-experimental study design and determined coverage, irregular attendance, and inappropriateness proportions for manual and software-assisted screening, as well as the corresponding number of requested tests. With the manual screening method, 27% of employees were covered (with 94% irregular attendance), while with software-assisted screening the coverage proportion was 79% (attendance status will become clear after the specified time). The frequency of inappropriate screening test requests before the software implementation was 41.37% for fasting plasma glucose, 41.37% for lipid profile, 0.84% for occult blood, 0.19% for flexible sigmoidoscopy/colonoscopy, 35.29% for Pap smear, 19.20% for mammography, and 11.2% for prostate-specific antigen. All of the above were corrected by the software application. In total, 366 manual and 334 software-assisted screening tests were requested. SSSS is an innovative tool for improving the quality of preventive screening plans in terms of increased screening coverage and reductions in inappropriateness and in the total number of requested tests.
de Carvalho, Fredson Marcio Acris; Gonçalves, Leonardo Cantanhede de Oliveira; Marques, André Augusto Franco; Alves, Vanessa; Bueno, Carlos Eduardo da Silveira; De Martin, Alexandre Sigrist
2016-01-01
Objective: To compare, by histological analysis, the cleaning effectiveness of a reciprocating single-file system with that of ProTaper rotary instruments during the preparation of curved root canals in extracted teeth. Methods: A total of 40 root canals with curvatures ranging between 20 and 40 degrees were divided into two groups of 20 canals. Canals were prepared to the following apical sizes: Reciproc size 25 (n=20); ProTaper F2 (n=20). Normality of the data was tested with the Kolmogorov-Smirnov test, and group values were compared with the Mann-Whitney U test (P < .05) using GraphPad InStat for Mac OS (GraphPad Software, La Jolla, CA, USA). Results: There were no significant differences in remaining debris (P > .05) between the two groups. Conclusion: The application of reciprocating motion during instrumentation did not result in increased debris compared with continuous rotation, even in the apical part of curved canals. Both instruments left debris in the canal lumen, irrespective of the movement kinematics applied. PMID:28217185
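The analysis pipeline described in the Methods, a Kolmogorov-Smirnov normality check followed by a Mann-Whitney U comparison, can be reproduced with standard tools. A minimal sketch using SciPy in place of GraphPad InStat, with made-up placeholder debris scores:

```python
import numpy as np
from scipy import stats

reciproc = np.array([2.1, 3.4, 1.8, 2.9, 3.1])   # placeholder debris scores
protaper = np.array([2.4, 3.0, 2.2, 3.5, 2.7])

# Normality check: Kolmogorov-Smirnov against a normal fitted to each group
for name, g in [("Reciproc", reciproc), ("ProTaper", protaper)]:
    stat, p = stats.kstest(g, "norm", args=(g.mean(), g.std(ddof=1)))
    print(f"{name}: KS p = {p:.3f}")

# Non-parametric comparison of the two groups
u, p = stats.mannwhitneyu(reciproc, protaper, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")      # P > .05 -> no significant difference
```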
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, W. A.; Lepicovsky, J.
1992-01-01
The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
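The closing remark about statistical and time series analysis of nonuniformly sampled data is exactly the setting handled by the Lomb-Scargle periodogram; laser velocimeter arrival times are inherently irregular. A minimal sketch on synthetic samples (the choice of Lomb-Scargle is ours; the abstract does not name a method):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 500))          # irregular sample times (s)
x = np.sin(2 * np.pi * 3.0 * t) + 0.5 * rng.standard_normal(t.size)

freqs = np.linspace(0.1, 10.0, 1000)              # trial frequencies (Hz)
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)  # expects angular freqs
print(f"peak near {freqs[np.argmax(pgram)]:.2f} Hz")     # expect ~3 Hz
```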
Development and verification testing of automation and robotics for assembly of space structures
NASA Technical Reports Server (NTRS)
Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.
1993-01-01
A program was initiated within the past several years to develop operational procedures for automated assembly of truss structures suitable for large-aperture antennas. The assembly operations require the use of a robotic manipulator and are based on the principle of supervised autonomy to minimize crew resources. A hardware testbed was established to support development and evaluation testing. A brute-force automation approach was used to develop the baseline assembly hardware and software techniques. As the system matured and an operation was proven, upgrades were incorporated and assessed against the baseline test results. This paper summarizes the developmental phases of the program, the results of several assembly tests, the current status, and a series of proposed developments for additional hardware and software control capability. No problems that would preclude automated in-space assembly of truss structures have been encountered. The current system was developed at a breadboard level, and continued development at an enhanced level is warranted.
A graphical simulation software for instruction in cardiovascular mechanics physiology.
Wildhaber, Reto A; Verrey, François; Wenger, Roland H
2011-01-25
Computer-supported, interactive e-learning systems are widely used in the teaching of physiology. However, the currently available complimentary software tools in the field of cardiovascular mechanics physiology have not yet been adapted to the latest systems software. Therefore, a simple-to-use replacement for undergraduate and graduate education was needed, including up-to-date graphical software that is validated and field-tested. Software compatible with Windows, based on modified versions of existing mathematical algorithms, was newly developed. Testing was performed during a full term of physiology lectures to medical and biology students. The newly developed CLabUZH software models a reduced human cardiovascular loop containing all basic compartments: an isolated heart including an artificial electrical stimulator, main vessels, and the peripheral resistive components. Students can alter several physiological parameters interactively. The resulting output variables are plotted in x-y diagrams and also shown in an animated graphical model. CLabUZH offers insight into the relations of volume, pressure, and time dependency in the circulation and their correlation to the electrocardiogram (ECG). Established mechanisms such as the Frank-Starling law and the Windkessel effect are considered in this model. The CLabUZH software is self-contained, requires no extra installation, and runs on most of today's personal computer systems. CLabUZH is a user-friendly interactive computer programme that has proved useful in teaching the basic physiological principles of heart mechanics.
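Among the mechanisms the abstract names, the Windkessel effect is simple enough to sketch. A minimal two-element Windkessel integration, C dP/dt = Q(t) - P/R with a pulsatile inflow Q(t), using arbitrary units and parameter values; CLabUZH's actual model is more complete:

```python
import numpy as np

R, C = 1.0, 1.5             # peripheral resistance, arterial compliance (arb. units)
dt, T = 0.001, 5.0          # time step and duration (s)
period, systole = 0.8, 0.3  # cardiac cycle timing (s)

t = np.arange(0.0, T, dt)
P = np.empty_like(t)
P[0] = 80.0                 # initial arterial pressure
for i in range(1, t.size):
    phase = t[i] % period
    # Half-sine inflow during systole, zero during diastole
    Q = 400.0 * np.sin(np.pi * phase / systole) if phase < systole else 0.0
    dPdt = (Q - P[i - 1] / R) / C
    P[i] = P[i - 1] + dt * dPdt   # forward Euler step

print(f"pressure range: {P[t > 2.0].min():.0f} to {P[t > 2.0].max():.0f}")
```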
Software system design for the non-null digital Moiré interferometer
NASA Astrophysics Data System (ADS)
Chen, Meng; Hao, Qun; Hu, Yao; Wang, Shaopu; Li, Tengfei; Li, Lin
2016-11-01
Aspheric optical components are an indispensable part of modern optical systems. With the development of aspheric optical element fabrication techniques, high-precision testing of the figure error of aspheric surfaces has become an urgent issue. We proposed a digital Moiré interferometer technique (DMIT) based on the partial compensation principle for aspheric and freeform surface measurement. Different from a traditional interferometer, a DMIT setup consists of a real and a virtual interferometer. The virtual interferometer is simulated with Zemax software to perform phase-shifting and alignment. Results are obtained by a series of calculations on the real interferogram and the computer-generated virtual interferograms. DMIT requires a specific, reliable software system to ensure normal operation. Image acquisition and data processing are two important parts of this system, and realizing the connection between the real and virtual interferometers is an additional challenge. In this paper, we present a software system design for DMIT with a friendly user interface and robust data-processing features, enabling us to acquire the figure error of the measured asphere. We chose Visual C++ as the software development platform and control the virtual (ideal) interferometer through hybrid programming with Zemax. After image acquisition and data transmission, the system calls image-processing algorithms written in Matlab to calculate the figure error of the measured asphere. We tested the software system experimentally, measuring an aspheric surface and demonstrating the feasibility of the software system.
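The virtual interferometer's phase-shifting step can be illustrated with the common four-step algorithm, in which four interferograms shifted by pi/2 yield the wrapped phase phi = arctan2(I4 - I2, I1 - I3). The paper does not state which phase-shifting algorithm DMIT uses, so this sketch on synthetic fringes is illustrative only:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four interferograms shifted by pi/2 each."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic test: recover a known phase map from four shifted fringe frames
x = np.linspace(-1, 1, 256)
phi = 6 * np.pi * x**2                          # quadratic "figure error" phase
frames = [1 + np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*frames)

# Compare modulo 2*pi, since the recovered phase is wrapped
err = np.angle(np.exp(1j * (wrapped - phi)))
print(f"max phase error: {np.abs(err).max():.2e} rad")
```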
Reiter, Rachel; Viehdorfer, Matt; Hescock, Kimmy; Clark, Terri; Nemanic, Sarah
The goal of this study was to determine the effectiveness of an interactive radiology software application that we developed to enhance learning of normal canine radiographic anatomy. All first-year veterinary medical students were eligible to participate in this subject pre-test-post-test experimental design. When presented with the software application, all students had completed two terms of gross anatomy in which the complete anatomy of the dog had been taught using a combination of lectures and laboratory dissections, including radiographic examples. The software application was divided into four body regions: front limb, hind limb, skull/spine, and thorax/abdomen, each with a learning mode and a quiz mode. Quizzes were composed of 15 questions drawn pseudo-randomly without repeat from all structures within a region (median 206 structures). Students were initially given the software application with only the quiz mode activated. After completing four quizzes, one for each body region, students were given access to the software application with both learning mode and quiz mode activated. Students were instructed to spend 30 minutes using the learning mode to study the radiographic anatomy of each region and to retake each quiz. Quiz scores after using the learning mode were significantly higher for each body region (p<.001), with a large effect size for all four regions (Cohen's d=0.83-1.56). These results suggest that this radiographic anatomy software application is an effective tool for students to use to learn normal radiographic anatomy.
Software for roof defects recognition on aerial photographs
NASA Astrophysics Data System (ADS)
Yudin, D.; Naumov, A.; Dolzhenko, A.; Patrakova, E.
2018-05-01
The article presents software for roof defect recognition on aerial photographs taken by air drones. An aerial image segmentation mechanism is described. It allows detection of roof defects: unsmoothness that causes water stagnation after rain. It is shown that an HSV-transformation approach allows quick detection of stagnation areas, their sizes, and their perimeters, but is sensitive to shadows and to changes of roofing type. A deep fully convolutional network (FCN) software solution eliminates this drawback. The tested data set consists of roofing photos with defects and binary masks for them. The FCN approach gave acceptable image segmentation results in terms of average Dice metric. This software can be used to automate the inspection of roof conditions in the production sector and in housing and utilities infrastructure.
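The HSV-transformation approach described above can be sketched with OpenCV: convert the aerial photo to HSV, threshold for dark, low-saturation patches, and measure the size and perimeter of each candidate stagnation area. The threshold bounds below are illustrative assumptions, not the paper's values:

```python
import cv2
import numpy as np

img = cv2.imread("roof.jpg")                      # aerial roof photo (BGR)
if img is None:
    raise FileNotFoundError("roof.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Dark, low-saturation patches as candidate water-stagnation areas
mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 80, 90]))

# Size and perimeter of each detected area
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    if area > 50:                                 # ignore speckle
        print(f"area = {area:.0f} px^2, perimeter = {perim:.0f} px")
```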
A Software Framework for Aircraft Simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
2008-01-01
The National Aeronautics and Space Administration Dryden Flight Research Center has a long history in developing simulations of experimental fixed-wing aircraft from gliders to suborbital vehicles on platforms ranging from desktop simulators to pilot-in-the-loop/aircraft-in-the-loop simulators. Regardless of the aircraft or simulator hardware, much of the software framework is common to all NASA Dryden simulators. Some of this software has withstood the test of time, but in recent years the push toward high-fidelity user-friendly simulations has resulted in some significant changes. This report presents an overview of the current NASA Dryden simulation software framework and capabilities with an emphasis on the new features that have permitted NASA to develop more capable simulations while maintaining the same staffing levels.
Design and Applications of Rapid Image Tile Producing Software Based on Mosaic Dataset
NASA Astrophysics Data System (ADS)
Zha, Z.; Huang, W.; Wang, C.; Tang, D.; Zhu, L.
2018-04-01
Map tile technology is widely used in web geographic information services, and efficient map tile production is a key technology for serving images rapidly on the web. In this paper, rapid tile-production software for image data based on a mosaic dataset is designed, and the tile-production workflow is given. Key technologies such as cluster processing, map representation, tile checking, tile conversion, and in-memory compression are discussed. Tests with actual image data show that this software has a high degree of automation, effectively reduces the number of I/O operations, and improves tile-production efficiency. Moreover, manual operations are reduced significantly.
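Tile production ultimately reduces to addressing tiles in a pyramid. The standard XYZ ("slippy map") conversion from longitude/latitude to tile indices, shown below, illustrates the kind of computation such a pipeline performs; the paper does not specify its exact tiling scheme:

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Web Mercator XYZ tile indices for a point at a given zoom level."""
    n = 2 ** zoom                                  # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(116.39, 39.91, 10))  # a zoom-10 tile over Beijing
```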
Tool Use Within NASA Software Quality Assurance
NASA Technical Reports Server (NTRS)
Shigeta, Denise; Port, Dan; Nikora, Allen P.; Wilf, Joel
2013-01-01
As space mission software systems become larger and more complex, it is increasingly important for the software assurance effort to have the ability to effectively assess both the artifacts produced during software system development and the development process itself. Conceptually, assurance is a straightforward idea - it is the result of activities carried out by an organization independent of the software developers to better inform project management of potential technical and programmatic risks, and thus increase management's confidence in the decisions they ultimately make. In practice, effective assurance for large, complex systems often entails assessing large, complex software artifacts (e.g., requirements specifications, architectural descriptions) as well as substantial amounts of unstructured information (e.g., anomaly reports resulting from testing activities during development). In such an environment, assurance engineers can benefit greatly from appropriate tool support. In order to do so, an assurance organization will need accurate and timely information on the tool support available for various types of assurance activities. In this paper, we investigate the current use of tool support for assurance organizations within NASA, and describe on-going work at JPL for providing assurance organizations with the information about tools they need to use them effectively.
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and easily coupling to, any scientific software application. GLO runs the optimization module and the scientific application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
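The optimize-run-extract loop described above maps naturally onto any general-purpose optimizer. A minimal sketch using scipy.optimize in place of GLO, with an invented external executable and file names standing in for the scientific application:

```python
import subprocess
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # "GLO-PUT" step: write the trial parameters into the application's input file
    with open("model.in", "w") as f:
        f.write("\n".join(f"p{i} = {v}" for i, v in enumerate(params)))
    # Run the scientific application (hypothetical executable and file names)
    subprocess.run(["./simulate", "model.in", "model.out"], check=True)
    # "GLO-GET" step: extract the predicted result and compare to the target
    predicted = np.loadtxt("model.out")
    target = np.loadtxt("measured_profile.txt")   # e.g. Taylor cylinder profile
    return float(np.sum((predicted - target) ** 2))

best = minimize(objective, x0=np.array([1.0, 0.5]), method="Nelder-Mead")
print(best.x)   # "best" parameter set found by the loop
```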
PFLOTRAN-RepoTREND Source Term Comparison Summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Jennifer M.
Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.
Simulation test beds for the space station electrical power system
NASA Technical Reports Server (NTRS)
Sadler, Gerald G.
1988-01-01
NASA Lewis Research Center and its prime contractor are responsible for developing the electrical power system on the space station. The power system will be controlled by a network of distributed processors. Control software will be verified, validated, and tested in hardware and software test beds. Current plans for the software test bed involve using real time and nonreal time simulations of the power system. This paper will discuss the general simulation objectives and configurations, control architecture, interfaces between simulator and controls, types of tests, and facility configurations.
Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM
NASA Astrophysics Data System (ADS)
Cajka, R.; Vaskova, J.; Vasek, J.
2018-04-01
For decades, attention has been paid to the interaction of foundation structures and subsoil and to the development of interaction models. Given that analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, they are increasingly being replaced by numerical solutions (e.g., FEM, the finite element method). Numerical analysis provides greater possibilities for taking into account the real factors involved in subsoil-structure interaction and was also used in this article. This makes it possible to design foundation structures more efficiently while remaining reliable and secure. Several software packages can currently deal with the interaction of foundations and subsoil. It has been demonstrated that the non-commercial software MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In MKPINTER, stress-strain analysis of an elastic half-space is performed by means of Gauss numerical integration and the Jacobian of transformation. Input data for the numerical analysis were obtained by an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed at the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
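For context on the half-space analysis mentioned above, the classical Boussinesq solution gives the vertical stress under a vertical point load P on an elastic half-space as sigma_z = 3 P z^3 / (2 pi R^5). A minimal sketch for illustration; MKPINTER's actual formulation, with Gauss integration over loaded areas, is more general:

```python
import math

def boussinesq_sigma_z(P, r, z):
    """Vertical stress at horizontal distance r and depth z under point load P."""
    R = math.hypot(r, z)                      # distance from the load point
    return 3.0 * P * z**3 / (2.0 * math.pi * R**5)

# Stress at 1 m depth directly beneath a 100 kN load
print(f"{boussinesq_sigma_z(100e3, 0.0, 1.0):.0f} Pa")
```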
Sankaranarayanan, Ganesh; Halic, Tansel; Arikatla, Venkata Sreekanth; Lu, Zhonghua; De, Suvranu
2010-01-01
Purpose Surgical simulations require haptic interactions and collaboration in a shared virtual environment. A software framework for decoupled surgical simulation based on a multi-controller and multi-viewer model-view-controller (MVC) pattern was developed and tested. Methods A software framework for multimodal virtual environments was designed, supporting both visual interactions and haptic feedback while providing developers with an integration tool for heterogeneous architectures that maintains high performance, simplicity of implementation, and straightforward extension. The framework uses decoupled simulation with update rates of over 1,000 Hz for haptics and accommodates networked simulation with delays of over 1,000 ms without performance penalty. Results The simulation software framework was implemented and used to support the design of virtual reality-based surgery simulation systems. The framework supports the high level of complexity of such applications and the fast response required for interaction with haptics. The efficacy of the framework was tested by implementation of a minimally invasive surgery simulator. Conclusion A decoupled simulation approach can be implemented as a framework to handle simultaneous processes of the system at the various frame rates each process requires. The framework was successfully used to develop collaborative virtual environments (VEs) involving geographically distributed users connected through a network, with results comparable to VEs for local users. PMID:20714933
Comparison of methods for quantitative evaluation of endoscopic distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua
2015-03-01
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high-resolution, capsule, and disposable endoscopes promise significant improvements in effectiveness as well as in patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software, and the results were used to obtain corrected images. Corrected images based on the ML approach and on the software were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality, and assurance of performance during clinical use of endoscopic technologies.
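A local magnification measure can be sketched as the ratio of measured grid spacing to the spacing expected at the image center. The sketch below on a synthetic distorted grid row is illustrative only; the paper's ML algorithm is more elaborate:

```python
import numpy as np

def local_magnification(points, center_spacing):
    """points: (n, m, 2) array of detected grid intersections in row/col order.
    Returns ML = local horizontal spacing / paraxial (center) spacing per cell."""
    dx = np.linalg.norm(np.diff(points, axis=1), axis=-1)  # neighbor gaps (px)
    return dx / center_spacing

# Synthetic barrel-distorted grid row: spacing shrinks away from the center
row = np.cumsum([0, 10.0, 9.6, 9.1, 8.5])
pts = np.stack([row, np.zeros_like(row)], axis=-1)[None, :, :]
print(local_magnification(pts, center_spacing=10.0))  # ML falls below 1 off-axis
```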
System integration test plan for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.
This document presents the system integration test plan for the Commercial-Off-The-Shelf (COTS) PassPort and PeopleSoft software and the custom software created to work with the COTS products. The PassPort (PP) software is an integrated application for Accounts Payable, Contract Management, Inventory Management, Purchasing, and Material Safety Data Sheets. The PeopleSoft (PS) software is an integrated application for Project Costing, General Ledger, Human Resources/Training, Payroll, and Base Benefits.
Instrument control software development process for the multi-star AO system ARGOS
NASA Astrophysics Data System (ADS)
Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.
2012-09-01
The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO system consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components, such as lasers, calibration swing arms, and slope computers, that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is to run this AO system and to provide convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of large and complex software programs with a maintainable code base, the delivery of software components with the desired functionality, and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software, such as the novel middleware from LINC-NIRVANA, an instrument for the LBT; provide many tests at different functional levels, such as unit tests and regression tests; agree on code and architecture style; and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.
Behavior driven testing in ALMA telescope calibration software
NASA Astrophysics Data System (ADS)
Gil, Juan P.; Garces, Mario; Broguiere, Dominique; Shen, Tzu-Chiang
2016-07-01
ALMA's software development cycle includes well-defined testing stages involving developers, testers, and scientists. We adapted Behavior Driven Development (BDD) to the testing activities applied to the Telescope Calibration (TELCAL) software. BDD is an agile technique that encourages communication between roles by defining test cases in natural language to specify features and scenarios, which allows participants to share a common language and provides a high-level set of automated tests. This work describes how we implemented and maintain BDD testing for TELCAL, the infrastructure needed to support it, and proposals to expand this technique to other subsystems.
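BDD test cases of the kind described above pair a natural-language scenario with executable step definitions. A minimal sketch using the Python behave tool; the feature text and step names are invented for illustration, not taken from the actual TELCAL suite:

```python
# The Gherkin scenario would live in a .feature file:
#
#   Feature: Phase calibration
#     Scenario: Computing phase solutions from a calibrator scan
#       Given a calibrator observation with 3 antennas
#       When the phase calibration is computed
#       Then a phase solution is produced for every antenna
#
# steps/telcal_steps.py
from behave import given, when, then

@given("a calibrator observation with {n:d} antennas")
def step_given_observation(context, n):
    context.antennas = list(range(n))

@when("the phase calibration is computed")
def step_when_calibrate(context):
    # Placeholder for the real calibration call
    context.solutions = {a: 0.0 for a in context.antennas}

@then("a phase solution is produced for every antenna")
def step_then_check(context):
    assert set(context.solutions) == set(context.antennas)
```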
2014-01-01
Background A balance test provides important information, such as a standard for judging an individual's functional recovery or predicting falls. A tool for balance testing that is inexpensive and widely available is needed, especially in clinical settings. The Wii Balance Board (WBB) is designed to test balance, but little software exists for using it in balance tests, and there are few studies on its reliability and validity. Thus, we developed balance assessment software using the Nintendo Wii Balance Board, investigated its reliability and validity, and compared it with a laboratory-grade force platform. Methods Twenty healthy adults participated in our study. Participants took part in tests of inter-rater reliability, intra-rater reliability, and concurrent validity. The tests were performed with the balance assessment software using the Nintendo Wii Balance Board and with a laboratory-grade force platform. Data such as Center of Pressure (COP) path length and COP velocity were acquired from the assessment systems. Inter-rater reliability, intra-rater reliability, and concurrent validity were analyzed by intraclass correlation coefficient (ICC) values and the standard error of measurement (SEM). Results The inter-rater reliability (ICC: 0.89-0.79, SEM in path length: 7.14-1.90, SEM in velocity: 0.74-0.07), intra-rater reliability (ICC: 0.92-0.70, SEM in path length: 7.59-2.04, SEM in velocity: 0.80-0.07), and concurrent validity (ICC: 0.87-0.73, SEM in path length: 5.94-0.32, SEM in velocity: 0.62-0.08) were high in terms of COP path length and COP velocity. Conclusion The balance assessment software incorporating the Nintendo Wii Balance Board was found to be a reliable assessment device. In clinical settings, the device can be remarkably inexpensive, portable, and convenient for balance assessment. PMID:24912769
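The two summary measures reported here, COP path length and COP velocity, are straightforward to compute from a sampled COP trajectory. A minimal sketch on synthetic data (not the study's):

```python
import numpy as np

def cop_metrics(cop_xy, fs):
    """cop_xy: (n, 2) COP positions in cm; fs: sampling rate in Hz.
    Returns total path length (cm) and mean velocity (cm/s)."""
    steps = np.diff(cop_xy, axis=0)                    # per-sample displacement
    path_length = np.linalg.norm(steps, axis=1).sum()  # sum of step lengths
    duration = (len(cop_xy) - 1) / fs
    return path_length, path_length / duration

rng = np.random.default_rng(1)
cop = np.cumsum(0.05 * rng.standard_normal((3000, 2)), axis=0)  # 30 s at 100 Hz
length, velocity = cop_metrics(cop, fs=100.0)
print(f"path length = {length:.1f} cm, mean velocity = {velocity:.2f} cm/s")
```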
A survey of quality assurance practices in biomedical open source software projects.
Koru, Günes; El Emam, Khaled; Neisa, Angelica; Umarji, Medha
2007-05-07
Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics. Given the mission critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow the users of biomedical OS software to better understand the quality risks, if any, and the developers to identify process improvement opportunities to produce higher quality software. A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices. Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051) and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95). Less attention is paid to peer review than testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. Developers of biomedical OS applications should invest more effort in implementing systemic peer review practices throughout the development and maintenance processes.
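For readers who want to reproduce interval estimates of this kind, here is a minimal normal-approximation (Wald) sketch; the sample size n=111 is a hypothetical value chosen only because it roughly reproduces the reported 54-72% interval, not a figure taken from the survey.

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Example: 63% of projects lacking peer review; n is hypothetical.
lo, hi = wald_ci(0.63, 111)
print(f"63% (95% CI, {lo:.0%}-{hi:.0%})")
```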
Real-Time Extended Interface Automata for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi; Man, Tianlong; Liu, Bin
2014-01-01
Testing and verification of the interfaces between software components are particularly important due to the large number of complex interactions, which requires traditional modeling languages to overcome their shortcomings in describing temporal information and controlling software testing inputs. This paper presents the real-time extended interface automata (RTEIA), which add clearer and more detailed temporal information description through the application of time words. We also establish an input interface automaton for every input in order to flexibly solve the problems of input control and interface coverage when applied in the software testing field. Detailed definitions of the RTEIA and the test case generation algorithm are provided in this paper. The feasibility and efficiency of this method have been verified in the testing of one real aircraft braking system. PMID:24892080
Design and validation of Segment--freely available software for cardiovascular image analysis.
Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan
2010-01-11
Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
Optimal shortening of uniform covering arrays
Rangel-Valdez, Nelson; Avila-George, Himer; Carrizalez-Turrubiates, Oscar
2017-01-01
Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over Z_v = {0, …, v−1} with the property that every N × t sub-array covers all t-tuples of Z_v^t at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions of (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays. PMID:29267343
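The definition translates directly into a brute-force checker that counts missing t-tuples, which is exactly the quantity the OSCAR objective minimizes. A minimal sketch (not the authors' implementation):

```python
from itertools import combinations

def missing_tuples(array, t, v):
    """Count the t-tuples of Z_v^t not covered by any row, summed over
    all t-subsets of columns. `array` is a list of rows with symbols
    in {0, ..., v-1}; a covering array returns 0."""
    k = len(array[0])
    missing = 0
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        missing += v ** t - len(seen)
    return missing

# CA(4; 2, 3, 2): every pair of columns covers all four binary pairs.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(missing_tuples(ca, t=2, v=2))  # 0 -> it is a covering array
```

Scoring a candidate shortening is then just a matter of deleting δ rows and Δ columns and re-running the count.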
Postel-Vinay, Nicolas; Bobrie, Guillaume; Ruelland, Alan; Oufkir, Majida; Savard, Sebastien; Persu, Alexandre; Katsahian, Sandrine; Plouin, Pierre F
2016-04-01
Hy-Result is the first software for self-interpretation of home blood pressure measurement results, taking into account both the recommended thresholds for normal values and patient characteristics. We compared the software-generated classification with the physician's evaluation. The primary assessment criterion was whether the algorithm's classification of blood pressure (BP) status concurred with the physician's advice (blinded to the software's results) following a consultation (n=195 patients). The secondary assessment was the reliability of text messages. In the 58 untreated patients, the agreement between the BP status classification generated by the software and the physician's classification was 87.9%. In the 137 treated patients, the agreement was 91.9%. The κ statistic for all patients was 0.81 (95% confidence interval: 0.73-0.89). After correction of errors identified in the algorithm during the study, agreement increased to 95.4% [κ=0.9 (95% confidence interval: 0.84-0.97)]. For 100% of the patients with comorbidities (n=46), specific text messages were generated, indicating that a physician might recommend a target BP lower than 135/85 mmHg. Specific text messages were also generated for 100% of the patients whose global cardiovascular risks markedly exceeded norms. Classification by Hy-Result is at least as accurate as that of a specialist in current practice (http://www.hy-result.com).
Data collection and analysis software development for rotor dynamics testing in spin laboratory
NASA Astrophysics Data System (ADS)
Abdul-Aziz, Ali; Arble, Daniel; Woike, Mark
2017-04-01
Gas turbine engine components undergo high rotational loading and other complex environmental conditions. Such operating environments lead these components to experience damage and cracks that can cause catastrophic failure during flight. Traditional crack detection and health monitoring methodologies currently in use rely on periodic routine maintenance and nondestructive inspections that oftentimes involve engine and component disassembly. These methods also do not offer adequate information about the faults, especially if the faults are subsurface or not clearly evident. At NASA Glenn Research Center, the rotor dynamics laboratory is presently involved in developing newer techniques that rely heavily on sensor technology to enable health monitoring and prediction of damage and cracks in rotor disks. These approaches are noninvasive and relatively economical. Spin tests are performed using a subscale test article mimicking a turbine rotor disk undergoing rotational load. Non-contact instruments such as capacitive and microwave sensors are used to measure the blade tip gap displacement and blade vibration characteristics in an attempt to develop a physics-based model to assess and predict faults in the rotor disk. Data collection is a major component in this experimental-analytical procedure, and as a result, an upgrade to an older version of the LabVIEW-based data acquisition software has been implemented to support efficient test runs and analysis of the results. Outcomes obtained from the test data and the related experimental and analytical rotor dynamics modeling, including key features of the updated software, are presented and discussed.
NASA Technical Reports Server (NTRS)
Nickum, J. D.
1978-01-01
The software package developed for the KIM-1 Micro-System and the Mini-L PLL receiver to simplify taking flight test data is described, along with the address and data bus buffers used in the KIM-1 Micro-System. The interface hardware and timing are also presented to completely describe the software programs.
Design ATE systems for complex assemblies
NASA Astrophysics Data System (ADS)
Napier, R. S.; Flammer, G. H.; Moser, S. A.
1983-06-01
The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is also immune to any pressure to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single host operating system to execute multiple guest operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. By benchmarking multiple chemistry software packages, we confirmed that the computational speed penalty for using virtual machines is low, around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs, especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
Cooperative Work and Sustainable Scientific Software Practices in R
NASA Astrophysics Data System (ADS)
Weber, N.
2013-12-01
Most scientific software projects are dependent on the work of many diverse people, institutions and organizations. Incentivizing these actors to cooperatively develop software that is both reliable and sustainable is complicated by the fact that the reward structures of these various actors greatly differ: research scientists want results from a software or model run in order to publish papers, produce new data, or test a hypothesis; software engineers and research centers want compilable, well documented code that is refactorable, reusable and reproducible in future research scenarios. While much research has been done on incentives and motivations for participating in open source software projects or cyberinfrastructure development, little work has been done on what motivates or incentivizes developers to maintain scientific software projects beyond their original application. This poster will present early results of research into the incentives and motivation for cooperative scientific software development. In particular, this work focuses on motivations for the maintenance and repair of libraries on the software platform R. Our work here uses a sample of R packages that were created by research centers, or are specific to earth, environmental and climate science applications. We first mined 'check' logs from the Comprehensive R Archive Network (CRAN) to determine the amount of time a package has existed, the number of versions it has gone through over this time, the number of releases, and finally the contact information for each official package 'maintainer'. We then sent a survey to each official maintainer, asking them questions about what role they played in developing the original package, and what their motivations were for sustaining the project over time. We will present early results from this mining and our survey of R maintainers.
Noninvasive Test Detects Cardiovascular Disease
NASA Technical Reports Server (NTRS)
2007-01-01
At NASA's Jet Propulsion Laboratory (JPL), NASA-developed Video Imaging Communication and Retrieval (VICAR) software laid the groundwork for analyzing images of all kinds. A project seeking to use imaging technology for health care diagnosis began when the imaging team considered using the VICAR software to analyze X-ray images of soft tissue. With marginal success using X-rays, the team applied the same methodology to ultrasound imagery, which was already digitally formatted. The new approach proved successful for assessing amounts of plaque build-up and arterial wall thickness, direct predictors of heart disease, and the result was a noninvasive diagnostic system with the ability to accurately predict heart health. Medical Technologies International Inc. (MTI) further developed and then submitted the technology to a rigorous review process at the FDA, which cleared the software for public use. The software, patented under the name Prowin, is being used in MTI's patented ArterioVision, a carotid intima-media thickness (CIMT) test that uses ultrasound image-capturing and analysis software to noninvasively identify the risk for the major cause of heart attack and strokes: atherosclerosis. ArterioVision provides a direct measurement of atherosclerosis by safely and painlessly measuring the thickness of the first two layers of the carotid artery wall using an ultrasound procedure and advanced image-analysis software. The technology is now in use in all 50 states and in many countries throughout the world.
Intensity measurement of automotive headlamps using a photometric vision system
NASA Astrophysics Data System (ADS)
Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.
1996-01-01
Requirements for automotive headlamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.
Simulation-based Testing of Control Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozmen, Ozgur; Nutaro, James J.; Sanyal, Jibonananda
It is impossible to adequately test complex software by examining its operation in a physical prototype of the system monitored. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model based testing environment; specifically, we show that a complete software stack, including operating system and application software, can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.
2013-07-01
ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from medical images in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
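A voxel-by-voxel comparison of two dose grids, as reported here, reduces to a relative-error computation; the following numpy sketch uses synthetic grids, since the phantom data are not reproduced in the abstract.

```python
import numpy as np

def max_relative_error(dose_ref, dose_test, floor=1e-12):
    """Elementwise relative difference between two 3D dose grids,
    ignoring voxels where the reference dose is ~0."""
    mask = np.abs(dose_ref) > floor
    rel = np.abs(dose_ref[mask] - dose_test[mask]) / np.abs(dose_ref[mask])
    return rel.max()

# Toy example: a 'reference' grid and a copy perturbed within 1%.
rng = np.random.default_rng(0)
ref = rng.uniform(1.0, 2.0, size=(16, 16, 16))
test = ref * (1 + rng.uniform(-0.01, 0.01, size=ref.shape))
print(f"max relative error: {max_relative_error(ref, test):.3%}")
```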
Waveform Generator Signal Processing Software
DOT National Transportation Integrated Search
1988-09-01
This report describes the software that was developed to process test waveforms that were recorded by crash test data acquisition systems. The test waveforms are generated by an electronic waveform generator developed by MGA Research Corporation unde...
Software error data collection and categorization
NASA Technical Reports Server (NTRS)
Ostrand, T. J.; Weyuker, E. J.
1982-01-01
Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.
Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA
2011-01-25
A recursive verification protocol that reduces the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides an efficient manner of testing wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
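Read as an algorithm, the protocol is a one-hop chain of verifications over the network graph. The sketch below is an illustrative reconstruction from the description above; the topology and the check function are invented placeholders, not the patented implementation.

```python
def verify_network(graph, root, check):
    """Walk the sensor network so each node is verified by a neighbor one
    hop away. `graph` maps node -> set of neighbors; `check(verifier,
    subject)` returns True when the subject's timing signature is valid."""
    verified, failed = {root}, set()
    frontier = [root]
    while frontier:
        verifier = frontier.pop()
        for subject in graph[verifier]:
            if subject in verified or subject in failed:
                continue  # each node is tested exactly once
            if check(verifier, subject):
                verified.add(subject)
                frontier.append(subject)  # subject becomes the next verifier
            else:
                failed.add(subject)  # downstream halts; others route around
    return verified, failed

# Toy 4-node network in which node 'c' fails its signature check.
g = {"a": {"b"}, "b": {"a", "c", "d"}, "c": {"b"}, "d": {"b"}}
ok, bad = verify_network(g, "a", lambda v, s: s != "c")
print(ok, bad)  # {'a', 'b', 'd'} and {'c'}
```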
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of model changes from one to another, all functions of a search technique must be reimplemented, because the model types differ even when the same search technique is applied. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce such redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
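The framework's central idea, keeping the search algorithm ignorant of the model type, can be illustrated with a small hill climber whose only model knowledge enters through neighbors and fitness callbacks. All names here are hypothetical, not the framework's API:

```python
def hill_climb(initial, neighbors, fitness, iters=100):
    """Model-independent search loop: only `neighbors` and `fitness` know
    about the model type, so the algorithm is written once."""
    best, best_f = initial, fitness(initial)
    for _ in range(iters):
        cand = max(neighbors(best), key=fitness)  # steepest-ascent step
        f = fitness(cand)
        if f <= best_f:
            break  # local optimum reached
        best, best_f = cand, f
    return best, best_f

# Toy 'model': find a test input maximizing closeness to a target branch
# condition; the fitness peaks at x = (3, 7).
target = (3, 7)
fitness = lambda x: -abs(x[0] - target[0]) - abs(x[1] - target[1])
neighbors = lambda x: [(x[0] + dx, x[1] + dy)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0)]
print(hill_climb((0, 0), neighbors, fitness))  # ((3, 7), 0)
```

Swapping in a different model type means supplying new callbacks, while hill_climb (or any other search technique) is reused unchanged.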
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurements on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help define the performance metrics for High Energy Physics applications, based on the real experiment software.
Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard
2013-01-01
Purpose: With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and interinstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. Methods: A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. Results: The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. Conclusions: The work demonstrates the viability of the design approach and the software tool for analysis of large data sets. PMID:24320426
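The filtering stage described here maps onto standard scipy.stats calls. A hedged sketch on synthetic outcome data (the threshold and dose distributions are invented; the paper derives its threshold from an ROC analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy outcome data: doses (Gy) for patients without and with a complication.
dose_no_event = rng.normal(18.0, 4.0, 80)
dose_event = rng.normal(24.0, 4.0, 20)

threshold = 20.0  # stand-in for an ROC-derived threshold

# Contingency table at the candidate threshold -> Fisher exact test
table = [[np.sum(dose_event >= threshold), np.sum(dose_event < threshold)],
         [np.sum(dose_no_event >= threshold), np.sum(dose_no_event < threshold)]]
odds, p_fisher = stats.fisher_exact(table)

# Welch t-test (unequal variances) and Kolmogorov-Smirnov test
t_stat, p_welch = stats.ttest_ind(dose_event, dose_no_event, equal_var=False)
ks_stat, p_ks = stats.ks_2samp(dose_event, dose_no_event)

print(f"Fisher p={p_fisher:.2e}, Welch p={p_welch:.2e}, KS p={p_ks:.2e}")
```

A dose metric would pass the filter only when all of these tests agree, which is the multi-test consensus the abstract describes.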
[Results of testing of MINISKAN mobile gamma-ray camera and specific features of its design].
Utkin, V M; Kumakhov, M A; Blinov, N N; Korsunskiĭ, V N; Fomin, D K; Kolesnikova, N V; Tultaev, A V; Nazarov, A A; Tararukhina, O B
2007-01-01
The main results of engineering, biomedical, and clinical testing of MINISKAN mobile gamma-ray camera are presented. Specific features of the camera hardware and software, as well as the main technical specifications, are described. The gamma-ray camera implements a new technology based on reconstructive tomography, aperture encoding, and digital processing of signals.
Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)
NASA Technical Reports Server (NTRS)
Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.
2017-01-01
This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.
Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M
2018-06-01
This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.
Test Driven Development: Lessons from a Simple Scientific Model
NASA Astrophysics Data System (ADS)
Clune, T. L.; Kuo, K.
2010-12-01
In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
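A minimal flavor of the workflow in Python's unittest: the tests are written first and pin down the expected behavior, then the simplest implementation is written to satisfy them. The linear growth law is a toy assumption, not the snowflake model from the talk.

```python
import unittest

def grow_snowflake(mass, rate, dt, steps):
    """Toy growth model: mass increases at a fixed rate per step
    (the real microphysics would be far richer)."""
    for _ in range(steps):
        mass += rate * dt
    return mass

class TestSnowflakeGrowth(unittest.TestCase):
    # In TDD these tests exist before grow_snowflake does, and they
    # document the scientific assumptions being encoded.
    def test_mass_increases(self):
        self.assertGreater(grow_snowflake(1.0, 0.5, 0.1, 10), 1.0)

    def test_zero_rate_conserves_mass(self):
        self.assertEqual(grow_snowflake(2.0, 0.0, 0.1, 100), 2.0)

    def test_growth_is_linear_in_steps(self):
        m1 = grow_snowflake(0.0, 1.0, 0.1, 10)
        m2 = grow_snowflake(0.0, 1.0, 0.1, 20)
        self.assertAlmostEqual(m2, 2 * m1)

if __name__ == "__main__":
    unittest.main()
```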
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
NASA Astrophysics Data System (ADS)
Petronevich, V. V.
2016-10-01
The paper examines issues related to increasing the efficiency and information content of experimental research in transonic wind tunnels (WT). In particular, questions of optimizing the WT Data Acquisition and Control Systems (DACS) to provide a continuous-mode test method are discussed. The problem of Mach number (M number) stabilization in the test section of large transonic compressor-type wind tunnels at subsonic flow conditions, with continuous change of the aircraft model angle of attack, is examined using the T-128 wind tunnel as an example. To minimize signal distortion in the T-128 DACS measurement channels, the optimal filter settings of the MGCplus data acquisition system used in the T-128 wind tunnel to measure loads were experimentally determined. The tests performed showed good agreement between the balance measurements for the pitch/pause and continuous test modes. Balance tests with the pitch/pause and continuous test methods were carried out with the regular data acquisition and control system of the T-128 wind tunnel running the unified software package POTOK. The architecture and functional capabilities of the POTOK software package are described.
The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation.
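The scale of the problem is easy to make concrete. Under a simple exponential failure model, demonstrating a failure rate of lambda with confidence 1 - alpha from failure-free testing requires T = -ln(alpha)/lambda test hours; this is a standard statistical result, not a formula from the paper itself:

```python
import math

def zero_failure_test_hours(target_rate_per_hour, confidence=0.99):
    """Test time needed to claim failure rate <= target at the given
    confidence when no failures are observed (exponential model)."""
    alpha = 1 - confidence
    return -math.log(alpha) / target_rate_per_hour

hours = zero_failure_test_hours(1e-9)  # a typical 'ultrareliability' goal
print(f"{hours:.2e} hours = {hours / 8766:.0f} years of failure-free testing")
```

At 10^-9 failures per hour this works out to roughly half a million years of testing, which is precisely the paper's point about infeasibility.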
Software verification plan for GCS. [guidance and control software
NASA Technical Reports Server (NTRS)
Dent, Leslie A.; Shagnea, Anita M.; Hayhurst, Kelly J.
1990-01-01
This verification plan is written as part of an experiment designed to study the fundamental characteristics of the software failure process. The experiment will be conducted using several implementations of software that were produced according to industry-standard guidelines, namely the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, for the development of flight software. This plan fulfills the DO-178A requirements for providing instructions on the testing of each implementation of software. The plan details the verification activities to be performed at each phase in the development process, contains a step-by-step description of the testing procedures, and discusses all of the tools used throughout the verification process.
Using virtual reality for science mission planning: A Mars Pathfinder case
NASA Technical Reports Server (NTRS)
Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.
1994-01-01
NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. Also, successful surface operation of the lander camera requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is interactive, workstation-based application software that provides a high-resolution, 3-dimensional, stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy and parametric specification phases for the development of this software were completed, and the prototype tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of Mars' surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera pointing commands via simulation.
CrossTalk. The Journal of Defense Software Engineering. Volume 13, Number 6, June 2000
2000-06-01
This issue includes: "Techniques for Efficiently Generating and Testing Software" by Keith R. Wegner, which presents a proven process that uses advanced tools to design, develop and test... optimal software; and "Large Software Systems—Back to Basics", on why development methods that work on small problems seem to not scale well to...
Open source IPSEC software in manned and unmanned space missions
NASA Astrophysics Data System (ADS)
Edwards, Jacob
Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth and high-latency communication links to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements. Software changes were suggested to meet the requirements.
The use of emulator-based simulators for on-board software maintenance
NASA Astrophysics Data System (ADS)
Irvine, M. M.; Dartnell, A.
2002-07-01
Traditionally, onboard software maintenance activities within the space sector are performed using hardware-based facilities. These facilities are developed around the use of hardware emulation or breadboards containing target processors. Some sort of environment is provided around the hardware to support the maintenance activities. However, these environments are not easy to use for setting up the required test scenarios, particularly when the onboard software executes in a dynamic I/O environment, e.g. attitude control software or data handling software. In addition, the hardware and/or environment may not support the test set-up required during investigations into software anomalies, e.g. raising a spurious interrupt, failing memory, etc., and the overall "visibility" of the executing software may be limited. The Software Maintenance Simulator (SOMSIM) is a tool that can support the traditional maintenance facilities. Some of the main benefits that SOMSIM can provide are: a low-cost, flexible extension to an existing product, i.e. an operational simulator containing a software processor emulator; a system-level, high-fidelity test-bed in which the software "executes"; a high degree of control and configuration over the entire "system", including contingency conditions perhaps not possible with real hardware; and high visibility and control over execution of the emulated software. This paper describes the SOMSIM concept in more detail, and also describes the SOMSIM study being carried out for ESA/ESOC by VEGA IT GmbH.
Wenski, Edward G [Lenexa, KS
2007-08-21
A micro-tensile testing system providing a stand-alone test platform for testing and reporting physical or engineering properties of test samples of materials having thicknesses of approximately between 0.002 inch and 0.030 inch, including, for example, LiGA engineered materials. The testing system is able to perform a variety of static, dynamic, and cyclic tests. The testing system includes a rigid frame and adjustable gripping supports to minimize measurement errors due to deflection or bending under load; serrated grips for securing the extremely small test sample; high-speed laser scan micrometers for obtaining accurate results; and test software for controlling the testing procedure and reporting results.
Gravity Probe B data system description
NASA Astrophysics Data System (ADS)
Bennett, Norman R.
2015-11-01
The Gravity Probe B data system, developed, integrated, and tested by Lockheed Missiles & Space Company, and later Lockheed Martin Corporation, included flight and ground command, control, and communications software. The development was greatly facilitated, conceptually and by the transfer of key personnel, through Lockheed’s earlier flight and ground test software development for the Hubble Space Telescope (HST). Key design challenges included the tight mission timeline (17 months, 9 days of on-orbit operation), the need to tune the system once on-orbit, and limited 2 Kbps real-time data rates and ground asset availability. The result was a completely integrated space vehicle and Stanford mission operations center, which successfully collected and archived 97% of the ‘guide star valid’ data to support the science analysis. Lessons learned and incorporated from the HST flight software development and on-orbit support experience, and Lockheed’s independent research and development effort, will be discussed.
Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software
NASA Technical Reports Server (NTRS)
Ruiz, Ronald P.
2003-01-01
Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.
An Earthquake Shake Map Routine with Low Cost Accelerometers: Preliminary Results
NASA Astrophysics Data System (ADS)
Alcik, H. A.; Tanircan, G.; Kaya, Y.
2015-12-01
Vast amounts of high-quality strong motion data are indispensable inputs for analyses in the fields of geotechnical and earthquake engineering; however, the high cost of installing strong motion systems constitutes the biggest obstacle to their worldwide dissemination. In recent years, MEMS (micro-electro-mechanical systems) based accelerometers have been used in seismological research-oriented studies as well as earthquake engineering oriented projects, basically due to the precision obtained in downsized instruments. In this research, our primary goal is to enable the usage of these low-cost instruments in the creation of shake maps immediately after a strong earthquake. The second goal is to develop software that will automatically process the real-time data coming from the rapid response network and create the shake map. For those purposes, four MEMS sensors have been set up to deliver real-time data. Data transmission is done through 3G modems. A subroutine was coded in assembler language and embedded into the operating system of each instrument to create MiniSEED files with 1-second packets instead of 512-byte packets. The Matlab-based software calculates the strong motion (SM) parameters every second, and they are compared with user-defined thresholds. A voting system embedded in the software captures the event if the total vote exceeds the threshold. The user interface of the software enables users to monitor the calculated SM parameters either in a table or in a graph (Figure 1). A small-scale and affordable rapid response network has been created using four MEMS sensors, and the functionality of the software has been tested and validated using shake table tests. The entire system was tested together with a reference sensor under real strong ground motion recordings as well as series of sine waves with varying amplitude and frequency. The successful realization of this software allowed us to set up a test network at Tekirdağ Province, the closest coastal point to the moderate-size earthquake activity in the Marmara Sea, Turkey.
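The per-second parameter-plus-voting scheme can be sketched compactly; the calibration constant, thresholds, and vote weights below are invented placeholders for the user-defined values mentioned above.

```python
import numpy as np

def pga(window_counts, counts_per_g=4096.0):
    """Peak ground acceleration (in g) of a 1-second sample window;
    the counts-to-g scale is a stand-in for the MEMS calibration."""
    return np.max(np.abs(np.asarray(window_counts))) / counts_per_g

def station_vote(pga_g, thresholds=(0.01, 0.05, 0.1), votes=(1, 2, 3)):
    """Map a station's PGA to a vote; thresholds are user-defined."""
    v = 0
    for th, vote in zip(thresholds, votes):
        if pga_g >= th:
            v = vote
    return v

def declare_event(station_pgas, total_vote_threshold=4):
    """Declare an event when the summed station votes exceed the threshold."""
    return sum(station_vote(p) for p in station_pgas) >= total_vote_threshold

# Four-station example: two stations see strong shaking, two see noise.
print(declare_event([0.12, 0.06, 0.002, 0.001]))  # True (3 + 2 votes)
```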
CSE database: extended annotations and new recommendations for ECG software testing.
Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie
2017-08-01
Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for estimating the quality of diagnostic software were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. The 4R consensus represents the correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and Jaccard coefficient = 72.21-81.14%. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. This quantification of correct-classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithms and promote further development. The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
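The three reported agreement measures are straightforward to compute for multi-label diagnoses. A small sketch with hypothetical diagnosis labels (not records from the CSE database):

```python
def diagnosis_metrics(predicted, consensus):
    """Multi-label agreement between a classifier's diagnoses and the
    reference (4R consensus) for one ECG record."""
    predicted, consensus = set(predicted), set(consensus)
    tp = len(predicted & consensus)
    sensitivity = tp / len(consensus) if consensus else 1.0
    ppv = tp / len(predicted) if predicted else 1.0
    union = predicted | consensus
    jaccard = tp / len(union) if union else 1.0
    return sensitivity, ppv, jaccard

# Hypothetical record: consensus has 3 diagnoses; the software finds
# 2 of them plus 1 false positive.
se, ppv, j = diagnosis_metrics(
    predicted={"sinus rhythm", "LBBB", "LVH"},
    consensus={"sinus rhythm", "LBBB", "old MI"})
print(f"Se={se:.2f} PPV={ppv:.2f} Jaccard={j:.2f}")  # 0.67 0.67 0.50
```

Averaging these per-record values over a test set yields figures directly comparable to the recommended ranges above.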
Software engineering and automatic continuous verification of scientific software
NASA Astrophysics Data System (ADS)
Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.
2011-12-01
Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications, from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people, for which we use Bazaar for revision control, making good use of its strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. Testing code in this manner leads to a continuous verification process, not a discrete event performed once development has ceased. Much of the code verification is done via the "gold standard" of comparison to analytical solutions via the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
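The "gold standard" mentioned above has a compact illustration: manufacture a solution, derive the forcing term, and confirm the solver reproduces the scheme's theoretical convergence order. This sketch uses a 1-D Poisson problem, not Fluidity itself.

```python
import numpy as np

def solve_poisson(f, n):
    """Second-order finite differences for -u'' = f on (0,1), u(0)=u(1)=0."""
    h = 1.0 / n
    x = np.linspace(0, 1, n + 1)
    A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# Manufactured solution u = sin(pi x)  =>  forcing f = pi^2 sin(pi x)
exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

errs = []
for n in (16, 32, 64):
    x, u = solve_poisson(f, n)
    errs.append(np.max(np.abs(u - exact(x))))
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(orders)  # ~[2.0, 2.0]: observed order matches the scheme's theory
```

Run automatically on every commit, a check like this turns verification into the continuous process the abstract advocates.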
DAQ: Software Architecture for Data Acquisition in Sounding Rockets
NASA Technical Reports Server (NTRS)
Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.
2011-01-01
A multithreaded software application was developed by the Jet Propulsion Lab (JPL) to collect a set of correlated imagery, Inertial Measurement Unit (IMU) and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance the Terrain Relative Navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data rate requirements for the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open source software, including multiple Camera Link (C-link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper covers the history of the software architecture's usage in other JPL projects and its applicability to future missions, such as cubesats, UAVs, and research planes/balloons, as well as the human aspects of the project, especially JPL's Phaeton program, and the results of the launch.
ATE accomplishes receiver specification testing with increased speed and throughput
NASA Astrophysics Data System (ADS)
Moser, S. A.
1982-12-01
The use of automatic test equipment (ATE) for receiver specifications testing can result in a 90-95% reduction of test time, with a corresponding reduction of labor costs due both to the reduction of personnel numbers and a simplification of tasks that permits less skilled personnel to be employed. These benefits free high-level technicians for more challenging system management assignments. Accuracy and repeatability also improve with the adoption of ATE, since no possibility of human error can be introduced into the readings that are taken by the system. A massive and expensive software design and development effort is identified as the most difficult aspect of ATE implementation, since programming is both time-consuming and labor intensive. An attempt is therefore made by system manufacturers to conduct an integrated development program for both ATE system hardware and software.
Image analysis software for following progression of peripheral neuropathy
NASA Astrophysics Data System (ADS)
Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy
2009-02-01
A relationship has been reported by several research groups [1-4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, including diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects whose levels of severity of diabetic neuropathy were determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.
A dual-waveband dynamic IR scene projector based on DMD
NASA Astrophysics Data System (ADS)
Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang
2016-10-01
An infrared scene simulation system can simulate multiple objects and backgrounds to perform dynamic tests and evaluate EO detection systems in hardware-in-the-loop testing. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a miniature high-temperature IR plane blackbody. An IR collimation optical system whose transmission range covers 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given; the system performed satisfactorily in IR imaging simulation testing.
DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data
NASA Astrophysics Data System (ADS)
Tian, Yu; Ke, Xiaoping; Wang, Yong
2018-04-01
This paper presents three-dimensional density inversion software called DenInv3D that operates on gravity and gravity-gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using an improved preconditioned conjugate gradient (PCG) algorithm. In the PCG algorithm, because empirical parameters such as the Lagrange multiplier are uncertain, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions on them, which allows the resolution of the inversion results to vary with depth. Through inversion of airborne gradiometry data from the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. The DenInv3D software can be downloaded from http://159.226.162.30.
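The authors' improved PCG algorithm is not reproduced in the abstract; the sketch below only shows the general shape of a Tikhonov-regularised inversion solved with a Jacobi-preconditioned conjugate gradient, plus the L-curve sweep from which a corner is picked. The kernel matrix is a random stand-in, not a real gravity kernel:

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 100))                    # stand-in sensitivity matrix
b = A @ rng.normal(size=100) + 0.01 * rng.normal(size=200)

def solve_tikhonov(lam):
    # Normal equations (A^T A + lam*I) x = A^T b with a Jacobi preconditioner
    n = A.shape[1]
    diag = np.sum(A * A, axis=0) + lam
    op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + lam * v)
    M = LinearOperator((n, n), matvec=lambda v: v / diag)
    x, _ = cg(op, A.T @ b, M=M)
    return x

# Sweep the regularisation parameter and record the L-curve points; the corner
# of this residual-norm vs model-norm curve gives the parameter choice
for lam in np.logspace(-4, 2, 7):
    x = solve_tikhonov(lam)
    print(f"lam={lam:9.4f} residual={np.linalg.norm(A @ x - b):8.4f} "
          f"model norm={np.linalg.norm(x):8.4f}")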
LevRad software as a tool to learn how to proceed with an evaluation of barriers.
Ferreira, C C; Souza, S O
2011-05-30
We developed the LevRad software with the objective of teaching how to proceed in an analysis of barriers shielding against x-rays, in order to minimize the contact of the professional or student with x-rays and also to prevent wearing out of the x-ray equipment. Some tests of the software were made, and preliminary results indicate that LevRad is efficient as a complementary tool for the development of professionals related to diagnostic radiology. In education, an advantage is gained when the beginner uses the software before his or her first contact with x-ray equipment in loco. The software introduces basic knowledge about the evaluation of barriers, prevents wearing out of the x-ray tube, reinforces teaching of the evaluation of barriers, and reduces the collective effective dose by avoiding unnecessary exposures when possible.
Software for Analyzing Laminar-to-Turbulent Flow Transitions
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2004-01-01
Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Examples of such activities include code inspections, unit tests, design reviews, performance analyses, and the construction of traceability matrices. In practice, software development projects have only limited resources (e.g., schedule, budget, and availability of personnel) to cover the entire development effort, of which assurance is but a part. Projects must therefore select judiciously from among the possible assurance activities. At its heart, this can be viewed as an optimization problem: namely, to determine the allocation of limited resources (time, money, and personnel) that minimizes risk or, alternatively, to minimize the resources needed to reduce risk to an acceptable level. The end result of the work reported here is a means to optimize the quality-assurance processes used in developing software, achieved by combining two prior programs in an innovative manner.
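The optimization view can be made concrete with a toy formulation (entirely hypothetical numbers, not the combined programs the abstract alludes to): choose hours per assurance activity to maximize total risk reduction under a staffing budget, here as a linear program.

import numpy as np
from scipy.optimize import linprog

# Hypothetical activities: risk units removed per hour, and per-activity hour caps
reduction = np.array([0.8, 0.5, 0.3])        # e.g. inspections, unit tests, reviews
max_hours = np.array([40.0, 60.0, 80.0])
budget = 100.0                               # total staff hours available

# linprog minimizes, so negate the objective to maximize risk reduction
res = linprog(c=-reduction,
              A_ub=[np.ones(3)], b_ub=[budget],
              bounds=list(zip(np.zeros(3), max_hours)))
print(res.x, -res.fun)                       # hours per activity, total risk removed

Real assurance planning would add diminishing returns and integer staffing constraints, but the linear sketch captures the resource-allocation framing.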
Investigation of the Finite Element Software Packages at KSC
NASA Technical Reports Server (NTRS)
Lu, Chu-Ho
1991-01-01
The useful and powerful features of NASTRAN, along with three real-world problems used to test the capabilities of different NASTRAN versions, are discussed. The test problems involve direct transient analysis, nonlinear analysis, and static analysis. Experiences in using graphics software packages are also discussed. It was found that MSC/XL would be more useful if it were improved to generate picture files of the analysis results and extended to support finite element codes other than MSC/NASTRAN. It was also found that the current version of SDRC/I-DEAS (version VI) may have bugs in the 'Data Loader' module.
Study on optimum length of raw material in stainless steel high-lock nuts forging
NASA Astrophysics Data System (ADS)
Cheng, Meiwen; Liu, Fenglei; Zhao, Qingyun; Wang, Lidong
2018-04-01
Taking 302 stainless steel (1Cr18Ni9) high-lock nuts as the research object, the length of the raw material was adjusted, the isothermal forging process at each station was simulated with the DEFORM software, and corresponding field tests were conducted to study the effect of raw material size on the forming performance of stainless steel high-lock nuts. The tests show that the samples at each raw material length agree closely with the DEFORM simulation results. When the length of the raw material is 10 mm, the appearance dimensions of the parts meet the design requirements.
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2014-01-01
Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By their own testimony, many developers find TDD to be addictive after only a few days of exposure and find it unthinkable to return to their previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges posed by numerical and scientific software, as well as tools and implementation approaches that should address those challenges.
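A minimal illustration of the red-green cycle for numerical code (the module, function, and formula choice are hypothetical, not one of the Goddard activities described):

# magnus.py -- implementation plus its TDD-style tests in one file; run with pytest
import math

def saturation_vapor_pressure(t_celsius):
    # Magnus approximation over water, in hPa -- the minimal code that passes
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def test_anchor_point():
    # In TDD this test is written first and fails ("red") until the function exists
    assert math.isclose(saturation_vapor_pressure(0.0), 6.112, rel_tol=1e-9)

def test_monotone_in_temperature():
    assert saturation_vapor_pressure(20.0) > saturation_vapor_pressure(10.0)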
[Application of Stata software to test heterogeneity in meta-analysis method].
Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong
2008-07-01
To introduce the application of Stata software to heterogeneity testing in meta-analysis, a data set was set up according to the example in the study, and the corresponding commands in Stata 9 were applied to test the example. The methods used were the Q-test and the I² statistic attached to the fixed-effect-model forest plot, the H statistic, and the Galbraith plot. The existence of heterogeneity among studies could be detected by the Q-test and the H statistic, and the degree of heterogeneity could be quantified by the I² statistic. The outliers that were the sources of the heterogeneity could be spotted in the Galbraith plot. Heterogeneity testing in meta-analysis can thus be completed simply and quickly with these four methods in Stata. Among the four methods, the H and I² statistics are the more robust measures, and the outliers responsible for heterogeneity can be seen most clearly in the Galbraith plot.
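Because the statistics named above follow directly from their standard definitions, a hedged Python equivalent is easy to sketch (illustrative effect sizes, not the article's example data):

import numpy as np

def heterogeneity(effects, variances):
    # Fixed-effect (inverse-variance) pooling
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    h = np.sqrt(q / df)                            # H; values above 1 suggest heterogeneity
    i2 = 0.0 if q <= df else (q - df) / q * 100.0  # I^2: percent of variation beyond chance
    return q, h, i2

# Illustrative study effects (e.g. log odds ratios) with their variances
print(heterogeneity([0.20, 0.55, 0.80, 0.35], [0.04, 0.05, 0.06, 0.03]))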
Open core control software for surgical robots.
Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-05-01
Today, patients and doctors in the operating room are surrounded by many medical devices resulting from recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but these surgical robots are not yet open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" becomes possible when surgical robots collaborate with various kinds of sensors, navigation systems and so on. At the same time, most academic software developments for surgical robots are "home-made" in their research institutions and not open to the public. Therefore, open source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies based on actuators, sensors and various kinds of internal devices, so it cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation despite asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not part of the Open Core Control system itself; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move outside the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture are described.
The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices, and it is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. Following these regulations requires a self-test environment; therefore, a test environment is now under development to test various kinds of interference in the operating room, such as noise from an electrosurgical knife, taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions to the field.
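The real Open Core Control sources are available online; the following is only a schematic sketch, with invented class and method names, of the two ideas the abstract emphasises: hiding hardware-dependent parts behind an interface so the core is reusable, and enforcing a virtual-fixture-style accessible region on commands.

from abc import ABC, abstractmethod

class ActuatorInterface(ABC):
    # Hardware-dependent layer, reimplemented for each robot
    @abstractmethod
    def command_joint(self, joint: int, position: float) -> None: ...

class SimulatedArm(ActuatorInterface):
    def __init__(self):
        self.joints = {}
    def command_joint(self, joint, position):
        self.joints[joint] = position   # a real driver would write to motor controllers

class CoreController:
    # Hardware-independent core, reused unchanged across robot types
    def __init__(self, actuators, accessible):
        self.actuators = actuators
        self.accessible = accessible    # per-joint (low, high) region, e.g. from navigation
    def move(self, joint, position):
        lo, hi = self.accessible[joint]
        # Virtual-fixture-style guard: never command outside the accessible region
        self.actuators.command_joint(joint, min(max(position, lo), hi))

arm = SimulatedArm()
CoreController(arm, {0: (-1.57, 1.57)}).move(0, 3.0)
print(arm.joints)                       # {0: 1.57}: the command was clamped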
Flight software development for the isothermal dendritic growth experiment
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Winsa, Edward A.; Glicksman, Martin E.
1989-01-01
The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.
Flight software development for the isothermal dendritic growth experiment
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Winsa, Edward A.; Glicksman, M. E.
1990-01-01
The Isothermal Dendritic Growth Experiment (IDGE) is a microgravity materials science experiment scheduled to fly in the cargo bay of the shuttle on the United States Microgravity Payload (USMP) carrier. The experiment will be operated by real-time control software which will not only monitor and control onboard experiment hardware, but will also communicate, via downlink data and uplink commands, with the Payload Operations Control Center (POCC) at NASA George C. Marshall Space Flight Center (MSFC). The software development approach being used to implement this system began with software functional requirements specification. This was accomplished using the Yourdon/DeMarco methodology as supplemented by the Ward/Mellor real-time extensions. The requirements specification in combination with software prototyping was then used to generate a detailed design consisting of structure charts, module prologues, and Program Design Language (PDL) specifications. This detailed design will next be used to code the software, followed finally by testing against the functional requirements. The result will be a modular real-time control software system with traceability through every phase of the development process.
Orbit attitude processor. STS-1 bench program verification test plan
NASA Technical Reports Server (NTRS)
Mcclain, C. R.
1980-01-01
A plan for the static verification of the STS-1 ATT PROC ORBIT software requirements is presented. The orbit version of the SAPIENS bench program is used to generate the verification data. A brief discussion of the simulation software and flight software modules is presented along with a description of the test cases.
1976-11-01
system. b. Read different program configurations to reconfigure the software during flight. c. Write Digital Integrated Test System (DITS) results ... associated with a Minor Cycle Event must be Unlatched. The sole difference between a Latched and an Unlatched Condition is that upon the Scheduling ... Table. Furthermore, the block of pointers for one Minor Cycle may be wholly contained within the block of pointers for a different Minor Cycle. For ...
ERIC Educational Resources Information Center
Khan, Muhammad Ahmad; Gorard, Stephen
2012-01-01
We report here the overall results of a cluster randomised controlled trial of the use of computer-aided instruction with 672 Year 7 pupils in 23 secondary school classes in the north of England. A new piece of commercial software, claimed on the basis of publisher testing to be effective in improving reading after just six weeks of use in the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulliver, John S.
2015-03-01
This project provides a conventional hydropower turbine aeration test-bed for computational routines and software tools aimed at improving environmental mitigation technologies for conventional hydropower systems. In achieving this goal, we have partnered with Alstom, a global leader in energy technology development and United States power generation, with additional funding from the Initiative for Renewable Energy and the Environment (IREE) and the College of Science and Engineering (CSE) at the University of Minnesota (UMN).
Program Manager: Journal of the Defense Systems Management College, Volume 17, Number 3
1988-06-01
... modernizing plants and processes, streamlining management, pooling trade associations ... We have established a network ... What does "quality" mean? ... an opportunity to reflect on the organizational climate ... Network building may be the first opportunity for some hierarchical managers to ... the result of the soaring cost of software enhancements ... Software Performance Testing ... This difference in hardware and software ...
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Tank Monitor and Control System (TMACS) Rev 11.0 Acceptance Test Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
HOLM, M.J.
The purpose of this document is to describe the tests performed to validate Revision 11 of the Tank Monitor and Control System (TMACS) and to verify that the software functions as intended by design. This document covers the software portion of TMACS. The tests will be performed on the development system. The software to be tested comprises the TMACS knowledge bases (KB) and the I/O driver/services. The development system will not be talking to field equipment; instead, the field equipment is simulated using emulators or multiplexers in the lab.
Computational Simulations and the Scientific Method
NASA Technical Reports Server (NTRS)
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
An Overview of the Guided Parafoil System Derived from X-38 Experience
NASA Technical Reports Server (NTRS)
Stein, Jenny M.; Madsen, Chris M.; Strahan, Alan L.
2005-01-01
The NASA Johnson Space Center built a 4200 sq ft parafoil for the U.S. Army Natick Soldier Center to demonstrate autonomous flight using a guided parafoil system to deliver 10,000 lbs of useable payload. The parafoil's design was based upon that developed during the X-38 program. The drop test payload consisted of a standard 20-foot Type V airdrop platform, a standard 12-foot weight tub, a 60 ft drogue parachute, a 4200 sq ft parafoil, an instrumentation system, and a Guidance, Navigation, and Control (GN&C) system. Instrumentation installed on the load was used to gather data to validate simulation models and preflight loads predictions and to perform post-flight trajectory and performance reconstructions. The GN&C system, developed during NASA's X-38 program, consisted of a flight computer, modems for uplink commands and downlink data, a compass, a laser altimeter, and two winches. The winches were used to steer the parafoil and to perform the dynamic flare maneuver for a soft landing. The laser was used to initiate the flare. The GN&C software was originally provided to NASA by the European Space Agency. NASA incorporated further software refinements based upon the X-38 flight test results. Three full-scale drop tests were conducted, with the third performed during the Precision Airdrop Technology Conference and Demonstration (PATCAD) at the U.S. Army Yuma Proving Ground (YPG) in November of 2003. For the PATCAD demonstration, the parafoil and GN&C software and hardware performed well, concluding with a good flare and the smallest miss distance ever experienced in NASA's parafoil drop test program. This paper describes the 4200 sq ft parafoil system, simulation results, and the results of the drop tests.
SAGA: A project to automate the management of software production systems
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Laliberte, D.; Render, H.; Sum, R.; Smith, W.; Terwilliger, R.
1987-01-01
The Software Automation, Generation and Administration (SAGA) project is investigating the design and construction of practical software engineering environments for developing and maintaining aerospace systems and applications software. The research includes the practical organization of the software lifecycle, configuration management, software requirements specifications, executable specifications, design methodologies, programming, verification, validation and testing, version control, maintenance, the reuse of software, software libraries, documentation, and automated management.
The anatomy of E-Learning tools: Does software usability influence learning outcomes?
Van Nuland, Sonya E; Rogers, Kem A
2016-07-08
Reductions in laboratory hours have increased the popularity of commercial anatomy e-learning tools. It is critical to understand how the functionality of such tools can influence the mental effort required during the learning process, also known as cognitive load. Using dual-task methodology, two anatomical e-learning tools were examined to determine the effect of their design on cognitive load during two joint learning exercises. A.D.A.M. Interactive Anatomy is a simplistic, two-dimensional tool that presents like a textbook, whereas Netter's 3D Interactive Anatomy has a more complex three-dimensional usability that allows structures to be rotated. It was hypothesized that longer reaction times on an observation task would be associated with the more complex anatomical software (Netter's 3D Interactive Anatomy), indicating a higher cognitive load imposed by the anatomy software, which would result in lower post-test scores. Undergraduate anatomy students from Western University, Canada (n = 70) were assessed using a baseline knowledge test, Stroop observation task response times (a measure of cognitive load), mental rotation test scores, and an anatomy post-test. Results showed that reaction times and post-test outcomes were similar for both tools, whereas mental rotation test scores were positively correlated with post-test values when students used Netter's 3D Interactive Anatomy (P = 0.007), but not when they used A.D.A.M. Interactive Anatomy. This suggests that a simple e-learning tool, such as A.D.A.M. Interactive Anatomy, is as effective as more complicated tools, such as Netter's 3D Interactive Anatomy, and does not academically disadvantage those with poor spatial ability. Anat Sci Educ 9: 378-390. © 2015 American Association of Anatomists.
DOWNSTREAM-WATER-LEVEL CONTROL TEST RESULTS ON THE WM LATERAL CANAL
USDA-ARS?s Scientific Manuscript database
On steep canals, distant downstream water-level control can be challenging. SacMan (Software for Automated Canal Management) was developed, in part, to test various distant downstream water level controllers. It was implemented on the WM canal of the Maricopa Stanfield Irrigation and Drainage Distri...
Bourgarit, A; Mallet, H-P; Keshtmand, H; De Castro, N; Rambeloarisoa, J; Fain, O; Antoun, F; Picard, C; Rocher, G; Che, D; Farge, D
2009-10-01
The impact of the TB-info software on the care of patients treated with an antituberculosis regimen (ATT) was assessed in a cohort study of patients with tuberculosis who received an ATT in 2004 in two hospitals and five medical centres in Paris. Follow-up was implemented with the TB-info software, and data were compared to those of the 1999-2003 cohort. Two hundred and nine ATT courses were initiated in 2004, with a mean duration of 7.2 months. Demographic and clinical data reflected the precariousness of this population: 79% were foreign-born, 25% lived in institutions and half had no or unusual health insurance. Compared to the previous cohort, viral co-infections were tested for in more than 80% of cases, showing association with HIV, HBV or HCV in 11, 10 and 5% of the patients, respectively. Twenty-one patients were lost to follow-up (11%), and 76% of the smear-positive pulmonary tuberculosis therapies were declared successful, but only 34% were declared cured by the WHO criteria. Analysis of the data obtained with the TB-info software showed an improvement in the care of tuberculosis patients, with more co-infections tested for and fewer patients lost to follow-up. These results confirm the usefulness of this software for patient care and for the assessment of physicians' practice in France.
Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile
NASA Astrophysics Data System (ADS)
Hoľko, Michal; Stacho, Jakub
2014-12-01
The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. The two programs differ in how they create numerical models, how they model the interface between the pile and soil, and in the constitutive material models they use. The analyses have been prepared in the form of a parametric study, in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software offers advanced material models as well as the modelling of the impact of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with more than 95% accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows large structural systems to be modelled together with the foundation system.
Liu, Zhijian; Liu, Kejun; Li, Hao; Zhang, Xinyu; Jin, Guangya; Cheng, Kewei
2015-01-01
Measurements of heat collection rate and heat loss coefficient are crucial for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, conventional measurement requires expensive detection devices and a series of complicated procedures. To simplify the measurement and reduce the cost, software based on artificial neural networks for measuring the heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters was developed. Using multilayer feed-forward neural networks with the back-propagation algorithm, we developed and tested our program on the basis of 915 measured samples of water-in-glass evacuated tube solar water heaters. This artificial neural network-based software automatically obtains accurate heat collection rate and heat loss coefficient from parameters easily acquired with portable test instruments, including tube length, number of tubes, tube center distance, water mass in the tank, collector area, angle between tubes and ground, and final temperature. Our results show that this software (on both personal computer and Android platforms) is efficient and convenient for predicting the heat collection rate and heat loss coefficient, owing to its low root mean square errors in prediction. The software can be downloaded from http://t.cn/RLPKF08.
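A hedged sketch of the kind of feed-forward network the authors describe, using scikit-learn on synthetic stand-in data rather than their 915 measured samples:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic stand-ins for the seven inputs named above (tube length, number of
# tubes, tube center distance, water mass, collector area, angle, final temperature)
X = rng.uniform(size=(915, 7))
y = X @ rng.uniform(size=7) + 0.05 * rng.normal(size=915)   # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# A multilayer feed-forward network trained by back-propagated gradients
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                     random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE: {rmse:.4f}")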
A CMMI-based approach for medical software project life cycle study.
Chen, Jui-Jen; Su, Wu-Chen; Wang, Pei-Wen; Yen, Hung-Chi
2013-01-01
In terms of medical techniques, Taiwan has gained international recognition in recent years. However, the medical information system industry in Taiwan is still at a developing stage compared with the software industries in other nations. In addition, systematic development processes are indispensable elements of software development; they help developers increase their productivity and efficiency and avoid unnecessary risks arising during the development process. Thus, this paper presents an application of Light-Weight Capability Maturity Model Integration (LW-CMMI) to the Chang Gung Medical Research Project (CMRP) in the nuclear medicine field. This application integrates the user requirements, system design and testing of the software development processes into a three-layer (Domain, Concept and Instance) model, expresses them in structural System Modeling Language (SysML) diagrams, and converts part of the manual effort necessary for project management maintenance into computational effort, for example (semi-)automatic delivery of traceability management. In this application, it supports establishing the artifacts of the "requirement specification document", "project execution plan document", "system design document" and "system test document", and can deliver a prototype of a lightweight project management tool for the nuclear medicine software project. The results of this application can serve as a reference for other medical institutions in developing medical information systems and supporting project management to achieve the aim of patient safety.
Workstation-Based Avionics Simulator to Support Mars Science Laboratory Flight Software Development
NASA Technical Reports Server (NTRS)
Henriquez, David; Canham, Timothy; Chang, Johnny T.; McMahon, Elihu
2008-01-01
The Mars Science Laboratory developed the WorkStation TestSet (WSTS) to support flight software development. The WSTS is a non-real-time flight avionics simulator that is designed to be completely software-based and to run on a workstation-class Linux PC. This provides flight software developers with their own virtual avionics testbed and allows device-level and functional software testing when hardware testbeds are either not yet available or have limited availability. The WSTS has successfully off-loaded many flight software development activities from the project testbeds. As of the writing of this paper, the WSTS has averaged an order of magnitude more usage than the project's hardware testbeds.
Integrated testing and verification system for research flight software design document
NASA Technical Reports Server (NTRS)
Taylor, R. N.; Merilatt, R. L.; Osterweil, L. J.
1979-01-01
The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced lifecycle costs as the foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
78 FR 1162 - Cardiovascular Devices; Reclassification of External Cardiac Compressor
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... safety and electromagnetic compatibility; for devices containing software, software verification, validation, and hazard analysis must be performed; for electrical components, appropriate analysis and testing must validate electrical safety and electromagnetic compatibility ...
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2017-05-31
Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of the biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open source and commercial solutions. Each package includes a set of unique algorithms for the different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software
GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis
2013-01-01
Objective: This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal systems after preparation of simulated root canals; and 2) to compare the ability of new software (Regeemy) to superpose and subtract images with that of Adobe Photoshop. Material and Methods: Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments, and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using ImageJ. Five acrylic-resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software package individually. Student's t-test for paired samples was used to compare the ability of each software package to superpose and subtract images from one rotary system at a time. Results: The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Conclusion: Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is feasible software for superposing and subtracting images and appears to be an alternative to Adobe Photoshop. PMID:24212994
Dynamic Modelling with "MLE-Energy Dynamic" for Primary School
NASA Astrophysics Data System (ADS)
Giliberti, Enrico; Corni, Federico
In recent years, simulation and modelling have played a growing role in science education. In primary school, however, the main use of software is simulation, owing to the lack of modelling software tools specially designed to meet the needs of primary education. In particular, primary school teachers need to use simulation in a framework that is both consistent and simple enough to be understandable by children.
VirtualDose: a software for reporting organ doses from CT for adult and pediatric patients
NASA Astrophysics Data System (ADS)
Ding, Aiping; Gao, Yiming; Liu, Haikuan; Caracappa, Peter F.; Long, Daniel J.; Bolch, Wesley E.; Liu, Bob; Xu, X. George
2015-07-01
This paper describes the development and testing of VirtualDose, a software package for reporting organ doses for adult and pediatric patients who undergo x-ray computed tomography (CT) examinations. The software is based on a comprehensive database of organ doses derived from Monte Carlo (MC) simulations involving a library of 25 anatomically realistic phantoms that represent patients of different ages, body sizes, body masses, and stages of pregnancy. Models of the GE LightSpeed Pro 16 and Siemens SOMATOM Sensation 16 scanners were carefully validated for use in MC dose calculations. The software framework is designed around the 'software as a service (SaaS)' delivery concept, under which multiple clients can access the web-based interface simultaneously from any computer without having to install software locally. The RESTful web service API also allows third-party picture archiving and communication system software to integrate seamlessly with VirtualDose's functions. Software testing showed that VirtualDose was compatible with numerous operating systems, including Windows, Linux, Apple OS X, and mobile and portable devices. The organ doses from VirtualDose were compared against those reported by CT-Expo and ImPACT, two dosimetry tools based on stylized pediatric and adult patient models known to be anatomically simple. The organ doses reported by VirtualDose differed from those reported by CT-Expo and ImPACT by as much as 300% in some of the patient models. These results confirm the conclusion from past studies that differences in anatomical realism between stylized and voxel phantoms cause significant discrepancies in CT dose estimates.
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex, owing to the increasing number of features as well as the high demand for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full test coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive workflow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus, in particular the attitude and orbit control software. The behavioral models for the subsystem are developed, formally verified, and optimized.
2014-01-01
Background: According to the latest amendment of the Medical Device Directive, standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable, which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods: The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed, and scales documentation and testing according to criticality. The required processes were established for the pre-existing decision-support software FlashDumpComparator (FDC), used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard entails compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results: The EN 62304 standard is difficult to implement without proper tools, so open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests, and of the control measures to the proof of execution, were generated, the FDC was released as an accessory to the HIT facility. Conclusions: The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects. It has been demonstrated that standards-compliant development of small and medium-sized medical software can be carried out by a small team with limited resources in a clinical setting. This is of particular relevance as the upcoming revision of the Medical Device Directive is expected to harmonize and tighten the current legal requirements for all European in-house manufacturers. PMID:24655818
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) composed of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D
2018-01-01
Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, and incorporating novel analysis features, providing a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download. Contact: mdritchie@geisinger.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
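The parallelisation is the paper's contribution; the underlying state-vector idea, which also explains the memory figures quoted, fits in a few lines of Python (a toy illustration, not the authors' code):

import numpy as np

def apply_gate(state, gate, target, n_qubits):
    # View the 2^n amplitudes as an n-dimensional array with one axis per qubit,
    # contract the target axis with the 2x2 gate, then restore the axis order
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                   # start in |000>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):                               # build the uniform superposition
    state = apply_gate(state, hadamard, q, n)
print(np.abs(state) ** 2)                        # eight equal probabilities of 1/8

The 2^n amplitude count is what drives the quoted memory numbers: at 16 bytes per complex amplitude, 36 qubits already require about 1 TB.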
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagar, M; Friesen, S; Mannarino, E
2014-06-01
Purpose: Collision between the gantry and the couch or patient during radiotherapy is not a common concern for conventional RT (static fields or arcs). With the increase in the application of stereotactic planning techniques to the body, collisions have become a greater concern. Non-coplanar beam geometry is desirable in stereotactic treatments in order to achieve sharp gradients and high conformality. Non-coplanar geometry is less intuitive in the body and often requires an iterative process of planning and dry runs to guarantee deliverability. Methods: Purpose-written software was developed to predict the likelihood of collision between the head of the gantry and the couch, patient, or stereotactic body frame. Using the DICOM plan and structure set exported by the treatment planning system, this software is able to predict the possibility of a collision. Given the plan's isocenter, treatment geometry, and exterior contours, the software is able to determine whether a particular beam/arc is clinically deliverable or whether collision is imminent. Results: The software was tested on real-world treatment plans with untreatable beam geometry. Both static non-coplanar and VMAT plans were tested. Of these, the collision prediction software could identify all as having potentially problematic geometry. Re-plans of the same cases were also tested and validated as deliverable. Conclusion: This software is capable of giving a good initial indication of deliverability for treatment plans that utilize complex geometry (SBRT) or have lateral isocenters. This software is not intended to replace the standard pre-treatment QA dry run. Its effectiveness is limited to those portions of the patient and immobilization devices that have been included in the simulation CT and contoured in the planning system. It will, however, aid the planner in reducing the iterations required to create the complex treatment geometries necessary to achieve ideal conformality and organ sparing.
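The purpose-written software itself is not described in detail, but a toy version of the geometric test, sampling gantry-head positions along an arc and checking clearance against surface points from the exported structure set, might look like this (all geometry, dimensions, and names are illustrative assumptions):

import numpy as np

def gantry_head_position(angle_deg, isocenter, sad=1000.0, head_clearance=350.0):
    # IEC-like fixed axes (x lateral, z up); the gantry rotates about the y axis,
    # and the head face sits (sad - head_clearance) mm from the isocenter
    theta = np.radians(angle_deg)
    r = sad - head_clearance
    return isocenter + np.array([r * np.sin(theta), 0.0, r * np.cos(theta)])

def colliding_angles(angles, isocenter, surface_points, margin=20.0):
    # Flag any sampled angle where the head face comes within `margin` mm
    # of the contoured patient/couch surface cloud
    hits = []
    for a in angles:
        head = gantry_head_position(a, isocenter)
        if np.min(np.linalg.norm(surface_points - head, axis=1)) < margin:
            hits.append(a)
    return hits

# Stand-in surface cloud (mm); in practice these points come from the contours
cloud = np.random.default_rng(2).normal(scale=120.0, size=(5000, 3))
print(colliding_angles(range(0, 360, 10), np.zeros(3), cloud))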
Czwoydzinski, J; Girnus, R; Sommer, A; Heindel, W; Lenzen, H
2011-09-01
Physical-technical quality assurance is one of the essential tasks of the National Reference Centers in the German Breast Cancer Screening Program. For this purpose the mammography units are required to transfer the measured values of the constancy tests on a daily basis, and all phantom images created for this purpose on a weekly basis, to the reference centers. This is a serious logistical challenge. To meet these requirements, we developed an innovative software tool. By the end of 2005, we had already developed web-based software (MammoControl) allowing the transmission of constancy test results via entry forms. For automatic analysis and transmission of the phantom images, we then introduced an extension (MammoControl DIANA), based on Java, Java Web Start, the NetBeans Rich Client Platform, the PixelMed Java DICOM Toolkit and the ImageJ library. MammoControl DIANA was designed to run locally in the mammography units, allowing automated on-site image analysis. Both results and compressed images can then be transmitted to the reference center. We developed analysis modules for the daily and monthly constancy tests and additionally for a homogeneity test. The software we developed makes measurement results, phantom images, and DICOM header data immediately available in all reference centers. This allows both targeted guidance and short response times in the case of errors. We achieved a consistent IT-based evaluation with standardized tools for the entire screening program in Germany. © Georg Thieme Verlag KG Stuttgart · New York.
STGT program: Ada coding and architecture lessons learned
NASA Technical Reports Server (NTRS)
Usavage, Paul; Nagurney, Don
1992-01-01
STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, A.; Chadwick, T.; Makhlouf, M.
This paper deals with the effects of various solidification variables, such as cooling rate, temperature gradient, and solidification rate, on the microstructure and shrinkage defects in aluminum alloy (A356) castings. The effects are first predicted using commercial solidification modeling software and then verified experimentally. For this work, the authors consider a rectangular bar cast in a sand mold. Simulation is performed using SIMULOR, a finite-volume-based casting simulation program. Microstructural variables such as dendritic arm spacing (DAS) and defects (percentage porosity) are calculated from the temperature fields, cooling rate, solidification time, etc., predicted by the computer software. The same variables are then determined experimentally in the foundry. The test piece is cast in a resin (sodium silicate) bonded sand mold, and the DAS and porosity are measured using scanning electron microscopy and image analysis. The predictions from the software are compared with the experimental results, which are presented and critically analyzed to determine the quality of the predictions. The usefulness of commercial solidification modeling software as a tool for the foundry is also discussed.